Hacker News
consp | 41 days ago | on: A small number of samples can poison LLMs of any s...
This is the definition of training the model on its own output. Apparently that is all ok now.
baby | 41 days ago
I mean you're supposed to use RAG to avoid hallucinations
MagicMoonlight | 41 days ago
Yeah they call it “synthetic data” and wonder why their models are slop now