
I'm just an ignorant bystander, but is the training dataset the ground truth?

Kind of feels like calling the fruit you put into the blender the ground truth, but the meaning of the apple is kinda lost in the soup.

Now I'm not a hater by any means. I'm just not sure this is the correct way to define the structured "meaning" (for lack of a better word) that we see come out of LLM complexity. Training is, I thought, a very lossy operation, so the structure of the inputs may or (more likely) may not yield a like-structured output.
