
The statement that correctness plays no role in the training process is objectively false. It's untrue for text LLMs, and even more so for code LLMs. What would be correct is that the training process and the architecture of LLMs cannot guarantee correctness.
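
A minimal sketch (not from the thread) of one way a correctness signal can enter code-LLM training: execution-based filtering, where generated solutions are run against unit tests and only passing samples are kept for fine-tuning. The function names and pipeline are illustrative assumptions, not a description of any specific model's training.

    import os
    import subprocess
    import tempfile

    def passes_tests(solution_code: str, test_code: str, timeout_s: int = 10) -> bool:
        """Execute a candidate solution together with its tests; True only if they pass."""
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(solution_code + "\n\n" + test_code)
            path = f.name
        try:
            result = subprocess.run(["python", path], capture_output=True, timeout=timeout_s)
            return result.returncode == 0
        except subprocess.TimeoutExpired:
            return False
        finally:
            os.unlink(path)

    def filter_training_samples(samples):
        """Keep only (prompt, solution, tests) records whose solution passes its tests."""
        return [s for s in samples if passes_tests(s["solution"], s["tests"])]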


> The statement that correctness plays no role in the training process is objectively false.

This statement is objectively false.


I'm just an AI researcher, what do I know?


> I'm just an AI researcher, what do I know?

Me too! What do I know?

(at least now we know where the push for this dreadful policy is coming from)


The whole purpose of RLVR alignment (reinforcement learning with verifiable rewards) is to ensure objectively correct outputs.
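
A minimal sketch (not from the thread) of the verifiable-reward idea behind RLVR: an automatic checker grades each sampled output, and that binary reward is what the RL objective optimizes, so correctness is exactly the training signal. The exact-match verifier below is a toy assumption; real setups use unit tests, proof checkers, or answer graders.

    from typing import Callable, List

    def verifiable_reward(output: str, verify: Callable[[str], bool]) -> float:
        """Binary reward: 1.0 if the automatic verifier accepts the output, else 0.0."""
        return 1.0 if verify(output) else 0.0

    def exact_match_verifier(expected: str) -> Callable[[str], bool]:
        """Toy verifier for tasks with a single known answer (e.g., a math problem)."""
        return lambda output: output.strip() == expected

    # Rewards like these feed a policy-gradient update, pushing the model toward
    # outputs the verifier confirms as correct.
    rewards: List[float] = [
        verifiable_reward(candidate, exact_match_verifier("42"))
        for candidate in ["42", "41", " 42 "]
    ]
    print(rewards)  # [1.0, 0.0, 1.0]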



