Did you read the example above? Do you disagree that the LLM provided a correct explanation of why it answered as it did?

> They're giant Mad Libs machines: given these surrounding words, fill in this blank with whatever is statistically most likely. LLMs don't model reality in any way.

Not sure why you think this is incompatible with the statement you disagreed with.
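
For what it's worth, the "Mad Libs" picture itself is easy to make concrete. A toy sketch (invented probabilities, nothing like a real model's internals):

    # Toy "Mad Libs" model: for a fixed context, return the
    # statistically most likely filler. All probabilities are made up.
    fillers = {
        "the cat sat on the ___": {"mat": 0.62, "floor": 0.21, "moon": 0.01},
    }

    def fill_blank(context):
        candidates = fillers[context]
        return max(candidates, key=candidates.get)

    print(fill_blank("the cat sat on the ___"))  # -> mat

Nothing in that mechanism rules out the most likely filler also being the true one, which is the point of contention here.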



> Do you disagree that the LLM provided a correct explanation of why it answered as it did?

Yes, I do. An LLM replies with the most likely string of tokens, which may or may not correspond to the correct or reasonable string of tokens, depending on how the stars align. In this case, the statistically most likely explanation the LLM produced just happened to coincide with the correct one.
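
To make that concrete, here is a toy sketch (invented probabilities, not any real model): the mechanism is identical in both calls below, and only the underlying statistics decide whether the output happens to be true.

    # Toy next-token picker: always returns the highest-probability
    # continuation. Correctness depends entirely on which continuation
    # got the most probability mass. All numbers are made up.
    next_token_probs = {
        "The capital of France is": {"Paris": 0.91, "Lyon": 0.04},
        "The capital of Australia is": {"Sydney": 0.55, "Canberra": 0.38},
    }

    def most_likely(context):
        probs = next_token_probs[context]
        return max(probs, key=probs.get)

    print(most_likely("The capital of France is"))     # Paris (happens to be right)
    print(most_likely("The capital of Australia is"))  # Sydney (likely, but wrong)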


> In this case, the statistically most likely explanation the LLM produced just happened to coincide with the correct one.

I claim that case is not as uncommon as people in this thread seem to think.



