
Some of your points are lucid, some are not. For example, an LLM does not "work out" a math equation by anything approaching reasoning; rather, it returns the string that its training makes "most likely" as a continuation of the prompt. Depending on the training data and the question being asked, that output can be accurate or absurd.

That's not of the same nature as reasoning your way to an answer.
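To make the distinction concrete, here is a deliberately oversimplified Python sketch. It is not how any real LLM works internally (real models predict tokens with a learned neural network, not a lookup table); the prompts, counts, and names like toy_model are invented for illustration. The only point is the contrast between returning the most frequent continuation seen in training and actually computing the answer:

    # Deliberately crude toy: "most frequent continuation" vs. real computation.
    from collections import Counter

    # Hypothetical, made-up "training" counts for illustration only.
    seen_continuations = {
        "2 + 2 =": Counter({"4": 950, "5": 30, "22": 20}),
        "17 * 23 =": Counter({"381": 6, "391": 5}),  # thin, noisy coverage
    }

    def toy_model(prompt: str) -> str:
        # Returns the most common continuation seen in "training", correct or not.
        counts = seen_continuations.get(prompt)
        return counts.most_common(1)[0][0] if counts else "<unknown>"

    def actually_compute(prompt: str) -> str:
        # Evaluates the arithmetic to the left of the "=".
        return str(eval(prompt.rstrip("= ").strip()))  # toy demo only, not for untrusted input

    for p in ["2 + 2 =", "17 * 23 ="]:
        print(p, "toy model:", toy_model(p), "| computed:", actually_compute(p))

With heavy coverage of "2 + 2" the frequency-based answer happens to be right; with thin coverage of "17 * 23" it is confidently wrong, which is exactly the accurate-or-absurd behavior described above.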




