
It too isn't rigorously defined. We're very much at the hand-waving "I know it when I see it" [1] stage for all of these terms.

[1] https://en.wikipedia.org/wiki/I_know_it_when_I_see_it





I can't speak to academic rigor, but to my understanding it's quite clear and specific. Reasoning, simply put, is the ability to reach a conclusion by analyzing information with a logic-derived deterministic algorithm.
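
To illustrate that definition with a toy example (my own sketch, nothing rigorous): deterministic forward chaining over a single modus ponens rule, which reaches the same conclusion from the same premises on every run. The fact and rule names are made up for the example.

    # Toy illustration of "reasoning" as a deterministic, logic-derived procedure:
    # repeatedly apply modus ponens (given P and P -> Q, conclude Q)
    # until no new facts follow.
    facts = {"socrates_is_a_man"}
    rules = [("socrates_is_a_man", "socrates_is_mortal")]  # premise -> conclusion

    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(facts)  # {'socrates_is_a_man', 'socrates_is_mortal'} -- identical every run

Run it twice and you get the identical result; that determinism is what the definition leans on.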

* Humans are not deterministic.

* Humans that make mistakes are still considered to be reasoning.

* Deterministic algorithms have limitations, like Gödel incompleteness, which humans seem able to overcome, so presumably we'd expect reasoning to be able to overcome such limits too.


1) I didn't say we were, but when someone is described as reasonable, or as acting with reason, that implies deterministic/algorithmic thinking. When we're not being deterministic, we're not being reasonable.

2) Yes; to reason does not imply being infallible. The deterministic algorithms we follow are usually flawed.

3) I can't speak much to that, but I speculate that if "AI" could do reasoning, it would be a much more complex construct, one that uses LLMs (among other components) as tools and variables, much as we do.
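
For whatever it's worth, here's a hand-wavy sketch of that kind of construct, purely illustrative: call_llm and verify are made-up placeholders, not any real framework or API, and the LLM is just one fallible subroutine inside an ordinary deterministic program.

    # Hypothetical sketch (not anyone's real API): the LLM is treated as an
    # untrusted oracle; the surrounding control flow is deterministic.
    def call_llm(prompt: str) -> str:
        # Stand-in for whatever model call you'd actually make.
        return "2 + 2 = 4" if "arithmetic" in prompt else "no idea"

    def verify(candidate: str) -> bool:
        # Stand-in for a deterministic checker (unit test, proof checker, etc.).
        return candidate.endswith("= 4")

    def reason(question: str, max_attempts: int = 3) -> str | None:
        # The loop itself is deterministic; an LLM answer is only accepted
        # once it passes the checker.
        for attempt in range(max_attempts):
            candidate = call_llm(f"{question} (attempt {attempt + 1})")
            if verify(candidate):
                return candidate
        return None

    print(reason("an arithmetic question"))  # 2 + 2 = 4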



