
We also cap how long we let reasoning LLMs think. OpenAI researchers have already discussed models that, when allowed to reason for hours, could solve much harder problems.

But regardless, I feel this conversation is useless. You are clearly motivated not to believe LLMs are reasoning, because you are 1) only looking at crappy old models as evidence about new models, which is nonsense, and 2) inventing implausible arguments about how they could still just be memorising answers. Even if they had memorised digit sequences, they would still have to compose those sequences to produce the exact right answer to 8-digit multiplication in >90% of cases. That requires applying an algorithm, i.e. reasoning (see the sketch below).
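
To make that concrete, here's a minimal sketch of schoolbook long multiplication in Python (not from any model, just an illustration): getting >90% of random 8-digit products exactly right means correctly composing every digit-by-digit step, and a lookup of memorised fragments doesn't give you that.

    # Minimal sketch of schoolbook long multiplication: one partial
    # product per digit of b, shifted by place value and summed.
    def long_multiply(a: int, b: int) -> int:
        total = 0
        for place, d in enumerate(int(ch) for ch in reversed(str(b))):
            total += a * d * 10 ** place  # one schoolbook row, shifted
        return total

    # Every digit and carry must line up for the answer to be exact.
    assert long_multiply(73_614_982, 90_217_356) == 73_614_982 * 90_217_356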

> only looking at crappy old models

Let me repeat this: it was a newly trained, specialized model.

I'm ignoring the other rants.


They did not use modern techniques, so the result is meaningless as evidence about current models.

And that's before noting that modern frontier LLMs can be demonstrated to do this task directly, which is an existence proof in and of itself. You can verify it yourself:
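
Here's a hypothetical harness for doing so. It assumes the openai Python client with an API key in your environment, and the model name is only illustrative, not a claim about which model anyone tested:

    # Hypothetical harness: score a chat model on random 8-digit products.
    # Assumes the `openai` package and OPENAI_API_KEY are configured;
    # "gpt-4o" is an illustrative model name.
    import random, re
    from openai import OpenAI

    client = OpenAI()

    def ask_product(a: int, b: int):
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user",
                       "content": f"What is {a} * {b}? Answer with only the number."}],
        )
        m = re.search(r"-?[\d,]+", resp.choices[0].message.content or "")
        return int(m.group().replace(",", "")) if m else None

    pairs = [(random.randint(10**7, 10**8 - 1), random.randint(10**7, 10**8 - 1))
             for _ in range(100)]
    correct = sum(ask_product(a, b) == a * b for a, b in pairs)
    print(f"{correct}/100 exact 8-digit products")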


I am not interested in this discussion anymore. Bye.

What a shame


