I've tried it out and found that some models can answer the question if it's phrased right. And pretty much all models get it right if you also spell the word out letter by letter, which works around the problem you pointed out.


Spelling it out works in my experience. So does asking it for a Python program to solve the problem.
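
For reference, a rough sketch of the kind of Python program I mean, assuming the question is of the "how many of letter X are in word Y" variety (the word and letter here are just an illustration):

    # Count a letter explicitly, character by character, instead of having
    # the model do it "in its head" over tokens.
    def count_letter(word: str, letter: str) -> int:
        count = 0
        for ch in word:
            if ch.lower() == letter.lower():
                count += 1
        return count

    print(count_letter("strawberry", "r"))  # -> 3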


Yeah, it does teach me more about how LLMs work on the inside when one can't answer a plain-English logic question like that, yet if I give it a code example it can execute it step by step and get the correct answer. It's clearly been trained on enough JS that I've watched even Kunoichi (an RP model, no less!) imagine executing a fairly complex reduce + arrow function step by step and arrive at the correct answer.
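
To give a sense of what that looks like, here's a rough Python analogue of that kind of snippet (the actual one was JS; functools.reduce with a lambda stands in for reduce + an arrow function, and the data is made up):

    from functools import reduce

    nums = [3, 1, 4, 1, 5, 9, 2, 6]

    # Fold the list into running stats: the kind of one-element-at-a-time
    # accumulation you can walk a model through step by step.
    stats = reduce(
        lambda acc, n: {
            "sum": acc["sum"] + n,
            "max": max(acc["max"], n),
            "evens": acc["evens"] + (1 if n % 2 == 0 else 0),
        },
        nums,
        {"sum": 0, "max": float("-inf"), "evens": 0},
    )

    print(stats)  # {'sum': 31, 'max': 9, 'evens': 3}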

I think it's specifically the counting part of these problems that current models are shaky on, and I imagine it's a training-data problem.



