
Attention compute scales as N^2 with input length, so feeding the model individual characters instead of tokens would just hurt effective context length compared to an optimal tokenization.
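
Rough back-of-envelope sketch of why (the numbers are illustrative assumptions: a BPE-style tokenizer averaging ~4 characters per token, and vanilla O(N^2) self-attention):

  # Compare attention cost for character-level vs subword input.
  text_chars = 4000                # a ~4k-character prompt
  chars_per_token = 4              # assumed BPE average
  n_subword = text_chars / chars_per_token   # ~1000 tokens
  n_char = text_chars                        # 4000 positions if one per character
  # Quadratic attention makes the character-level version ~16x more compute here.
  print((n_char ** 2) / (n_subword ** 2))    # -> 16.0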

If we wanted to solve that part of the Turing test, we could still have it paste the tokens into Python and count the letters in a hidden thinking trace, and otherwise focus on useful things. But fully solving the Turing test is basically pursuing a deception goal rather than building useful assistants; it's not really the goal of these systems outside of their use in North Korean scam bots and the like.
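
A minimal sketch of what that hidden tool call could look like (word and letter are just an example):

  # Count a letter exactly with code instead of token-level perception.
  word = "strawberry"
  print(word.count("r"))   # -> 3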

I still think it's useful to say we've essentially solved the Turing test even if there are these caveats about how it is optimized in practice.


