Hacker News

Cool experiment! My intuition suggests you would get a better result if you let the LLM generate tokens for a while before it gives you an answer. It could be another experiment to see what kind of instructions lead to better randomness. (And, extending this, whether those instructions also help humans generate random numbers better.)
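One way to compare such experiment conditions would be a simple uniformity check on the collected numbers. This is just a hypothetical sketch (the comment doesn't specify a metric): it computes a chi-square statistic by hand against a uniform distribution over ten buckets, so a batch skewed toward one "favourite" number scores higher than a balanced one.

```python
from collections import Counter

def chi_square_uniform(samples, k=10):
    """Chi-square statistic of `samples` against a uniform
    distribution over the buckets 0..k-1 (higher = less uniform)."""
    counts = Counter(samples)
    n = len(samples)
    expected = n / k
    return sum((counts.get(b, 0) - expected) ** 2 / expected
               for b in range(k))

# Illustrative batches: one heavily skewed, one perfectly balanced.
biased = [7] * 50 + [3] * 10
balanced = list(range(10)) * 6

print(chi_square_uniform(biased))    # large: far from uniform
print(chi_square_uniform(balanced))  # 0.0: exactly uniform
```

Running the same statistic on numbers collected with and without the warm-up instructions would give a rough, quantitative comparison.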

