Hacker News

It doesn't seem like the design of this experiment allows AIs to evolve novel strategy over time. I wonder if poker-as-text is similar to maths -- LLMs are unable to reason about the underlying reality.


You mean that they don’t have access to the opponents’ full behavior history?

It would be hilarious to allow table talk and see them trying to bluff and sway each other :D


I think by

> LLMs are unable to reason about the underlying reality

OP means that LLMs hallucinate 100% of the time with varying levels of confidence and have no concept of reality or ground truth.


Confidence? I think the word you’re looking for is ‘nonsense’


Make the entire chain of thought visible to each other and see if they evolve strategies for hiding things in their CoT


pardon my ignorance but how would you make them evolve?


I mean, LLMs have the same sorts of problems with

"Which poker hand is better: 7S8C or 2SJH"

as

"What is 77 + 19?"
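Both questions are trivial for a deterministic evaluator, which is the point of the comparison. A minimal sketch, comparing the two hands by high card only (a toy stand-in for real preflop equity; the card notation and ranking are assumptions for illustration):

```python
# Toy high-card comparison for two-card hands written like "7S8C"
# (rank then suit, twice). Real preflop strength is more nuanced,
# but with no improvement the J-high hand wins at showdown.
RANKS = "23456789TJQKA"

def high_card(hand: str) -> int:
    # Take the better of the two card ranks; higher index = higher rank.
    return max(RANKS.index(hand[0]), RANKS.index(hand[2]))

print(high_card("7S8C"))  # 6 (an 8-high hand)
print(high_card("2SJH"))  # 9 (a J-high hand, so the better high card)
print(77 + 19)            # 96
```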




