
While LLMs have become better at reading the surrounding context, I am not convinced they are particularly good at actually taking it on board (compared to an adult human, that is; they are obviously fantastic compared to any previous NLP system).

The biggest failure mode I experience with LLMs is a very human-like pattern: it looks like corresponding with an interlocutor who simply does not grasp a core point you raised five messages earlier and have re-emphasised after each incorrect response:

--

>>> x

>> y

> not y, x

oh, right, I see… y

--

etc.


