
Your examples are not LLMs, though, and don't really behave like them at all. If we take the chess analogy and design an "LLM-like chess engine", it would behave like an average 1400 London spammer, not like Stockfish, because it would try to play like the average human plays in its database.

It isn't entirely clear what problem LLMs are solving and what they are optimizing towards... They sound humanlike and give some good solutions to stuff, but there are so many glaring holes. How are we so many years and billions of dollars in and I can't reliably play a coherent game of chess with ChatGPT, let alone have it be useful?



> because it would try to play like the average human plays in its database.

Why would it play like the average? LLMs pick tokens to try to maximize a reward function; they don't just pick the most common word from the training data set.
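To make that concrete: whatever the model was trained on, the decoding step at inference time is a separate choice. A toy sketch (the chess-move vocabulary and the logits are made up purely for illustration) of greedy decoding versus sampling from the model's distribution:

```python
import numpy as np

# Hypothetical next-token scores (logits) for a tiny four-move vocabulary.
vocab = ["e4", "d4", "Nf3", "c4"]
logits = np.array([2.0, 1.5, 1.0, 0.5])

def softmax(x):
    z = np.exp(x - x.max())  # subtract max for numerical stability
    return z / z.sum()

probs = softmax(logits)

# Greedy decoding: always emit the single highest-probability token.
greedy = vocab[int(np.argmax(probs))]

# Sampling: tokens are drawn in proportion to their probability,
# so the "most common" token is likely but not guaranteed.
rng = np.random.default_rng(0)
sampled = vocab[rng.choice(len(vocab), p=probs)]
```

So even a purely imitative model doesn't have to parrot the modal human move; the decoding strategy (and any reward-tuned fine-tuning on top) shifts what actually comes out.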


Maybe you didn't realise that LLMs have just wiped out an entire class of problems, maybe entire disciplines. Do you remember "natural language processing"? What, ehm, happened to it?

Sometimes I have the feeling that what happened with LLMs is so enormous that many researchers and philosophers still haven't had time to gather their thoughts and process it.

I mean, shall we have a nice discussion about the possibility of "philosophical zombies"? On whether the Chinese room understands or not? Or maybe on the feasibility of the mythical Turing test? There's half a century or more of philosophical questions and scenarios that are not theory anymore, maybe not even questions anymore, and almost from one day to the next.


> do you remember "natural language processing"? What, ehm, happened to it?

There's this paper[1] you should read; it sparked a whole new AI era, and it might answer your question.

1. https://arxiv.org/abs/1706.03762
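The core operation from that paper is scaled dot-product attention, Attention(Q, K, V) = softmax(QKᵀ/√d_k)V. A minimal NumPy sketch (shapes and random inputs chosen just for illustration):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(Q K^T / sqrt(d_k)) V for a single head."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # Row-wise softmax: each query position gets a distribution over keys.
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)
    return w @ V, w

rng = np.random.default_rng(1)
Q = rng.standard_normal((3, 4))  # 3 positions, head dimension d_k = 4
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))
out, weights = scaled_dot_product_attention(Q, K, V)
# Each row of `weights` sums to 1: a soft lookup over the sequence.
```

Stacking this (multi-headed, with feed-forward layers and positional information) is essentially what replaced the task-specific NLP pipelines the parent is asking about.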


How is NLP solved, exactly? Can LLMs reliably (that is, with high accuracy and high precision) read, say, literary style from a corpus and output tidy data? Maybe if we ask them very nicely it will improve the precision, right? I understand what we have now is a huge leap, but the problems in the field are far from solved, and honestly BERT has more use cases in actual text analysis.

"What happened with LLMs" is what exactly? From some impressive toy examples like chatbots we as a society decided to throw all our resources into these models and they still can't fit anywhere in production except for assistant stuff


> Can LLMs reliably (that is, with high accuracy and high precision) read, say, literary style from a corpus and output tidy data?

I think they have the capability to do it, yes. Maybe it's not the best tool you can use, too expensive, or too flexible to focus with high accuracy on that single task, but yes, you can definitely use LLMs to understand literary style and extract data from it. Depending on the complexity of the text, I'm sure they can do jobs that BERT can't.
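The usual pattern for "literary style in, tidy data out" is prompt for JSON, then validate before anything lands in your table. A toy sketch: `ask_llm` is a stub standing in for whatever chat-completion API you use, and the schema fields are invented for illustration, not a benchmarked recipe:

```python
import json

PROMPT = (
    "For the passage below, return only JSON with keys "
    "'person' ('first' or 'third') and 'register' ('formal' or 'informal'). "
    "Passage:\n{passage}"
)

def ask_llm(prompt: str) -> str:
    # Stub so the sketch runs; a real implementation would call a model here.
    return '{"person": "first", "register": "formal"}'

def extract_style(passage: str) -> dict:
    raw = ask_llm(PROMPT.format(passage=passage))
    row = json.loads(raw)  # fails loudly if the model drifts off-schema
    # Validation is where the "reliably" part lives: reject anything
    # outside the allowed values instead of letting it into your data.
    if row.get("person") not in {"first", "third"}:
        raise ValueError("bad 'person' field")
    if row.get("register") not in {"formal", "informal"}:
        raise ValueError("bad 'register' field")
    return row

row = extract_style("I have, on reflection, always preferred winter.")
```

Whether the accuracy and precision clear the bar for a given corpus is an empirical question, which is the parent's point; the schema-plus-validation loop is how you'd measure it.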

> they still can't fit anywhere in production

Not sure what you mean by "production", but there's an enormous number of people using them for work.



