I think human brains are a combination of many things. Some part of what we do looks quite a lot like an autocomplete from our previous knowledge.
Other parts of what we do look more like a search through the space of possibilities.
And then we act, collaborate, and test which ideas stand up to scrutiny.
All of that is, in principle, doable by machines. The things we currently have and call LLMs seem to mostly address the autocomplete part, although they are beginning to be augmented with various extensions that let them take baby steps on the other fronts. Will they still be called large language models once they have so many other mechanisms beyond mere token prediction?