Agreed. It would be great if it could ask questions about missing context, but maybe language models are bad at that task. Maybe one needs a second pass: evaluate the answer, then make it look for information that could improve it.
Self-driving cars are much more difficult than one might think because the AI has to interface with the real world, something that humans have evolved for over millions of years.
Like others have said, worry more about every job that primarily interfaces with a computer.
Computers don't exist in a vacuum; what goes on in the computer is often determined by interfacing with the real world, reflecting real-world intentions, observations, input, predictions, etc.
This generation of AI is great when the universe of values can be generalized. When the universe of values is literally the universe, it shows its limitations.
Well, there's one thing humans are much better at than robots for the foreseeable future: energy efficiency. Try building a robot that contains a general-purpose AI plus the motor functions and senses of a human, and that runs on the energy of three meals.
Now of course humans want and need more than just three meals, but that's still pretty good.
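To put a rough number on that "three meals" power budget, here's a back-of-the-envelope sketch. The ~2000 kcal/day intake is an illustrative typical-adult figure, not from the comment above:

```python
# Back-of-the-envelope: convert a typical daily food intake into an
# average power draw. 1 kcal = 4184 J (exact, by definition of the calorie).
KCAL_PER_DAY = 2000            # assumed: rough adult intake over three meals
JOULES_PER_KCAL = 4184
SECONDS_PER_DAY = 24 * 60 * 60

watts = KCAL_PER_DAY * JOULES_PER_KCAL / SECONDS_PER_DAY
print(f"Average human power draw: {watts:.0f} W")  # roughly 100 W
```

So the whole human, brain plus body plus actuators plus sensors, averages on the order of 100 W; a single high-end GPU alone can draw several times that.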
But if most of your work consist of interfacing with computers, you might be in trouble.
Also, since humans aren’t slaves, they do their own maintenance. And they don’t do exactly what you ask, which is good (when they do it’s called “working to rule” and is a kind of strike).
It’s so rare for automation to cause net unemployment that it may have never actually happened.
That's true, but then again, we constantly adjust the trust we place in sources based on what works and what doesn't.
If I watch a YouTube tutorial teaching me how to bake pizza and it comes out totally wrong, then watch another that produces excellent pizza, I disregard the first channel.
And there are many other ways in which humans automatically check the reputation of sources.
I'm constantly amazed that most discussions on technical forums center around what ChatGPT can't do and why it can't replace X and how often it produced nonsense.
Yes, it's true. But then again, if it didn't make mistakes anymore, we would have created a general-purpose problem-solving machine working with all of human knowledge.
"We've created a plane that can fly 10 km!"
"Meh, 10 km is not that useful. Also, it's still expensive"
Twenty years ago, even current ChatGPT would have been straight-up science fiction. We are getting to a point where we develop tools that are unlike any other in their power to solve problems for us, and development will likely only intensify on that front. These systems made quite a splash recently, so even more money will go into them. Custom hardware for AI systems is being advanced all the time, and every large software company wants AI developers.
I'm amazed that we don't think about how we are going to handle this. There are a lot of areas where the next gen (or the one after etc.) ChatGPT might have dramatic consequences both good and bad.
It's just another instance of the same broken thinking one sees in other ML fields. For whatever reason, people 1) hold ML systems to a standard of success far in excess of that demonstrated by humans 2) endlessly quibble about whether the ML system internally has "true understanding", despite it not mattering for the system's ability to affect the external world.
Thermodynamically, general intelligence runs on the order of tens of watts, as evidenced by the human brain (roughly 20 W). This leads me to believe that we likely already have the computational capability for AGI and simply haven't figured out the correct architecture and weights. As we've seen with the flurry of increasingly SOTA image generation models this year, innovations in the ML space tend to arrive with little warning and have rapid, real effects on the world. In the context of AGI, this pattern causes me a lot of existential dread.
The transformer model doesn't work the same way the human brain does. Until the mechanics converge, there will be limits on the efficiency of AGI. Still, I am eager to see what a 100 trillion or 1 quadrillion parameter GPT5 with Adaptive Computation Time will do.