That, and instead of productivity increases reducing people's need to work, what might (I think will) happen is that we will actually have to work more for worse results and lower incomes, to serve the whims of the executive class and the growing energy requirements of LLMs. Compound this control over channels of communication (google, facebook, xitter) and means of production (microsoft, amazon) with the force of social-emotional manipulation by LLMs, and we have a real "winner" of a technology.
I do not think the executive class actually cares about AI's power to increase productivity, but rather about its power to increase reliance.
Socrates was allegedly opposed to writing, since he felt it would make people lazy and reduce their ability to memorize things. If it weren't for his disciple Plato, who wrote down his words, none of his philosophy would have survived.
So I'm not completely disagreeing with you, but I'm not too pessimistic either. We will adapt to and benefit from the adoption of AI, even though some things will probably be lost, too.
> Better “thinking” computers will breed worse thinking people, huh?
I actually think that will be the case. We're designing society for the technology, not the technology for the people in it. The human brain wasn't built to fit whatever gap is left by AI, regardless of how many words the technologists spew to claim otherwise.
For instance: AI is already undermining education by enabling mental laziness in students (why learn the material when ChatGPT can do your homework for you?). The current argument seems to be that AI will replace entry-level roles but leave space for experienced and skilled people (while blocking the path to get there). Some of the things LLMs do a mediocre but often acceptable job at are exactly the things one needs to do to build and hone higher-level skills.