"Kuhn challenged the then prevailing view of progress in science in which scientific progress was viewed as "development-by-accumulation" of accepted facts and theories. Kuhn argued for an episodic model in which periods of conceptual continuity where there is cumulative progress, which Kuhn referred to as periods of "normal science", were interrupted by periods of revolutionary science."
I think this is the accepted model in the philosophy of science since the 1970s. That's why I find this argument about AI so strange, especially when it comes from respected science writers.
The idea that accumulated progress along the current path is insufficient for a breakthrough like AGI is almost obviously true. Your second point is important here. Most researchers aren't concerned with AGI because incremental ML and AI research is interesting and useful in its own right.
We can't predict when the next paradigm shift in AI will occur. So it's a bit absurd to be optimistic or skeptical. When that shift happens we don't know if it will catapult us straight to AGI or be another stepping stone on a potentially infinite series of breakthroughs that never reaches AGI. To think of it any other way is contrary to what we know about how science works. I find it odd how much ink is being spent on this question by journalists.
I think you're misunderstanding Kuhn slightly. He invented the term paradigm shift. What he means by normal science interspersed with spurts of revolution is more provocative. He means that in order to get periods of revolution, the "dogma" of normal science must be cast aside and a new normal must move in to replace it. Normal science hits a wall, gets stuck in a "rut" as Kuhn describes it.
I think, in a way, Doctorow is making that same argument for the current state of ML: "I don't think that we're remotely close or even on the right path in any way". In other words, the general thinking that ML will lead to AGI is stuck in a rut, needs a new approach, and no amount of progressive improvement on ML will lead to AGI. I don't think Doctorow's opinion here is especially insightful; he's just a writer, so he commits thoughts to words and has an audience. I don't even know whether I agree or not. But I do think this piece comes off as more in the spirit of Kuhn than you're suggesting.
And of course you can interpret Kuhn however you want. I don't think Kuhn was saying you shouldn't use or apply the tools built by normal science in everyday life. But he subtly argues that some level of casting off entrenched dogmatic theories, in the academic domain, is a requirement for revolutionary progress. Kuhn agrees that rationalism is a good framework for approaching reality, but he also equates phases of normal science to the phases of religious domination that predated it. Essentially, truly free thought is really hard because society invents norms (dogma) and makes it hard to deviate, and academia is no exception. Science, during periods of normal science, is (or can become) over-calibrated and over-dependent on its own contemporary zeitgeist. If some contemporary theory that everyone bases incremental research on is not quite right, it spoils the derivative research. That isn't always the case, because sometimes the theories are correct.
I felt like the part that wasn't in line with Kuhn was the idea that there was something wrong with a field if incremental improvement couldn't lead to a breakthrough like AGI. You're right. He's arguing Kuhn's point. But he seems to use it to conclude that machine learning is a dead end when it comes to AGI. Further, he seems to think this means AGI won't happen any time soon.
But, if I'm not misinterpreting Kuhn again, knowing that a revolution is necessary to overturn the current dogma (which I would argue is deep learning) doesn't tell us anything about when the revolution will occur. It could be tomorrow or 50 years from now or never. So, specifically, it doesn't tell us anything about machine learning in general, whether AGI is possible, or when AGI will happen.
>So it's a bit absurd to be optimistic or skeptical.
We skeptics aren't skeptical that AI is possible; we're skeptical of specific claims. I think it's perfectly reasonable to be skeptical of the optimistic estimates, since they really are little more than guesses with little or no foundation in evidence.
I agree that one would think Science Fiction writers would have enough imagination to consider alternate futures (Cory CYA's by saying such a scenario would make a good SF story) - but there are already promising approaches to AGI: Minsky's "Society of Mind", Jeff Hawkins' neuro-based approaches, and Hinton's fairly new GLOM idea: https://www.technologyreview.com/2021/04/16/1021871/geoffrey... .
“By 2029, computers will have human-level intelligence,” Kurzweil said in an interview at SXSW 2017.
In 2011, Ray Kurzweil predicted that the singularity (enabled by super-intelligent AIs) would occur by 2045, 34 years after the prediction was made.
So until his revised timeline of 2029, the distance into the future before we achieve strong AI, and hence the singularity, was, according to its most optimistic proponents, receding by more than 1 year per year.
I wonder what it was that led him to revise his timeline so aggressively. I think all of those predictions were unfounded; until we have a solid concept for an architecture and a plan for implementing it, an informed timeline isn't possible.
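For what it's worth, here's a quick back-of-the-envelope sketch (in Python, purely illustrative) of the timeline arithmetic, using only the two predictions quoted above as inputs:

    # Illustrative only: how far out was strong AI / the singularity at the
    # moment each prediction was made? The data points are the two cited above.
    predictions = [
        (2011, 2045),  # 2011: singularity by 2045
        (2017, 2029),  # SXSW 2017: human-level intelligence by 2029
    ]
    for made, target in predictions:
        print(f"Predicted in {made}: {target - made} years out")
    # Predicted in 2011: 34 years out
    # Predicted in 2017: 12 years out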
That's funny. Of course, I was referring to Asimov's Elevator Effect: if aliens had visited NYC with some probe in 1800 and then again in 1950, they would be astonished at all the very tall buildings and would have to assume people were now living in these tall towers, for reasons TBD. They would not know that elevators had been invented, and hence that the buildings would only be occupied about 8 hours per day, and that nobody would live in them. Elevators allowed this major unexpected result. There is more to it, but I couldn't find the actual essay.
>I think this is the accepted model in the philosophy of science since the 1970s.
Perhaps, but "philosophy of science" has never been something the majority of practicing scientists consider relevant, care about, or are influenced by.