I am prepared to go out on a limb here, as I have done before, and say that business and research cycles involving standard-issue humans are incompressible below a certain duration - they cannot be made to happen much faster than they do today.
Kurzweil's Singularity is a Vingean slow burn across a decade, driven by recursively self-improving AI, enhanced human intelligence, and the merger of the two. Interestingly, Kurzweil deploys much the same arguments against a hard takeoff scenario - in which AI self-improvement unfolds over a matter of hours or days - as I am deploying against his proposed timescale: complexity must be managed, and there are limits on how fast that can happen. But artificial intelligence, or human intelligence improved most likely through machine enhancement, lies at the heart of the process. Intelligence can be thought of as the capacity for dealing with complexity; if we improve that capacity, then all the old limits we worked within can be pushed outward. We don't need to search for keys to complexity if we can manage the complexity directly. Once the process of intelligence enhancement begins in earnest, then we can start to talk about compressing business cycles whose length was set by the limits of present-day human workers, individually and collectively.
Until we start pushing those limits, we remain stuck with slow human organizational friction, bounds on complexity management, and a ceiling on exponential growth. Couple this with slow progress toward both organizational efficiency and the development of general artificial intelligence, and you have why I believe Kurzweil's timeline is optimistic by at least a decade or two.