An interesting point, but I can't see how it could happen.
The industrial revolution was sparked not by a sudden availability of raw resources (they were always there, including coal), but by advances in physics and philosophy (probably the most important person founding the change was Thomas Aquinas, who defined the separation between secular and religious pursuits, and declared that scientific understanding of the world is actually something good in its own right — but it took nearly 500 years for his ideas to finally take off). Coal, on the other hand, was just as available in 1200 as it was in 1760.
If the coal runs out (doubtful — at least 600 years of proven reserves exist), humanity will find a replacement. Energy resources are so cheap that we have to actually restrain ourselves to limit our footprint on the planet (otherwise, nobody would care). Oil is dirt cheap and still actually flows right out of the earth in some lucky places. Uranium is so cheap we are happy with a ~1% utilisation rate in "classical" nuclear reactors (breeder reactors would allow full utilisation, but they cost more — so why bother if uranium is so darn cheap? With breeders, we have enough uranium for MILLIONS of years at our current energy consumption rates).
It is knowledge that must be preserved, but utter destruction of human knowledge is no longer a "boring" apocalypse scenario. It is quite interesting how this can possibly happen, and what can be done to prevent it.
Somebody made the argument that the rise of civilization is the rise of drill bits. Harder, more durable bits meant more metals mined per hour. More metal meant more durable goods like chariots, pots, spear points, wagons. Durable goods meant more efficient commerce. And on up the food chain. Anyway I remember a graph of bit hardness through history vs city size or some such.
Guess this argument can also be made the other way round: larger communities > more interaction and knowledge transfer > harder drill bits. I doubt that human progress can be boiled down to a few factors; it has to be seen as a chaotic system with each node influencing every other.
Decline does not have to follow the scenario in which knowledge, capability, or even access to resources are lost. The scenario of a living hell for most people and a slow, global decline can come about simply by reducing access to the benefits of technology and resources, and by making use of extreme methods of control.
For example: what if the priority of a small segment of the population becomes putting maximum resources into some long-term project which only benefits that group? 90% of the population could be directed toward working on that massive project, by delusion, coercion, or other methods. The pyramids of Egypt and North Korea now are smaller-scale examples of this.
Just an FYI: I've been hearing more and more recently that the Pyramids of Egypt were built by free engineers, not slaves. It has recently been theorized that the methods they used to haul great stones across the desert required far fewer laborers. By pouring water in front of the stones, the desert sand became slippery, and thus far fewer people were needed for the task. (Early estimates were that it took ~100k people to build a pyramid. Modern estimates point at only ~10k, based on newly found food consumption records.)
IE: The Pyramids were more akin to the Great Cathedrals across Europe: 100 or 200 year projects completed by generations of engineers, architects, and artists. Not necessarily the elites of society... but at least those with free will. Some motivated by Religion, others motivated by simple glory and the joy of building great things.
I think people think that Egyptian Slaves built the pyramids... Sometimes people think they were Jewish Slaves: some time between Joseph and Moses in Biblical times. But if they were simple slaves, why were the workers of the Pyramids buried either near, or even within the Pyramids? This was a great honor that was normally reserved only for Pharaohs!
Interesting-- if the pyramids took a long time (as cathedrals did) then the work and resource-use may have not had such a terrible impact on the common people.
Thinking on it, I do hope that the current global lifestyle of productivity, consumerism, and reproduction can be wrought into something truly sustainable. Science and technology will help with this, but then again the same could create new forms of resource-intensive consumerism.
Without changes in how people live, a star trek economy, or large numbers of people choosing to not reproduce, we might over-convert the Earth's resources into finished components of our inane, evolution-driven desires.
I say the long goal is "interesting things", but doing so while minimizing destruction.
Perhaps a far-future goal could involve evolving our minds and activities beyond our evolutionary orientation, yet still be able to somehow retain them to respect our history, or just our strange beauty?
A large part of the problem is centralization- if oil refineries or manufacturing hubs halfway across the world fail, we're suddenly out of supplies needed to function. The sort of collapse talked about here is scary precisely because it could happen everywhere at once. Likewise, because oil drilling depends on technology sourced from all over the world, you can't have a "local restart"- you'd have to reboot the entire planet all in one go.
Historically, the "fall of Rome" wasn't as big a setback in human history as it's made out to be - cultural progress shifted over to Persian countries, which kept on doing science and other nice stuff until Europe got itself back together. Meanwhile China ticked along, unconcerned about the whole thing. It wouldn't work the same way in today's world, but that's a solvable problem.
I've read about wood gas as a possible energy source for a non-global economy- non-ideal stuff in many respects, and requires the survival of a good amount of know-how, but providing a locally-sourced oil alternative in a pinch.
I wrote this in response to much of the hoopla around the 'threat' of A.I super-intelligence, and other unlikely disasters.
The future of humanity as a species is worth thinking about, but we also ought to be considering the very real possibility of getting stuck in the awful conditions humans endured in the past, alongside more farfetched (and less likely) scenarios.
I wish I had some unique idea about how to do that, but I suppose discussion is just as good.
Whether this particular scenario is plausible or not, the overarching concept behind it is path dependence. Path dependence implies that humankind may only have one chance to go through certain transformations. For instance, the Internet, begun in 1966, took nearly 30 years to become the behemoth force it is now. No single corporation could have pulled it off because of the timeline alone.
Consider however that we've been doing this transformation thing since we began as a species.
I'm glad you're thinking about the long-term prospects of humanity. Your point is well-made that a technological regression, at this point, would be exceedingly difficult for future humans to reverse.
If humans do not spread life beyond earth, then life will not survive long-term. It does feel like we must succeed in this first great technological expansion, or we might never succeed, and fall prey to Fermi's Demon.
Consider that disasters scale fractally. You're much more likely to see a failed AI superintelligence destroy the one supplier of critical infrastructure product XYZ than directly destroy the world. It's fractally self-similar on a smaller scale of hubris, bad specification, bad planning, bad back-out plans, poor estimation, and bad competitive risk-taking.
The AI superintelligence that destroys our only global manufacturer of XYZ is very likely to be written in an Excel spreadsheet convincing the execs that selling junk bonds and opening an ambitious new plant is a great idea; fast forward five years and humanity literally has no CPUs, or no flash memory, or no spark plug insulators. It's very unlikely to be a black-and-white 1950s-movie mad scientist writing bad LISP code that makes a humanoid-ish robot self-aware, making it crack open the smallpox vials intentionally. It's also worth considering that "we" are pretty good at destruction with existing dumb tools; it seems unlikely we'd need intelligent tools when simpler, cheaper dumb tools cause equal chaos. It didn't take an AI to destroy HP, or Bell Labs, for example.
It's statistically possible that the first AI superintelligence failure will instantly vaporize the planet, in the same sense that it's thermodynamically possible for all the molecules in a box of air to randomly be found on one side of the box, however unlikely. In a similar way, it's very likely we'll have AI destroying individuals, companies, industries, markets, and countries long before it destroys the species.
Ultimately, what the OP ignores is that rebuilding society (and the resource exploitation to do so) can be done with the rubble of the existing civilization.
It will be a long, slow road to recovery, but all you really need to be able to do is set up one hydroelectric dam and you have enough energy to rebuild civilization.
We could build those with 1800s technology and, frankly, many of the innovations of the 1800s are possible today by just recycling the bones of a large city.
So I don't think this doomsday scenario will be as bad as the OP fears. We just may get booted down to the 1700-1800s and take 300 years rebuilding.
Yeah... I'm more inclined towards your view. It's hard to know for certain, but I think people would redevelop things -- after all, they already developed them once, from a significantly weaker position.
There's the possibility that I'm succumbing to the anthropic principle (if people hadn't developed these things, I wouldn't be here to remark they had), but IMO that's offset by independent reinventions of several basic technologies by civilisations starting from much less than our "post-apocalypse" civilisation would have.
The problem is, with just one hydroelectric dam, others will want it. Thus others will fight you for it, and in the process, it will be destroyed or the people that know how to maintain it will be killed.
1800s-era hydroelectric dams are buildable with basically nothing but very basic technology and a lot of manpower.
No one would kill for them in the general case, simply because other people could build their own.
This isn't some SUPERMODERNTECH but a very basic technology we've been exploiting for thousands of years; it's just that converting it to electricity wasn't really possible until the 1800s.
If books and blueprints for advanced technology were available, I wonder what is the potential for leapfrogging technologies. So instead of burning coal to power steam factories, we could skip straight to powering electric generators via dams and turbines. Water power fortunately is not going anywhere, and provides a considerable amount of electricity.
The big issue is energy for transportation and for growing crops. I wonder what the prospects for things like switchgrass based ethanol are. I see promising news reports, but nothing has come of it yet.
Dams = concrete, concrete = a whole lotta cement and other energy inputs; concrete production is one of our big unsung CO2 emission sources right now.
Turbines and generators = refined metal, including copper (dynamo windings) and steel (turbine blades) -- again, lots of energy required.
Okay, we can back it off a level and go for water wheels in rivers -- a Roman to mediaeval technology -- driving the dynamos; but it's still not going to work without refined metals (energy intensive) and waterproof insulators, which means gutta-percha or rubber or refined organic polymers -- all of which mean long-haul shipping or again, energy-intensive chemical industry.
These obstacles aren't insuperable, as long as we don't get knocked back to dark ages/monasteries preserving books and knowledge but no actual lights-on/wheels-turning infrastructure. If we get knocked back that far in a post-carbon-extraction world, it'd be devilishly hard to build back up again.
No, it is much easier than you'd think. Energy breeds energy. You start with a smaller energy source, use it to extract resources and make parts for the bigger energy generator, rinse, repeat.
Wind turbines and small hydroelectric dams are simple. Megaprojects are harder, but it is easy to start small and extend from there. It is easy to make electric energy, and with enough electricity, everything is possible.
Oil drilling is one of the most technologically advanced industries that exist, if not the most, and the existence of such technology is predicated (currently) upon the existence of globally-available cheap energy. Non-oil-based development would likely hit a ceiling where there just isn't enough energy to progress long before you could recreate such an elaborate system.
We only burn oil for energy because it is dirt cheap (yes, thanks to economies of scale). Electricity can be produced in many other ways if oil becomes too expensive for that.
So, the idea is to create a society that survives even this scenario.
Distributed libraries that survive Alexandria.
Rugged tech that can be made even with just sun power and leftovers.
Social networks that allow for continued cooperation even during times of enforced isolation by disease and war.
Social networks that enforce social behavior, by the world bearing witness.
Tech that evades the control of the temporarily mighty, the warlords and priest castes who try to "stabilize" society by freezing it between the holy cycles of overpopulation and civil war.
Lots of work.
Doable Work.
It's almost certainly not the case that coal and oil are prerequisites; though things might have been harder without them, progress wouldn't necessarily have halted. Different approaches would have been taken: growing fuel crops, for example, rather than digging more mines. The important point here is that the improvement in wealth and life expectancy in England started well before the industrial revolution. That revolution was enabled by this growth in wealth and longevity, and thus greater investment in technology, not vice versa.
"We claim that the exogenous decline of adult mortality at the end of the seventeenth century can be one of the causes driving both the decline of interest rate and the increase in agricultural production per acre in preindustrial England. Following the intuition of the life-cycle hypothesis, we show that the increase in adult life expectancy must have implied less farmer impatience and it could have caused more investment in nitrogen stock and land fertility, and higher production per acre. We analyse this dynamic interaction using an overlapping generation model and show that the evolution of agricultural production and capital rates of return predicted by the model coincide fairly well with their empirical pattern."
I don't understand this. There is no way we would lose all knowledge in the described situations. So why couldn't existing knowledge be used to re-boot society? Sure, it might take 100-200 years, but it would happen.
It took a very long time the first time. Having knowledge, and having the resources to make use of it, are far far apart.
To drill for oil, you need drill bits. They require exotic steel. That requires high-temperature furnaces. Which requires 100 other things - ceramics, high-density fuels (not oil!), presses and breaks and grinders etc that in turn require other exotic machines to make and ingredients to find or make.
The chain of dominoes can be hundreds long. And there you are, standing in the rubble with a rock and some sticks. And you're hungry. And so is everybody else, and they want your food.
I can imagine another thousand years to get anywhere near where we are now. And it'd look completely different - combining photolithography for circuits with steam engines and slavery. All the inventions of history combined in ways that the new reality required.
Sure, if you have wire, and magnets, and cement, and electronic controls, and pipe (for hydro). And the tools and processes to make those things.
If we imagine there's some infrastructure remaining, then there'd be a small window of opportunity to rebuild using that, before it decayed. A couple of years maybe.
And civil engineers qualified for hydro dam design, and diesel powered bulldozers and cranes, and their operators, and the mechanics and and and...
The bigger picture is not just a binary of today vs. 50000 BC, but, somewhat more likely, getting stuck in weird local minima and maxima. A lot of the peculiarity of the USA vs. euro lifestyle boils down to just the luck of the draw: we're at one local minimum of public transit where they happened to fall into a local maximum, or a zillion other variables.
John Michael Greer's blog has a lot of commentary about realistic lifestyle on the downslope. One common theme is something like the secular Amish.
This "loss" and "all" talk is too binary. The real world is all analog spectrum.
Aside from "lose", there are also cultural outlook problems WRT what "knowing" means, not to mention what "power" is. And then there's topics of "education" and "wisdom"...
For example, although this sounds like a joke, it's actually serious: consider a population 200 years ago the same size as HN this afternoon trying to build a barn, a classic barn-raising party. Now consider the population of HN, today, starting five minutes from now, trying to host a barn-raising party, presumably via twitter and flash mobs and watching youtube videos while sipping lattes at the local makerspace. For all intents and purposes, we have lost the 1800s technology of community barn raising. At least "we" as defined as HN participants.
In theory, in about as much time as it takes a child to grow up, HN as a team, could successfully get our act together and git commit 2.0 of barn-1820. In practice it would be easier / quicker / more reliable to do the ole ageism thing and grow a new batch of fresh teenagers to build the barn. Less old ideas in the way.
That's how you lose a technology for all practical purposes.
For another example, it's pretty easy to take my backyard and industrially, mechanically farm corn on it. People do this all the time a couple miles away. On the other hand, we've pretty well lost the technology of growing a couple rows of carrots at a backyard scale, if you define "we" as more than a single-digit percentage.
My thesis is that the knowledge on its own wouldn't be enough. To reflect a later commenter, suppose we still knew how to make shale gas. We still need all the infrastructure involved in getting shale gas - which needs other infrastructure, which needs us to have some other easily available resources. All those resources might be gone.
It doesn't take some disastrous event for this to happen. It just takes time. The USGS has resource estimates for all major minerals.[1] For each one, there's an estimate of world resources. Check out iron ore.[2] "World resources are estimated to exceed 230 billion tons of iron contained within greater than 800 billion tons of crude ore." That sounds huge, until you see that current mining is 3.2 billion tons a year. There's less than a century of iron left.
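A quick back-of-envelope check of that claim, using only the two figures quoted above (note the result shifts if annual production is counted as crude ore rather than contained iron, so treat it as a rough bound):

```python
# Rough years-of-supply estimate from the USGS-style figures quoted above.
# Assumes constant extraction, no recycling, and no new discoveries.
contained_iron = 230e9      # tons of contained iron (estimated world resources)
annual_production = 3.2e9   # tons mined per year

years_left = contained_iron / annual_production
print(round(years_left))    # ~72 years, i.e. "less than a century"
```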
It's like that for most minerals. There are not millennia of supply left. In most cases, it's a century or two.
The original article says "From about 1760 onwards we've improved our situation dramatically." I've been saying something similar for a while, but I usually date the start of the industrial revolution from 1825, when the first steam-powered railroad started carrying goods and passengers commercially. This was the moment when the industrial revolution got out of beta. It was also when humans started making serious dents in mined natural resources. Before steam power, mining was a small-time activity. Nobody could dig much, and nobody could move much. Today, as mentioned above, 3.2 billion tons a year of iron alone are mined. That's all in less than 200 years.
It's not going to go on for another 200 years. The highest grade minerals in easily accessible areas were mined out decades ago. Most mining today is already working low-grade ore. Going for even lower grade ore is possible, but that just postpones the end.
Minerals are more of a problem than energy. There are many energy sources, some of which are renewable. Minerals can be recycled, but you lose some at each go-round. Asteroid mining might help some day, but not unless we find some far cheaper way of operating in space.
Heavy industrial civilization has a finite lifespan.
I can't see how it could happen - and the reason is even more depressing: world war 3. That will cause "the reset", and after a few hundred years civilization will start growing again. Some of the current knowledge will be lost, but it will be reinvented again, better.
And then again, we could develop a benevolent AI which is able to design a practical fusion reactor solution which gives the human race essentially unlimited power resources to apply toward cleaning up the planet.
Why is that so much harder to consider than the bad scenarios? I suppose evolutionarily the optimistic people probably didn't reproduce as often as the pessimistic people did but still.
I don't think a benevolent AI is impossible, or even unlikely, but I do think that as soon as the benevolent AI exists, there are a lot of people who will work very hard to find a way to exploit the tech for military purposes. So to me, the bad scenarios are essentially inevitable, even if the good scenario comes about.
That doesn't mean that the bad scenarios are as bad as people make out, but whether the AI is itself seeking to destroy humanity or just being used by militaristic people to plan simultaneous preemptive wars against everyone they see as a threat, the technology looks very dangerous to me. We will try to build it, that is just the human way, but like with the creation of other dangerous tech, we should be thinking about the dangers and how we will cope with them.
The problem there is, such a benevolent AI is not a robust technology. If we are thrown back into the pre-industrial era, there's a good chance that the AI would stop functioning.
Make no mistake, I'm optimistic about technology in general, I just want to observe that this is a unique moment in history and we should treat it as anything but inevitable that we'll make it.
As I read it, the author's point was that some set of events would materialize that pull us back into the stone age (the airtight box), which denies the possibility that something we're already doing has already counteracted that. There are no fewer than 6 reasonably well-funded fusion research programs which have some possibility of producing massive amounts of easily consumed energy while eliminating large parts of the negative spiral (say, carbon emissions), not to mention that with sufficiently inexpensive energy you can pull carbon out of the air and turn it into whatever hydrocarbon you want.
The world has seen doomsday predictions forever, and while there is always the possibility we'll kill ourselves off, one has to recognize that there is also the possibility that we won't. Good and bad things seem to happen in about equal measure when you look at it on a larger time scale.
Doomsday thinking is counterproductive to getting stuff done anyway, as it tends to sap people's energy ("why invest time in what you're doing if you're all going to die anyway?" sorts of reasoning).
Mad Max was originally about peak oil collapse. Nuclear holocaust was retconned in circa Thunderdome.
The author apparently doesn't know that fissioning the thorium and uranium in an average rock yields 1,000 times the energy needed to melt it.[1] The ground you walk on is literally brimming with energy, and it's incredibly easy to get out.
[1] 2000 GJ and 2 GJ per cubic meter, respectively.
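The footnote's ratio, spelled out (figures as given above; note the 2000 GJ number assumes full fission of the average crustal thorium and uranium content, which in practice requires breeder-style utilisation):

```python
# Energy ratio claimed in [1]: fission energy available in a cubic meter
# of average rock vs. the heat needed to melt that same cubic meter.
fission_energy = 2000.0  # GJ per cubic meter (Th + U, fully fissioned)
melt_energy = 2.0        # GJ per cubic meter (heat of melting)

print(fission_energy / melt_energy)  # 1000.0 -- the "1,000 times" figure
```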
Without understanding anything about it, it's natural to assume it must be difficult. Otherwise, why wouldn't more people be doing it? As noted in the article, coal mostly sat in the ground for thousands of years before 1750, when all of a sudden mining it was the obvious thing to do. Why?
Why would that be best? As opposed to for example finally figuring out fusion power and using that power to dig and sustain deep underground farms and dwellings?
There's whole lot of space underground that's currently inhabited by at most bacteria.
Why is that? With abundant clean power you can have everything under the surface: space, daylight, clean water, air, food, wood. I'm guessing more could be recycled if energy is not a concern. Surface could be the place kept nice where you go for vacation, all other activity could move underground.
The oil industry is filled with examples of people not "knowing" exactly what they were doing. Choosing to drill in a given location often included the possibility that there was actually no oil/gas to be had...that the well would end up being a "dry hole."
Please stop underestimating A.I.'s power of destruction. A.I. is not the danger itself; the people using A.I. are. It's like nuclear power: we use it for clean energy and we use it for destruction. Real AGI is 1000x more deadly than nuclear power in the wrong hands.
The most likely scenario for destroying civilization through AI is also quite boring. Just getting rid of semi-truck drivers by replacing them with autonomous trucks would put unemployment at the levels that precipitated WWII, but now we have tactical nukes...
I sincerely believe that decoupling human living standards from human labour will be a good thing in the long term, though we'll have to adjust to the idea of no one ever having to work ever again.
There are a lot of assumptions you have to make to get to the point where it doesn't have the capability to be dangerous:
* No improvement beyond human level (correlation between IQ and job-performance is variously estimated at around 0.5 and IQ largely comes down to executive function and working memory - so we have to assume an AI design that doesn't allow adding more working memory for some reason)
* No extra individual capabilities (eg human-level but with mental access to Google Scholar)
* No extra group capabilities (eg human-level but with telepathy between AIs - look at the difference the internet is making to human productivity eg Linux)
* No horizontal scaling over time (eg one human-level AI now, one million once Moores law kicks in - that's a lot of scientists/strategists/politicians/advertisers)
So in the situation where we make an AI that is somehow structurally limited to human-level intelligence and doesn't benefit from extra interfaces and we somehow can't afford to ever make many of them, then, sure, I can't think of a way that it could be dangerous.
In a few minutes of cursory thought I came up with four assumptions that were necessary for assuming that they won't be more capable than humans. What are the odds that all of those hold?
[EDIT isn't dangerous -> doesn't have the capability to be dangerous]
Those four assumptions have nothing to do with danger and everything to do with capability.
I think a more interesting question is "how do we make an intelligence, artificial or human, that isn't potentially dangerous?"
Humans have a bunch of low-level primate aggression that seems to be wired in; we can modulate it (we're largely self-domesticating) but we can't totally get rid of it and more importantly, we can't look at a baby's genome (and nurturing environment) and say "yup, this one's going to grow up to be Hitler", or "this 'un's going to be a pacifist, altruist, and general benefactor".
On the other hand we can probably add a bunch of monitoring code to any AI we develop rather than grow: if nothing else, consider the possibility of simulation runs looking for an empathy deficiency.
Also, to a first approximation capability == danger. Just look at what even well-meaning humans have managed to do - imagine an AI that could eg design memes more persuasive than Marxism.
We can't even get humans to agree on ethical values, let alone explain them to an AI, so even a perfectly altruistic AI could still be a disaster if it has the wrong value system, or if we don't perfectly understand the consequences of our own values systems when fully enacted.
You're right that the interesting question is how to make a friendly intelligence, but all the groups who are trying to draw attention to that problem are running into the wall of doubt about capability.
The OP said:
> I'm not sure I understand how exactly a human-equivalent AGI would be more dangerous than an actual human.
Since an actual human might be out to get me already the question is not motives but capability - should I fear evil AIs more than I fear evil humans? Or an evil human with a friendly leashed AI?
The argument typically revolves around whether or not AI would self-improve and lead to runaway intelligence, but I think that that's a distraction - there are plenty of other ways AI could be dangerous that are easier to persuade people with without running into the that-sounds-like-scifi filter.
We are constrained by the organic capabilities of our brains. AGI will not be. It will see things we can't; it will discover things which would take us centuries on our own. Its analytical capabilities will be unmatched by any group of humans. If Hitler had had real AGI, there would be only one country in the world now.
Consider: by definition, the human brain has the equivalent 'intelligence' of a human-equivalent A.I. That means we are able to understand the A.I precisely as well as it can understand itself.
If the A.I can self-improve, then we can also improve it. We're still equivalent.
Hitler had literally millions of human-equivalent-intelligences - they even had excellent sensor and actuator suites - and it didn't work out so well for him in any case.
Because a human brain is not field programmable, and improving cognitive ability isn't as simple as plugging in another stick of RAM. The worry about AI is the bootstrap scenario, where an AI gains sufficient intelligence to modify and upgrade itself.
Sure, you're right in that humans don't understand enough about our own brains to perform such modifications, but that may simply be down to a lack of knowledge, rather than not having sufficient cognitive ability to comprehend the brain.
Additionally, the hardware an AI can run on is generally assumed to be significantly more malleable than our limited gray matter confined within the cranium.
I think the underlying assumption you're making is that a sufficiently powerful computer could emulate the chemical processes in the brain, and that this is therefore equivalent to running a brain on a computer, decoupling cognition from brain meat.
The problem is there are hard physical limits you run into in computation - it's more efficient to just do the chemistry than it is to simulate it. Our meat brains might be more efficient in terms of density and power efficiency than any physically feasible digital computer.
Further, computing power can't be increased arbitrarily, especially when networks and switching come into play. Even if your simulated brain worked out how to improve itself, the resource cost to actually do so might be prohibitively high.
Because it can think so much quicker, so a few seconds will be generations in their time.
And on top of that, if they have human intelligence, they'd be able to improve it in those few seconds (because we had and they'd have access to our research), make smarter AI, and on and on exponentially fast.
I am not sure I agree that it would necessarily be quicker. Evolutionarily speaking, a rat's brain's neurons switch just as fast as a human's. The number and density of neurons seem to define the level of cognition, not the switching speed.
I have a hunch that the constraint on switching speed actually results in intelligence rather than constrains it. Consider seizures in human brains for example.
Secondly, consider that humans have had the same brains for millennia and yet we still cannot improve them. We barely understand the first thing about them. I don't see how making the process faster, at the same level of cognition, would help.
There is no valid way of estimating timelines for uninvented technologies that rely on understanding things that humanity does not yet understand. The only possible arguments are emotional, which makes reason an unwelcome visitor, which makes the conversations both fraught and boring.