"Always ask whether the hypothesis can be, at least in principle, falsified. Propositions that are untestable, unfalsifiable are not worth much."
This. That's a good way to put it. I've mentioned Fred Hoyle's line on that, "Science is prediction, not explanation" (which is from one of Hoyle's novels), but Sagan's line is better.
Key point: unfalsifiable theories do not lead to useful technology. Engineering requires predictability.
This is a problem with shaken baby syndrome, which asserts that infants presenting a certain pattern of medical findings (mostly subdural and retinal hemorrhage) MUST have been violently shaken if no other alternative medical or accidental explanation is found. It is a diagnosis by default, which raises a number of legal issues. [1, 2]
This medical diagnosis is "certain" even with no witness, no admission of violence, no history of violence, and no sign of trauma on the baby's body. There is absolutely no way to falsify the claim that this act occurred (and no way to prove one's innocence; it is a presumption of guilt). Yet its proponents call this theory "scientific", even though it leads to no predictions, only explanations.
That shaking must have occurred on a given child at a particular moment, while the only person present with the child denies any violence, is an untestable and unfalsifiable proposition that is totally anti-scientific.
Falsifiability is critical for engineering, but not all science is for engineering. The further you get from engineering, the fuzzier the line between science and non-science.
We try to apply the tools of science to things like astrophysics and paleontology, but they're practically never going to have engineering consequences. That's fine. It's great that we want to know stuff just because we like knowing -- even if "knowing" sometimes gets hard to define.
Falsifiability is a great tool for being able to say some things don't work. But it's important not to let it limit curiosity.
If you can't explain how the statement could be falsified, at least in principle (even if it may not be practical to do so), then you are admitting that reality will be exactly the same regardless of whether your hypothesis is true. And in that case, it's really not useful or meaningful for any sort of reasoning at all.
A concept which will in no way ever make a difference is meaningless.
MWI of quantum physics is most likely unfalsifiable. But if I believe that there are other versions of me that will definitely win the lottery if I am the sort of person to enter it in particular circumstances, then that may well change my behaviour.
I would argue that Occam's razor is unfalsifiable but is still a great principle.
> I would argue that Occam's razor is unfalsifiable but is still a great principle.
Occam's razor is provable, in the sense that it's a heuristic based on probability. Of two alternative hypotheses explaining the same observations to the same degree, the simpler one is more likely to be true.
That is not what Occam's razor means. It doesn't say anything about probabilities. It says that if two models give the same predictions, you should choose the simpler. If the models make the same predictions, neither can be said to be more true than the other. But if there is a difference in falsifiable predictions, that difference can be used to determine which model is best, and then Occam's razor does not apply. The most correct model wins, whether or not it is the simplest.
> It doesn't say anything about probabilities. It says that if two models give the same predictions, you should choose the simpler.
Which is where it becomes probabilistic. What it means for one model to be "simpler" than another is a question of information theory, which is essentially the flip side of probability theory. Information is (unsurprisingly) counted in bits, and one way of defining "a bit" is as the amount of information that cuts your uncertainty in half.
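That definition can be made concrete in a few lines of illustrative Python: the self-information of an event is -log2 of its probability, so observing a probability-1/2 event costs exactly one bit (one halving of uncertainty).

```python
import math

def bits_of_information(p: float) -> float:
    """Self-information of an event with probability p, measured in bits."""
    return -math.log2(p)

# A probability-1/2 event carries one bit: observing it halves
# your uncertainty. Halving twice (p = 1/4) costs two bits, etc.
print(bits_of_information(0.5))   # 1.0
print(bits_of_information(0.25))  # 2.0
```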
> If the models make the same predictions, neither can be said to be more true than the other. But if there is a difference in falsifiable predictions, that difference can be used to determine which model is best, and then Occam's razor does not apply. The most correct model wins, whether or not it is the simplest.
The point of applying the razor is that, if there's no evidence to support one model over the other, your best bet is to choose the simpler one now, because when new evidence comes to light that favors one over the other, it's more likely to favor the simpler one.
> The point of applying the razor is that, if there's no evidence to support one model over the other, your best bet is to choose the simpler one now, because when new evidence comes to light that favors one over the other, it's more likely to favor the simpler one.
I highly doubt that. The history of science has plenty of examples where the more complex hypothesis turned out to be more correct. E.g. Einstein's relativity is more complex than Newtonian mechanics, but nevertheless better matches observations. The current model of the universe is far more complex than the Ptolemaic model, the periodic table is more complex than the four elements, etc.
Occam's razor applies when two models make the same predictions and therefore neither can be more correct than the other.
> The history of science has plenty of examples where the more complex hypothesis turned out to be more correct. E.g. Einstein's relativity is more complex than Newtonian mechanics, but nevertheless better matches observations.
Yes, but that's after people made observations that couldn't be explained by Newtonian mechanics. If you were living at the time the latter was formulated, and someone came to you with relativistic equations, showing that their results all line up perfectly with the Newtonian ones, but otherwise offering no example of divergence, nor any explanation of why they chose these particular equations, you'd be right to call them a kook and ignore them. After all, there would be no way to distinguish between Newton's formulation, Einstein's formulation, and an infinite number of other formulations that also give the exact same results.
Ockham's razor makes sense only if the two models in question, which today make the same predictions, can also make divergent predictions that you expect to be testable in the future. The razor tells you to stick with the simpler one, because it's most likely to remain the best model. The more complex model has more moving parts, requiring more bits of information to identify it among the many possible variations of the same or greater complexity, bits you don't have, because if you had them, you could use them to disprove the simpler model.
In other words: the razor worked for Newtonian mechanics despite Einstein, because a contemporary of Newton couldn't just randomly come up with the exact formulation of relativity we use today, given the evidence available then. So whatever theory they proposed would overwhelmingly likely be wrong, as in, make bad predictions where Newtonian mechanics still made good ones.
> Occam's razor applies when two models make the same predictions and therefore neither can be more correct than the other.
So to reiterate: Ockham's razor is future-facing; it applies to theories that can generate divergent predictions which you can't test just yet. If you expect the two models to always give the same predictions, now and in the future, then... they're literally the same thing, just expressed in different ways. There is no meaningful difference, and you may just pick whichever one you like more, or whichever is easier to work with.
Maybe you are thinking of a different principle? What you are saying sounds reasonable, but it is just not Occam's razor.
Occam's razor literally just says that you should eliminate unnecessary entities from an explanation. It is a philosophical principle, not a statement about the natural world.
But the statement "the simplest of two theories is most likely to be true", on the other hand, is itself a hypothesis about the world which can be examined empirically. And I very much doubt this has been proven true for real-world scientific theories. Perhaps for randomly generated theories it would be true, but theories are not typically generated at random. To follow the theme of the article: how would you falsify this hypothesis?
> Occam's razor literally just says that you should eliminate unnecessary entities from an explanation. It is a philosophical principle, not a statement about the natural world.
"Eliminate unnecessary entities" is a nice way of saying "minimize information-theoretic complexity" without having the formal framework to express it in. The intuition behind "counting entities" points in the right direction.
As for the philosophical part - once you take a piece of philosophy seriously and try to refine it into purity, it tends to turn into either a mathematical theorem or a natural-science hypothesis.
> But the statement "the simplest of two theories is most likely to be true", on the other hand, is itself a hypothesis about the world which can be examined empirically. And I very much doubt this has been proven true for real-world scientific theories. Perhaps for randomly generated theories it would be true, but theories are not typically generated at random. To follow the theme of the article: how would you falsify this hypothesis?
This is exactly what information theory and probability theory deal with, among other things. They give a formal framework to define a measure of simplicity for a hypothesis, and to study its relation to the probability of that hypothesis being correct. That framework can deal with correlated hypotheses just fine. So to answer your final question: once you strip away the vagueness, Occam's razor becomes a mathematical theorem, which you can prove or falsify using the tools of mathematics.
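As a toy sketch of that link (not the full formal treatment; the 10- and 15-bit description lengths are made-up numbers), a description-length prior in the spirit of MDL/Solomonoff induction halves a hypothesis's prior probability for every extra bit of complexity:

```python
def prior(description_length_bits: int) -> float:
    # Description-length prior: each extra bit of complexity
    # halves the (unnormalized) prior probability.
    return 2.0 ** -description_length_bits

# Two hypotheses that fit the data equally well (identical
# likelihoods), one describable in 10 bits, one in 15:
p_simple, p_complex = prior(10), prior(15)

# With equal likelihoods, posterior odds equal prior odds, so the
# simpler hypothesis comes out 2^5 = 32 times more probable.
print(p_simple / p_complex)  # 32.0
```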
As for whether there is any reason for mathematics to apply to the real world: this ultimately follows from the basic axiom that the universe around us follows rules we can infer from observations. If you accept it, you can use the tools of mathematics, including Ockham's razor, to understand the world. If you don't - well, if that axiom is wrong, then reality is completely arbitrary: nothing makes any sense whatsoever, nor can it ever make sense, and we're better off giving up on the whole "thinking" thing, moving back into caves, and spending our days hunting and gathering and finger-painting stick figures on cave walls.
It is important to understand the limitations of logic and maths. They are fundamentally tautological systems - they can’t give you genuinely new knowledge about the world. Logic can’t tell you if purple swans exist or whether planetary orbits are circular or elliptical. You need empirical evidence for that.
So information theory might give you a measure of the complexity of a theory, but can't (on its own) say anything about whether it is true or not.
But hypothetically there could (as you suggest) be a correlation between the complexity of competing scientific theories and which theory turns out to best match the evidence. But is there any empirical evidence that this is the case? Just because it would be nice does not make it true, and it is not something you can prove mathematically.
I see how it makes sense, given that it comes from medicine - the field dedicated to literally the single most complex system we know of: the human body. But I don't see Hickam's dictum as the opposite of Occam's razor - rather, as a missing lower bound. That is, I see it as a reminder that some hypotheses may be too simple.
My guess at how one would formalize this is, when you're comparing hypotheses explaining something in a given domain (such as "behavior of things being thrown", or "health of a human body"), there is a level of complexity inherent to the domain. A hypothesis that's so simple as to fall below that level is too simple - it doesn't have enough bits to express what's happening within the domain. The further below the complexity threshold it is, the more likely it is to be falsified by new evidence. In contrast, hypotheses above the domain complexity level are all capable of explaining the domain fully; however, the more complex a hypothesis, the more likely it is to be at least partially wrong.
This gives us the following takes:
- Occam's razor: for hypotheses above the domain complexity threshold, the least complex one is most likely to be true.
- Hickam's dictum: your hypothesis is way below the domain complexity threshold - which you didn't notice, because you don't appreciate how complex the domain is in the first place.
Reconciliation:
- The closer a hypothesis is to the domain complexity level, the more likely it is to correctly explain new evidence. The best hypothesis matches the complexity of the domain. Above it, hypotheses gain superfluous parts, which are either redundant (unlikely), or wrong (very likely). Below it, hypotheses are always wrong - they're too simple to account for all possible predictions, so new evidence will eventually falsify them. The tricky part is - even though we both postulate hypotheses and define their domain, we tend to hand-wave the latter a lot, so in some cases (like medicine) we may not realize that our hypotheses are too simple.
This is an interesting way of thinking about it, but then it becomes essential to determine what the 'domain complexity level' is for any domain, and the possibility of unending argument about what that level is will almost certainly destroy any value that either the dictum or the razor has.
But it can be hard to know whether a concept will make a difference in the future. String theory, for example, may provide some testable hypotheses at some point. Theory building is important, and original thoughts may have unpredicted consequences some way down the road.
I like Sagan, but he didn't make these rules for a post-truth world, unfortunately. There were certainly well-funded lies in his day, but right now we are seeing massive investment, easily rivalling a large tech corporation or a Disney media empire, in creating "news" orgs whose specific goal is to lie, muddy waters, sow discord, and otherwise mislead the public, and which are seen as independent bodies of verification.
It’s much more difficult now to find real facts, which is actually insane given that real facts should be available with the click of a button these days.
I think the increase in spending on spreading/amplifying certain kinds of information is because there's a greater depth and breadth of information available now that people can use, and are using, to form their own opinions.
Influencing people was less expensive when media/news/information/etc was relatively limited.
Yeah, but what Sagan is trying to give is a framework for forming opinions based on reality. While it’s a good start, it’s missing imperative steps like “if Fox News is your independent verification, it’s probably a lie”.
As stated, we are in a post-truth world, and applying a Sagan-era framework for forming opinions with factual justification is much more difficult.
H. R. McMaster credits Putin's people with that concept. Until recently, propaganda was promoting your position. Russia came up with the breakthrough of just promoting extremists of all stripes to reduce the credibility of all news. This is a form of tactical misdirection. "To sow confusion and reap inaction", as Willie Sutton, the bank robber and prison escapee, put it.
It wasn't really feasible at scale before the Internet, because it takes a large number of anonymous sources to make it go.
This is correct; falsifiability was originally conceived by Karl Popper. I would also add that while Popper had the best of intentions and good ideas in creating falsifiability theory, it is not generally accepted today by people who study philosophy of science. The consensus, which I agree with, is that it doesn't reflect how science actually works and it eliminates many things that we do consider science. I mean many people may disagree with string theory, but it's hard for me to accept that it's science at all. Evolution also didn't pass Mr. Popper's tests, but he eventually recanted, not by changing his views on how evolution fits with his theories but just because his friends convinced him that it was "really important."
Popper made contributions to science in that he was one of the main thought leaders that began the field of philosophy of science. That said, I don't think falsifiability by itself is a very good criterion for science or truth.
I don't think the Wikipedia article on this is very good. The Encyclopedia of Philosophy has a much better overview of his theory and where it currently stands.
> I mean many people may disagree with string theory, but it's hard for me to accept that it's science at all.
Science is a method, not a thing. I think most scientists, including those looking into string theory, would agree that the hypothesis is not science. It's a hypothesis. Maybe not even that, depending on whether or not you think it's falsifiable. But either way, it's an intriguing idea that scientists are looking at. That's a legitimate part of the process as well.
I always cringe when I hear evolution brought up as some invented method or mere developmental happenstance. It's a cold, unavoidable consequence of variation, selection, and inheritance, in the most general way possible.
The universe evolves, life just found a way to make it quicker. Then we invented sexual dimorphism to make several selections within a single generation. Our human intelligence allows us to evolve ideas at an even faster rate.
The effects of evolution are in the realm of science, but evolution itself is the superstructure that science, and everything else, has "evolved" in. It's time.
I don't agree it's the simplest. Creationism is actually simpler and that's why it's appealing to so many people. To be clear, I don't support Creationism, I'm just counter arguing.
I suppose it may seem simpler because we tend to anthropomorphize god, so the "god simply snapped his fingers and everything came into existence" angle doesn't sound too complicated.
But then you start asking questions like where did god come from and why did he do what he did and the simplicity starts to fall apart.
Really, the moment the words "there's an omnipotent being that..." are uttered, simplicity goes out of the window.
Am I misunderstanding something about falsifiability? In principle, evolution can be falsified by observing no incremental development of species over time.
Falsifiability just means there's some possible way for it not to be true?
He thought it was untestable because it was based on a series of unique, one-time, unrepeatable events in the distant past.
Parts of the theory, like Mendelian genetics, are testable, but on the whole... evolution happens so slowly that we can't observe it or test it in a lab.
To me it seems like, with Popper, inquiry into how species came to be can never fit into science. It's a much narrower view than what we commonly understand as science today.
Of course evolution is falsifiable, as are the big bang theory and the iron core theory.
Falsifying a theory does not require reproducing it in a lab. It means that the theory predicts something that you can actually observe, or fail to observe and thus disprove it.
For instance, we can falsify the theory like this: grow bacteria and subject them to some poisonous chemical. The theory predicts that, after many generations, if the colony survives, it should become resistant to that poison.
Here is an example of a theory that is not falsifiable: after they die, people's souls go to a place that is impossible to observe when you are alive.
Falsifiability is about "impossible to observe", not about repeatability.
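The bacteria prediction above can even be sketched as a toy simulation. All the numbers here (population size, mutation rate, survival probability) are made up for illustration:

```python
import random

random.seed(0)
POP = 10_000

# Toy model: a rare resistance mutation already exists in the
# population before the poison is ever applied.
population = ["resistant" if random.random() < 0.01 else "susceptible"
              for _ in range(POP)]

for _ in range(5):  # five rounds of poison exposure
    # The poison kills most susceptible cells; resistant cells survive.
    survivors = [cell for cell in population
                 if cell == "resistant" or random.random() < 0.1]
    # Survivors repopulate back to the original size.
    population = [random.choice(survivors) for _ in range(POP)]

fraction_resistant = population.count("resistant") / POP
print(fraction_resistant)  # typically very close to 1.0
```

If the colony reliably failed to become resistant under conditions like these, that observation would count against the theory, which is exactly what makes it falsifiable.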
This is just a factually incorrect claim by someone who clearly has no background in biology.
Evolution is frequently observed in the lab, particularly in organisms that reproduce quickly and prolifically, as most microorganisms do. You might want to google "antibiotic resistance", which is literal evolution creating a public health hazard.
Evolution has withstood well over a century of experiments. It's the inevitable consequence of the facts of inheritable mutations and natural selection. Biology doesn't even make sense without it.
Perhaps you meant to say "science doesn't work by p-hacking and data falsification". The great thing about science is that it involves a process of building models and hypotheses, and then running experiments to produce results that support the models and hypotheses. To the extent that p-hacking and data falsification produce results that are not correct, in the longer term they will have a difficult time distorting our understanding of the underlying phenomena. Short-term, certainly, they can be distracting, but we care about reproducibility because scientists are constantly trying to reproduce (and build on) older results. Results that cannot be reproduced do not contribute to science.
I think you're right to identify two distinct things in the air, but it's the other guy doing the mixing. What science is and the extent to which that is approximated in practice are indeed two things. Neither is inherently more real or true than the other, or necessarily more right, depending on the context.
But I take this particular context to be about Sagan's comments on distinguishing truth from "baloney" via science by expressing principles that best express the scientific ideal. I don't take that to be "corrected" by appealing to science as practiced, which is where the mixing up of the two comes in.
I think there's a subtlety which isn't implicit in what you say. There is the ideal of science - which I think there's a case to be made that it's not something that can be fixed, but constantly evolves - and the practice.
But I think there's also a separate distinction between good faith and successful enough attempts to do science, and people just gaming the system. The fact that there's plenty of the latter must be acknowledged (in fact, I think the people who discovered this current set of issues and made a big deal of it are from the same group of academics as the ones abusing this loophole, and also, it's likely to be a perennial problem), but the former also exists and is not the same as the abstract ideal you bring up.
I'm not sure I would characterize Sagan's system here as specifically best expressing the scientific ideal. I think it's a great list to think carefully about to help with clear thinking, but I'm not sure applying the label of 'science' to all these points is the right way to think about them, although I'm not sure it isn't either.
I've been wondering if falsifiability is misunderstood, based on what a few people have commented about it. The alternative version is only that if a system is claimed to have made specific predictions that proved true, then it should be falsifiable on that basis. This was Popper's criticism of Marxists claiming Marxism was scientific: they would constantly claim that Marxism had predicted what already happened, but it's bogus to claim this in retrospect; they repeatedly made such claims only after the occurrences had occurred. Is this the same as demanding every theory be falsifiable?
Sean Carroll talks pretty positively about string theory. He paints the picture that although the popular view is that it's not falsifiable (or not yet), and therefore somewhere between very suspicious and junk, actual theoretical physicists are much more positive about it.
Popper did have the idea of combating communism's claim of being a science or scientific materialism. And that's a pretty noble goal in my mind. His heart was in the right place.
I don't think Popper was going for that soft falsifiability. But if you modify his theory to do that, say it has to make some predictions and only require that it be falsifiable on the basis of those predictions, you let in a lot of stuff that obviously isn't science.
To take an obvious example using exactly what Popper was trying to oppose: the current Chinese communist party's claim that capitalism will eventually transition into socialism once a certain level of development is achieved is a prediction, and it will be falsifiable later. Clearly it is still not science.
But even if we give Popper the falsifiability angle and imagine there is some workable version of falsifiability, I still don't think his theory works; it's just not a good reflection of day-to-day science and scientists.
> To take an obvious example using exactly what Popper was trying to oppose: the current Chinese communist party's claim that capitalism will eventually transition into socialism once a certain level of development is achieved is a prediction, and it will be falsifiable later. Clearly it is still not science.
I think this is the point on this issue, right? It can't count as falsifiable later, or still not science, unless they describe what they mean by "socialism" specifically enough in advance - which I think is a pretty nebulous constraint? Not sure about it. Definitely, there's a lot of (reasonable IMO) theoretical controversy about what 'welfare capitalism' has to do with proper socialist ideas, even though it's usually given the label 'socialist'.
I've also been wondering things like what working scientists do, and what this thing is that we call science generally - non research related stuff like teaching, or using science to do better engineering, and what the connection is between the two.
Is there still a place for any variation of falsifiability?
Bonus cheeky request: do you have any recommendations on modern philosophy of science?
Feyerabend is the top dog right now. I wouldn't recommend investing too much in him; he basically thinks there is no demarcation, and voodoo and Einstein are equally valid.
I accept Quine, who I admit isn't much better; he thinks it's all about creating a coherent worldview. I think he's onto something, but he's missing truth, which I think is a problem: we want to think science gives us truths about the world, or at least we want a way to get to truth.
There's another view that says science starts with a set of untested assumptions, which I haven't gotten around to reading much about.
I like some of what I think I understand about Feyerabend's ideas, but I didn't get on with his actual books. I expected something ethnographic-y or observation based, but only found incredibly abstract stuff.
I listened to a few interviews with Liam Kofi Bright, and his ideas about what truth is are pretty interesting. The perspective of 'giving us truths' or 'getting to the truth', is unsatisfying to me.
I think there has to be a core of some kind of 'predict what will happen when we do something, then see how close we got, tweak it in response to these observations, then repeat'.
How can we disprove the hypothesis “If we can’t disprove it - it isn’t science.”?
The scientific metaphysic relies on so many declarative/prescriptive statements which are themselves exempt from the criteria for science and are thus self-defeating on their own terms.
It is so peculiar when scientists are so dogmatic about science.
Are the formal sciences (logic/mathematics/computer science) not science? The testability/falsifiability criterion certainly excludes them from being sciences.
> How can we disprove the hypothesis “If we can’t disprove it - it isn’t science.”?
That statement isn't science. It's a definition. It's philosophy of science. It's the briefest summary of Karl Popper's definition of the scientific method. According to him, science can never be proven, only disproven.
In this context, most of computer science is more a form of applied mathematics.
Of course there are different ways to look at science, like making a distinction between analytical (or empirical) science, and synthetic science; the science that makes stuff, rather than analysing it. Not sure if that's really a good distinction; the latter is really technology, isn't it?
There is such a thing as computer science, but the majority of what gets called that is really engineering, not science. People often get those two things confused because they have a fair bit of overlap in the Venn diagram, but they are two different things.
Math is itself indeed not science. It is the language of science. It follows different rules than empirical sciences. But note that word "empirical" there; Popper was really only talking about empirical science, and according to him, that was the only real science. You could argue that there are non-empirical sciences.
Another problem with Popper is probably that outside of physics and chemistry, there are a lot of less exact sciences where predictions and refutations of a theory are never that clear cut. Like his issues with the theory of evolution.
Ultimately, I guess science is also simply "getting to stuff that works by trial and error".
How much engineering could we do without Mathematics? How much commerce?
I don't see it as exclusionary. You won't find many scientists in doubt about the fact that everything they do is built upon Logic and Mathematics, in addition to observation.
But don't we need a word to group fields that try to systematically describe, understand, and make predictions about the physical world? (Rather than seeking to explore and characterise idealised logical constructs?). What would you suggest?
You may not see it as exclusionary but many people do. Just look at the comments!
It's precisely the grouping I am talking about.
If you group science in such a way so that logic/mathematics/computer science falls outside the group then isn't that an erroneous grouping?
Isn't that a silly definition?
True and False are idealized logical constructs. It's the idea; and the idealization of the notion that there is a difference between Truth and Falsehood. Or if you want to get biblical - there is a difference between Right and Wrong.
We need a grouping to make it clear that some fields produce theories and others produce theorems.
We need theory-producers to be more humble and provisional in their statements. We need theory-producers to forever remain open for their theories to be falsified or refined (whilst not being paralysed by doubt about theories that have stood the test of time). In other words, we need a slightly different culture.
But we also need a way to rebut someone who says "OK, but can you prove we're not living in a perfect simulation of reality with a fabricated history that was created yesterday?". In science, the rebuttal is "No, I can't prove that; science depends on falsification rather than proof. Can you suggest a way I could falsify it? If not, then I'm going to get on with my work, because it doesn't make a difference to my field either way."
We who? Don't "we" also need a grouping to make it even clearer that some fields can't produce any falsifiable theories if other fields don't produce at least some unfalsifiable theorems? A terra firma of sorts.
It's like a dependency graph. Or something.
Your insistence on "making a difference" seems to echo the sentiment of many pragmatists:
It is astonishing to see how many philosophical disputes collapse into insignificance the moment you subject them to this simple test of tracing a concrete consequence. There can be no difference anywhere that doesn’t make a difference elsewhere – no difference in abstract truth that doesn’t express itself in a difference in concrete fact and in conduct consequent upon that fact, imposed on somebody, somehow, somewhere, and somewhen. The whole function of philosophy ought to be to find out what definite difference it will make to you and me, at definite instants of our life, if this world-formula or that world-formula be the true one. --William James
Does falsifiability make any difference? If something is only falsifiable in principle (i.e. in theory), but not in practice, then is it really falsifiable? On pragmatism, it's not a difference that makes any practical difference. And yet you insist on differentiating. Why?
Is "All humans are mortal." falsifiable or unfalsifiable?
It sure is falsifiable in theory, but unfalsifiable in practice. Any living human is potentially immortal until they actually die.
Any running process is potentially non-halting, until it actually halts.
If falsifiability doesn't make a difference in practice (and it doesn't!) then I guess we can all get on with whatever scientific discipline we are busy practicing.
So, I'm going to carry on my life knowing at least one unfalsifiable scientific truth: the theorem known as The Halting Problem.
It's not even wrong, because it's right.
Anybody who insists the Halting Problem is falsifiable (even in principle) is welcome to solve it in principle.
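The "potentially non-halting until it actually halts" point can be made concrete with a small sketch (my own illustration, not from the thread; the Collatz process and function names are assumptions): a step-bounded observer can conclusively confirm that a process halts, but a "didn't halt yet" verdict is never final, since raising the budget may flip it.

```python
def collatz_from(n):
    """A process whose halting we observe: yields successive Collatz
    values and halts (StopIteration) upon reaching 1."""
    while n != 1:
        yield n
        n = 3 * n + 1 if n % 2 else n // 2

def observed_to_halt(proc, max_steps):
    """Step a process at most max_steps times.
    True  -> conclusive: it halted within the budget.
    False -> inconclusive: it might halt on step max_steps + 1."""
    for _ in range(max_steps):
        try:
            next(proc)
        except StopIteration:
            return True
    return False

# The same process is "non-halting" under a small budget and "halting"
# under a larger one -- only the positive verdict is ever final.
print(observed_to_halt(collatz_from(27), 50))    # False
print(observed_to_halt(collatz_from(27), 200))   # True
```

The asymmetry is the whole point: "it halts" is verifiable by observation, while "it never halts" can only be proven, never observed.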
> Don't "we" also need a grouping to make it even clearer that some fields can't produce any falsifiable theories if other fields don't produce any unfalsifiable theorems?
Sure. And I suspect a subset of pure mathematicians would want terminology to make clear that they produce theorems out of intellectual curiosity rather than because they have any regard for whether those theorems can be applied by other fields. Fortunately we can categorize things in multiple ways. I'm open to suggestions on the semantics, but something more widely understood and less clunky than my own theorem/theory-producers would be good! Perhaps "Natural Sciences" or "Empirical Sciences" might be more specific terms for fields that produce theories, if you like.
I differentiate simply because it seems possible to do so. And as I said, because it's worth considering whether different processes and cultures are useful. I'm intrigued as to why you object so strongly.
I am afraid my intellect isn't quite up to the application of scientific principles to the philosophy of science itself this morning. I'll have to think harder about whether that's even a valid thing to do.
I don't think you've shown that falsifiability makes no difference in practice. The fact that it's possible to come up with some borderline or problematic examples (which themselves aren't terribly practical) doesn't mean it's not a useful criterion for a scientific theory. Falsifiability is a valuable filter for ideas that the natural sciences are not able to speak to. String theory has been criticized as unfalsifiable. I think a good string theorist would accept that it's a serious accusation that requires an answer.
To be honest I'm quite happy to say "All humans are mortal" is not a well-stated scientific theory. "Human lifespan is limited to 180 years" is better, as it may one day be falsified.
It's pointless to speak of usefulness without specifying a utility function.
It is just as possible to differentiate as it is to integrate.
If it is determined a priori that unfalsifiable propositions are not useful, then knowing the result of the Halting Problem is not useful. Isn't that silly?
I strongly object to categorizations which discriminate against valid science (knowledge? truth? understanding? reasoning? Useful facts?). Is all.
The human process of trying to understand reality is continuous, not discrete, so it's silly to reason about it in terms of discrete categories. It necessarily leads to confusion; and the sort of gatekeeping and self-justification Carl Sagan is guilty of.
Science benefits much more from being defined too broadly; than being defined too narrowly.
I'd rather be too permissive then ignore the junk; than be too restrictive and never even encounter good ideas which were erroneously discarded as junk.
I don't think I said unfalsifiable propositions are not useful! A proven theorem is sacred!
Of course, until the laws of thermodynamics are revised we can provisionally say that all programs actually running in nature will indeed stop at some point, no matter what is proven about idealized Turing machines.
And before I'm misunderstood: there are many ways the laws of thermodynamics can be tested. This prediction, unfortunately, cannot be tested. But it is a predicted consequence of the simplest known theory that explains all sorts of observations about thermodynamics. Which is the limit of what the natural sciences aim to do here. Provisional truth based on observation vs. proven truth based on stated axioms.
I am explicitly not claiming that one truth is to be valued more than the other. I honestly don't think that. Merely noting, again, that the distinction is there to be made. I may be "discriminating between", but I'm certainly not "discriminating against".
It may or may not be a continuum. Curious researchers on both sides can certainly be informed and inspired by each other's work, and can use the same techniques and tools. But even if only as an academic exercise, can't we describe these two modes of discovery? And isn't it worth being clear about their respective limits?
You seem to be missing the point. Ignoring for a second that the laws of thermodynamics themselves are based upon a handful of idealizations (the idealization of "thermal equilibrium", the idealization of "perfectly isolated system", the idealization of "perfect zero")...the laws of nature are encoded as formalisms/equations. Symbolic computations.
If you have no formalisms you can't compute any consequences - there is nothing to test. You have no science.
So treating Mathematics and science as "separate disciplines", even though they function as one symbiotic whole - that's the conceptual error.
Interesting. I actually would group cars and engines separately. I'm always fascinated to get a peek at a different way of looking at things, thank you.
> How can we disprove the hypothesis “If we can’t disprove it - it isn’t science.”?
You can't. That's an axiom. Welcome to Philosophy of Science.
Science, at bottom, has some axioms.
1) Cause and effect
That the same causes always create the same effects is an axiom. We assume that God or the Devil doesn't change all the rules every other Thursday. If there is a being who arbitrarily shifts the rules, science loses a lot of its predictive power. Science will adjust to that, but it makes science much less useful.
2) Continuity
The rules "today" are the same as the rules "yesterday" are the same as the rules "tomorrow". The rules "here" are the same as the rules "there".
This is a little spicier, as we do try to test that the rules haven't changed. We try to test whether or not the fundamental constants have shifted with time, for example. We try to see whether things behave the same in our galaxy as in other galaxies.
In fact, practically everything which defines "science" is about the ability to predict and quantify.
A) Side note: "math" is NOT "science". Math, while certainly falsifiable, is neither quantitative nor predictive.
This, in fact, has provoked quite a bit of discussion: See: The Unreasonable Effectiveness of Mathematics in the Natural Sciences
Is it the "philosophy of science"?
What is now called "science" was once called "natural philosophy"?
Maybe it's the science of science?
Maybe it's the philosophy of philosophy?
Maybe it's the science of philosophy?
Maybe it's the philosophy of science?
Maybe it's all the same under naturalism?
Studying science (itself a natural process) using our computational understanding of what a "process" is and does sure fits the Oxford definition of "science".
Firstly there's no need whatsoever to be rude even if I'm wrong. It doesn't help the discussion and isn't nice. You also don't know anything whatsoever about what I do and don't know about maths or the definitions of words.
Secondly, prospective theorems are absolutely falsifiable. Since a theorem is a statement that has been proven to be true, yes, theorems are unfalsifiable by definition - they have already passed that test. That doesn't really generalise to any sort of meaningful statement about the falsifiability of maths. Saying theorems are unfalsifiable is equivalent to saying "True statements can't be proven false". Well, yes.[1]
i.e. if I say Sean Hunter's theorem is that if you take a triangle with arbitrary sides a, b, c and angle theta opposite a, then
a^2 = b^2 + c^2 - 42 b c cos(theta)
that statement is absolutely falsifiable (and false), which you can establish with basic geometry and trig[2]. When you demonstrate it not to be true it is not a theorem, so I was wrong to call it that. That is a demonstration of how maths is falsifiable.
[1] Even so it's often possible to make progress via proof by contradiction - showing that if this theorem were not true something else which we know to be true would be false. But in most of my maths books proving all of the theorems is the norm, so they are for sure falsifiable while you are trying to establish whether or not they are theorems.
[2] Drop an altitude from one of the angles at b and c and then use Pythagoras and a bunch of cancelling. You will prove that a^2 = b^2 + c^2 - 2 b c cos(theta), of course. My statement is only true if a is the hypotenuse of a right triangle, meaning cos(theta) is zero and my incorrect coefficient doesn't matter.
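The falsification can also be run numerically (a sketch of my own; the triangle values are arbitrary): construct a triangle from coordinates, measure the side opposite the angle directly, and compare it against the predictions with coefficient 2 and with coefficient 42. A single counterexample settles it.

```python
import math

def predicted_a(b, c, theta, coeff):
    """Side a predicted by a^2 = b^2 + c^2 - coeff*b*c*cos(theta)."""
    return math.sqrt(b * b + c * c - coeff * b * c * math.cos(theta))

def measured_a(b, c, theta):
    """Place the angle theta between sides b and c at the origin and
    measure the opposite side directly from coordinates."""
    bx, by = b, 0.0
    cx, cy = c * math.cos(theta), c * math.sin(theta)
    return math.hypot(bx - cx, by - cy)

b, c, theta = 3.0, 5.0, 2 * math.pi / 3   # a 120-degree angle
a = measured_a(b, c, theta)               # 7.0 by direct construction

print(round(predicted_a(b, c, theta, 2), 6))   # 7.0 -> survives the test
print(predicted_a(b, c, theta, 42))            # ~18.68 -> falsified
```

One disagreeing measurement is enough to demote the claim from "theorem" to "false conjecture", which is exactly the point being made above.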
I was merely attempting to reciprocate/mirror your tone. You are the one (self-)identifying it as "rude".
I have some idea about what you do and don't know about definition and definability (in general) given the words you've said so far and the way you've used them.
Prospective theorems are not theorems until a proof is presented. At which point they become retrospective theorems.
All that "falsification" and counter-examples prove is that the so-called "proof" of a "theorem" wasn't. If you have indeed provided a counter-example that's a proof of negation which raises questions: what was wrong with the original "proof" of the theorem? Since proofs are programs - there must have been a bug in the proof. Better type-check that proof/program...
The presence of a counter-example to Sean Hunter's "theorem" simply demonstrates that it's not a theorem. It's a misnomer. Theorems are exactly those Mathematical statements for which no proof of negation exists.
You seem to be presupposing some particular kind of mathematics. I am talking about all possible Mathematics in general; of which the particular Mathematics you are currently using is just one particular instance. A historical and cultural coincidence.
There's a Mathematical paradigm in which proof-by-contradiction is a valid proof method, e.g. mathematics founded upon classical logic.
And there's a Mathematical paradigm in which proof-by-contradiction is not a valid proof method, e.g. mathematics founded upon intuitionistic logic. This is basically what we call Computer Science. It has fewer axioms than Classical Mathematics (e.g. the axiom of choice is severely restricted) and so it's a much stronger proof-system. You could even say Intuitionistic Mathematics (which is basically CS) is "more foundational" (it is much closer to the foundations?) than Mathematics.
The fact that you are admitting proof-by-contradiction in your methodology tells me about your choice of foundations, but so what? There's a foundation which axiomatically pre-supposes choice; and a foundation which doesn't.
And in the foundations where choice is not axiomatic "proof" by contradiction is not a valid proof.
The reasoning goes something like this:
1. Choice implies excluded middle.
2. Excluded middle implies all propositions are either true or false.
3. Excluded middle implies that proof by contradiction is valid.
Rejecting 1 results in the rejection of 2 and 3 also.
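The chain above is Diaconescu's theorem, and it can be sketched in a proof assistant (assuming Lean 4, whose core logic is intuitionistic): excluded middle is not primitive there, it is derived from the choice axiom, and proof by contradiction is in turn derived from excluded middle.

```lean
-- Lean 4: the core logic is intuitionistic; classical principles are derived.
#check @Classical.choice           -- the axiom of choice (the premise of step 1)
#check @Classical.em               -- p ∨ ¬p, derived from choice (steps 1 → 2)
#check @Classical.byContradiction  -- (¬p → False) → p, derived from em (2 → 3)

-- Using the derived principle:
example (p : Prop) : p ∨ ¬p := Classical.em p
```

Reject the choice axiom and the derivation chain for the other two principles disappears with it, exactly as the list says.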
> Prospective theorems are not theorems until a proof is presented. At which point they become retrospective theorems.
...
> The presence of a counter-example to Sean Hunter's "theorem" simply demonstrates that it's not a theorem. It's a misnomer.
That is what I said. I showed a mathematical statement and I showed how you could falsify it. Since you said "mathematics is not falsifiable" I have shown your statement is not true. Do you see why?
You were the one who decided that the distinction between conjectures and theorems is important. I have now shown two examples of mathematics that was falsified.
Unless you're trying to say neither me nor Euler was a mathematician in which case we can agree about me but not about Euler.
Your failure to understand what I am saying is abysmal.
>That is what I said. I showed a mathematical statement and I showed how you could falsify it.
>Since you said "mathematics is not falsifiable" I have shown your statement is not true.
You have taken it upon yourself to interpret "Mathematics is not falsifiable" as broadly as needed in order to confirm your own biases; and then proceeded to attack a strawman instead of a steelman. That's the lack of charity...
>You were the one who decided that the distinction between conjectures and theorems is important.
And you were the one who decided that it isn't; so you falsely equated them.
What you have demonstrated is the falsification of the statement "X is a theorem"; not the falsification of "mathematics is not falsifiable." - a hasty generalization fallacy.
Which doesn't demonstrate anything of import or relevance whatsoever. Obviously a non-theorem is not a theorem. This is no more interesting than demonstrating that non-Mathematics is not Mathematics.
This in no way diminishes or falsifies my own claim that theorems are unfalsifiable! And neither is Mathematics.
Because if you do falsify it - then it was never a theorem. By definition. Theorems are true, not false. A false theorem is a contradiction in terms. A misconception. An error in reasoning.
Maybe Euler wasn't a Mathematician either. Who knows? Those sort of questions are undecidable.
"[Aristotle] claims that each science studies a unified genus, but denies that there is a single genus for all beings". You're applying tautology without really understanding the construction of your own question.
The study of being qua being; or science qua science; or
mathematics qua mathematics; or X qua X for any X.
Metaphysics.
Or as it is commonly referred to in computer science: function self-application. One example of which being the Y combinator (as in the name of this very forum).
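Function self-application fits in a few lines (a sketch in Python; strictly this is the Z combinator, the eta-expanded call-by-value variant of Y needed in an eagerly evaluated language):

```python
# Z combinator: f applied to a self-applied term, eta-expanded so the
# self-application doesn't loop forever under Python's eager evaluation.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Recursion without a named self-reference: "self" is supplied by Z.
fact = Z(lambda self: lambda n: 1 if n == 0 else n * self(n - 1))

print(fact(5))  # 120
```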
I am applying a tautology in exactly the mathematical sense of a tautology; and I understand my construction just fine.
Had you been more charitable you would’ve addressed my argument; not your strawman of my argument.
I'm charitable by trying to teach you with examples.
> Or as it is commonly referred to in computer science: function self-application
No, that's recursion, not metaphysics.
Is computer science the same as programming? No. Computer science is the study of programming, not programming itself. You learn this in your first year of CS.
If you're smart enough to _really_ understand what a Y combinator is, this should be a piece of cake.
I am struggling to spot the charity in all your condescension.
metaphysics
/ˌmɛtəˈfɪzɪks/
noun
the branch of philosophy that deals with the first principles of things, including abstract concepts such as being, knowing, identity, time, and space.
First principles? Like logical/mathematical axioms? Sprinkle abstraction. Identity? f(x) = x ?
Time? Space? Spacetime? Minkowski space?
On a fuzzy match that sounds ludicrously similar to the sort of stuff the formal sciences concern themselves with. Almost as if the distinction between science and philosophy is non-existent, given the demarcation problem.
What backs this assertion? The literal definition of the term.
Are you the kid who thinks he's edgy by saying in philosophy class: "It depends on the meaning of the word 'X'" every time someone tries to explain X to you?
And you hit one of the main objections to the theory of falsifiability as the criterion of science. There are also other, more serious ones, like the obvious fact that actual science doesn't seem to actually work this way. The idea is more to explain observations in a coherent way rather than to be falsifiable, for example. One example is the big bang theory being proposed by a Catholic astronomer who didn't like the then-prevailing idea that the universe did not have a beginning or end, because it went against his religious beliefs. Or Kepler looking for planets at locations in accord with musical harmonies because he thought it was consistent with the existence of a God.
> One example is the big bang theory being proposed by a Catholic astronomer who didn't like the then-prevailing idea that the universe did not have a beginning or end, because it went against his religious beliefs.
That’s simply not true.
Fr. Lemaître developed the theory to explain observed red shifts of galaxies (deriving Hubble's Law prior to Hubble). He felt his theory (and science in general) had no connection or contradiction to his faith.
> One example is the big bang theory being proposed by a Catholic astronomer who didn't like the then-prevailing idea that the universe did not have a beginning or end, because it went against his religious beliefs.
This actually came from the skeptics [0]. They were unwilling to believe a Catholic priest proposing a scientific theory too similar to his religious beliefs, about God creating the whole universe in an instant.
When the cosmic background radiation was discovered in 1964, the Big Bang was accepted by (mostly) everyone.
The reason that falsifiability is a core requirement of science is because if there is no way a proposition can possibly be falsified, then there is no way to objectively assess whether or not it is true.
This is not to say that the proposition is false. It's possible that things can be unfalsifiable and true nonetheless, but those things would still exist outside the range of the scientific method (at least until/unless our understanding of reality expands enough to devise a test). That's an intentional trade-off, in order to gain greater confidence in the truth of the things we can test.
Personally, I am wary of the notion of “actual science”, since science is not a well-defined term. The demarcation problem isn’t solved; and philosophers like Feyerabend, in his book “Against Method”, suggest that science is more of an anarchic enterprise than any particular set of methods.
Take any criterion and apply it too strictly, and there is some scientific discovery/progress in history which violates the rules and wouldn’t pass for “science” under that definition…
Take any given methodological approach - and you will always find counter examples in scientific history.
Sure, but I mean, it's pretty hard to call flat-earthers or proponents of voodoo unscientific if we have to admit that we haven't solved the demarcation problem. Also, more importantly for Popper, he wanted to oppose communists' ideas of "scientific materialism."
There does seem to be this thing that good scientists are doing. Popper did seem to touch on some good aspects of it, like the willingness to be proven wrong.
I think maybe that's the part Popper got right: maybe science is about an unbiased search for knowledge with no other agenda than a genuine curiosity. And maybe that's why demarcation is so hard; it's hard to tell a person's motives.
I dunno, just throwing stuff out there... Still, I mean, at least we have a test that communists obviously fail, which should make Popper happy.
I mean, the non-schizo proponents of flat earth do approach it with scepticism. It's just that their required level of proof is unreasonable: any experiment, no matter how genuinely designed, gets dismissed as flawed. Science works because the detractors don't have the energy to waste decades in the academic apparatus the way true curiosity does.
> … he wanted to oppose communists' ideas of "scientific materialism."
> … that’s the part Popper got right, maybe science is about an unbiased search…
> …at least we have a test that communists obviously fail…
i could be misunderstanding what you’re implying and if so apologies, but Popper wasn’t some anti-communist nutbag, in fact, if he “wanted to oppose” communism, that would have been fundamentally counter to his ideal of keeping things “unbiased”
Popper was very open about how much he admired Marx; he even tended to agree with Marx’s analysis of capitalism. Where he disagreed was with the claims that 1) we were destined to be servants of the wealthy and that 2) violent revolution was the only way out. He was quite clear that the state should absolutely be heavy-handed to protect the lower classes from the wealthy’s constant tendencies to abuse the poor. Again, he agreed with much of Marx’s writing, but where Marx thought it would require violent revolution, Popper believed we could use other methods such as “social engineering” to counter the rich. He was also concerned that so many people agreed about violent revolution being the only way out. He wrote about this admiration for Marx quite a bit:
> …a grandiose philosophic system, comparable or even superior to the holistic systems of Plato and Hegel. Marx was the last of the great holistic system builders.
and
> [Marx] made an honest attempt to apply rational methods to the most urgent problems of social life… His sincerity in his search for truth and his intellectual honesty distinguish him…
Popper was concerned that under unrestrained capitalism:
> ..the economically strong is free to bully one who is economically weak, and to rob him of his freedom,… Those who possess a surplus of food can force those who are starving into a ‘freely’ accepted servitude.”
Philosophy Now sums it up well, “Throughout his scrutiny of Marx, Popper treads a thin line between admiration and apprehension.” [0]
again, apologies if i misinterpreted what you were implying, just wanted to clarify that Popper wasn’t some kind of McCarthy-style rabid anti-communist or whatever. he just thought we could “social engineer” our way away from psychotic nationalism and unchecked capitalism rather than requiring full-blown revolution.
He wrote an autobiography - as a young man he was a communist because he believed in scientific materialism. He later recanted after some of his friends were shot and killed by the police.
Popper said he noticed that the scientific materialism proposed by communists, and Freud's theories, were very different from the lecture he heard by Einstein - Einstein looked more like science.
Communism, whatever anyone thinks about it, is obviously not science. They claimed to be science at first and proposed scientific materialism as the future.
Today even communists seem to have recanted this idea instead preferring to criticize capitalism and present themselves as the only alternative. We all know today it's not science.
I don't want to debate politics, only to say Communism was never science; it's politics. Popper noticed that quickly, and by his own autobiography it was one of the impetuses for his ideas.
He also dedicated his book The Poverty of Historicism to the countless men and women who lost their lives to fascism and Communism and their false belief in historical destiny.
The Open Society and Its Enemies also contains a long critique of Marx and the idea that history follows certain laws that must play out a certain way.
The problem with all ideas is always their reification. Computers may be deterministic, but humans aren't. The same software/idea produces wildly different understandings and behaviour in different humans.
What seems like a great idea in theory inevitably has to cope with the (mis)understanding, (mis)interpretation, and (mis)application of said idea by the mass population.
Because they have worked over time, empirically, as opposed to a lot of the woo-woo stuff proposed by religion, spirituality, metaphysics, the mentally ill, etc., which can never be disproved but which really doesn't have any value in those areas where we apply science, like technology and attempting to understand natural processes.
You seem to be speaking from a place of greatly diminished self-awareness.
Notice how you are constantly appealing to abstract unobservables to make your claims. No shame in that - all science does it. Quantities, numbers, fields, processes etc. etc. etc.
That is precisely the metaphysical woo woo you are busy criticising. Formalism is all about turning that woo-woo into well-defined concepts.
What's a "process"? Show me one.
Only way I know how is to give you more metaphysical woo woo.
The central dogma of computational trinitarianism holds that Logic, Languages, and Categories are but three manifestations of one divine notion of computation. There is no preferred route to enlightenment: each aspect provides insights that comprise the experience of computation in our lives.
If you want to believe in fairy tales then enjoy them. I prefer materialism. We will never agree on this. You can't prove a God exists, so I simply don't care about the topic other than how it affects civilization negatively by promoting magical thinking and religious fanaticism/intolerance. I tolerate people who are religious, I don't wish them any harm; the opposite is quite untrue for a large proportion of the religious world for atheists/"infidels".
The deep irony in valuing matter more than valuing values is never wasted on me.
You still haven't figured out that "matter" is yet another man-made concept? An abstract idea. A collective noun. Itself a (very useful) "fairy tale".
A substance which possesses "rest mass" in a universe where nothing is ever at rest sure sounds like magical thinking (to me). And what do you even make of point-like particles in physics? They have no volume - so they are not matter. And what about antimatter?
You haven't yet come to the self-realization that you are committing the reification fallacy by promoting a man-made concept to a totalizing/generalizing/all-encompassing ontological status.
Matter is your God. It's the abstraction you worship.
You are right in saying that we'll never agree; for if I were to agree with you I too would be wrong.
Science clearly works because humans have a genuine curiosity to find truth. While every person is flawed and prone to error, the curiosity points in the same direction, rising above the noise.
We've only recently had to even discuss this after the deployment of large scale truth poisoning making the errors non-uniform.
Deutsch also heavily argues for explanations as first-class existence on the level of matter. For him, predictions are simply a practical concern. I still fondly remember his dialogue between Socrates and Apollo in dream, where he convincingly makes the point that even there he can find true knowledge about the world.
Deutsch really argues for Popper's "conjecture and refutation" model of science (or as he renamed it, "conjecture and criticism") and one of the great points he makes is that both those processes can be done entirely in the mind. Physical evidence may be necessary for some kinds of criticism, but sometimes reason alone can produce useful criticism.
I also like his point (in conversation with Sean Carroll) that we have rigorous proof that the notion of explanation can't be formalized, yet explanation is at the core of the scientific endeavor anyway.
"Hard to vary" is, I think, some sort of isomorphism of "easy to refute". I found it difficult when reading Deutsch to pin down exactly what made something hard to vary.
Let's say we have 16 distinct phenomena, all of them currently explained by a rooted tree T of explanations of depth 4. Say we come across 8 new phenomena. Suddenly you can't rejig or shake your tree T arbitrarily. You have to preserve the original structure of T, or at least its relationships, while constructing a new tree R such that it not only explains the old 16 phenomena but also the new 8. We might construct a new tree R such that tree T is a subtree of it. This new tree covers more ground and has more depth. In this sense, new explanations are constrained by whatever came before, and they are tasked with explaining more on top of that. There are very few ways to do it. In physics, they require the more general law to be valid at all time scales, energy scales, length scales, velocity scales and so on.
Not all useful tools for thinking must be considered science. I think it's quite fair to say the Drake equation isn't "science".
However, I think Popper's "conjecture and refutation" is a more interesting model for thinking about the increase of knowledge than falsifiability is.
Through that lens, you might say the Drake equation was a conjecture intended to stimulate refutations, which might lead to further productive theories.
Popper's essays on Parmenides are really interesting on this. Popper frames Parmenides' philosophy as essentially a conjecture that "change is an illusion", and the history of western science since then as various "research programmes" in response to that. I'm summarising terribly, but I think he illustrates well the value of conjectures that aren't necessarily "scientific" in stimulating thought.
The Drake equation is falsifiable. All you have to do is plug the values (which are all things that could technically be measured) into the equation, then count the number of civilizations in the Milky Way, and see if that's equal to what the equation predicted. It is possible that it's not, so the equation is falsifiable.
That it's impractical for us to actually do the experiment just means that we cannot perform the experiment and so it doesn't help us in practical terms. But it is a thing that could be done, strictly speaking.
That said, it's also a thought experiment, not a scientific postulate, and isn't meant to be "scientific".
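The "plug the values in" step is trivial to sketch; what the sketch makes vivid is why the equation structures conversation rather than predicts. Every number below is a made-up assumption for illustration, and the spread between defensible inputs is exactly the point.

```python
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """N = R* . fp . ne . fl . fi . fc . L: the expected number of
    communicative civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Two sets of hypothetical inputs, both arguable a priori:
optimistic  = drake(R_star=3, f_p=1.0, n_e=0.2, f_l=1.0,  f_i=1.0,   f_c=0.2,  L=1e9)
pessimistic = drake(R_star=1, f_p=0.2, n_e=0.1, f_l=0.01, f_i=0.001, f_c=0.01, L=100)

print(optimistic)    # ~1.2e8 civilizations
print(pessimistic)   # ~2e-7 -- effectively "we are alone"
```

Some fifteen orders of magnitude separate the two outputs, which is why the interesting work is in testing the individual terms, not the product.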
Certainly useful for thinking, but its lack of falsifiability, predictive power or validation is indeed why there's so much controversy around it and why it remains a hypothesised model rather than widely accepted scientific fact. The best you can do is test the assumptions of the model itself, or its constituent terms.
What are you implying? Scientists can't make stars either, but they can make models describing stars and make new observations about these stars that test the validity of these models. Plenty of climate models have been disproven by this exact way. What has been remarkably consistent is that man-made greenhouse gases are causing climate change. For example, models predict that man-made global warming would cause the stratosphere to cool, which is exactly what they measured. By contrast, increased solar activity would have also heated up the stratosphere.
Why?
Climate science works with quantitative models and makes concrete predictions.
So if the predictions don't come true, the model is false.
E.g. this random blog I found googling compares IPCC predictions with actual outcomes: https://johncarlosbaez.wordpress.com/2012/03/27/the-1990-ipc...
[the link is just supposed to show how climate science is falsifiable. I have no idea whether the numbers in this specific source are trustworthy]
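A toy version of that kind of comparison (all numbers below are invented for illustration, not real IPCC figures): the model survives only while every observation stays inside its predicted corridor, and one point outside falsifies it.

```python
def consistent(rate, band, series):
    """Check a warming prediction of `rate` degrees/decade, with an
    uncertainty of +/- `band`, against decadal observed anomalies.
    One observation outside the corridor falsifies the model."""
    return all(
        (rate - band) * t <= y <= (rate + band) * t
        for t, y in enumerate(series)
    )

prediction = (0.2, 0.1)   # hypothetical: 0.2 C/decade, +/- 0.1

print(consistent(*prediction, [0.0, 0.18, 0.41, 0.55]))  # True: survives
print(consistent(*prediction, [0.0, 0.02, 0.03, 0.04]))  # False: falsified
```

Real model evaluation is vastly more involved (ensembles, scenarios, baselines), but the logical shape is this: a quantitative corridor plus observations that could have fallen outside it.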
More comparisons here[0]. Importantly, note that models even from the '80s made fairly accurate predictions. That's the real kicker: models accurately predict observable outcomes 20-40 years later (and have only improved since then).
I'm not sure why there's so much contention about climate science given that we've been observing this trend for nearly 50 years now. A scientist makes a prediction about something 50 years out, and 50 years later the prediction is shown to be accurate. It feels weird to not trust a source with such a good and observable track record...
The thing is - there are so many predictions made, and a lot that don't come true, that it's hard to find which are the "real" ones.
I also read somewhere that there are several different scenarios available for the IPCC climate models and a lot of the more "scary" predictions are based on the least likely model.
Finally - we saw the limitations of modelling in COVID in the UK, with adverse consequences.
Yes, because you could simulate a small-scale atmosphere and notice how changing the composition of the "atmosphere" affects average temperature or other features. John von Neumann actually predicted climate change from first principles and described it as potentially more debilitating than nuclear weapons.
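As a toy version of that idea, here's the standard zero-dimensional energy-balance model from textbooks (not a real climate model; the emissivity values below are illustrative). Adding greenhouse gases lowers the atmosphere's effective emissivity, and the model predicts a higher equilibrium surface temperature as a result, which is a falsifiable claim:

```python
# Zero-dimensional energy-balance model: absorbed solar flux balances
# emitted thermal flux,  S(1 - a)/4 = eps * sigma * T^4.
# Illustrative toy only; real climate models are vastly more complex.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0         # solar constant, W m^-2
ALBEDO = 0.30      # planetary albedo

def equilibrium_temp(emissivity: float) -> float:
    """Equilibrium surface temperature (K) for a given effective emissivity.

    More greenhouse gas -> more outgoing longwave radiation trapped ->
    lower effective emissivity -> warmer surface.
    """
    absorbed = S * (1 - ALBEDO) / 4
    return (absorbed / (emissivity * SIGMA)) ** 0.25

t_baseline = equilibrium_temp(0.612)  # roughly reproduces Earth's ~288 K mean
t_enhanced = equilibrium_temp(0.600)  # slightly stronger greenhouse effect

print(f"baseline: {t_baseline:.1f} K")
print(f"enhanced: {t_enhanced:.1f} K")
```

If lowering the emissivity did not raise the computed temperature, the underlying radiative physics would be in trouble, which is exactly the kind of small-scale test the comment describes.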
Well that's more evidence for it, not an attempt to falsify.
If we want to falsify, we should ask questions like: Was there a time in the past when the atmosphere contained even more CO2 than it did now? What was the temp and did life survive?
What evidence or experiment would convince you the theory is false? Or is that impossible?
>If we want to falsify, we should ask questions like: Was there a time in the past when the atmosphere contained even more CO2 than it did now? What was the temp and did life survive?
Do you believe that question isn't already asked and answered? Just going off the top of my head, I understand historical comparisons to be routinely covered in your typical course, or even presentation, on the history of climate change. If you believe this counts, then your answer to your own question about the falsifiability of climate change is yes.
I also don't think that's the only kind of example - I think the routine things talked about on a day to day basis, including year by year and decade by decade temperature predictions would qualify, and these data are quite frequently discussed and quite visible.
I also also don't think the distinction between "evidence in favor" and "attempts to falsify" makes a whole lot of sense. These are two different sides of the same coin, and evidence in favor of something is the same as surviving a test of falsifiability.
It's pretty disingenuous to cut the graph at the beginning of the ice age cycles. This tricks people into believing that the current level of CO2 has never been seen before.
So I'd ask what parts of climate change are falsifiable?
- Is the climate changing? Of course, but it has also been changing for all of earth's history.
- Are humans and other organisms contributing? Yes but organisms have contributed to climate change for all of earth's history. Blue-green algae put oxygen into the atmosphere, plants sequestered huge amounts of carbon.
- Are we at a historically unprecedented level of carbon? No, the earth was previously at something like 200x the atmospheric CO2.
- Will 2x the atmospheric CO2 cause mass death? Probably not, we know life thrived in the past on a much warmer and more CO2 rich globe. Many more humans die of cold rather than heat.
Climate change is not a fundamental theory. It is derived from thermodynamics, the Navier-Stokes equations, quantum mechanics, and probably quite a few assumptions about what is negligible. Falsify any of those and you falsify climate change.
Depends on how valid you consider statistics and climate modelling. We're seeing increasing numbers of events that are incredibly statistically unlikely in a climate unaffected by anthropogenic climate change. So - can we say with absolute certainty that X was caused by climate change? That's hard. But we can say that X was incredibly, unbelievably unlikely without climate change, and that's usually been a sufficient standard for accepting a hypothesis.
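A back-of-the-envelope version of that argument (the record length and the clustering are illustrative assumptions, not a real analysis): under a no-trend null hypothesis, every ordering of annual temperature ranks is equally likely, so the warmest years all landing at the end of the record would be astronomically unlikely by chance.

```python
from math import comb

# Illustrative setup: a 140-year temperature record whose 10 warmest
# years all fall in the most recent decade. Under a stationary climate,
# every placement of the top-10 ranks among the 140 years is equally
# likely, so the chance they all land in the last 10 slots is 1/C(140,10).
n_years, n_recent = 140, 10
p = 1 / comb(n_years, n_recent)

print(f"p = {p:.2e}")  # astronomically small under the no-trend null
```

This is the shape of the falsification logic: the "no anthropogenic warming" hypothesis makes a statistical prediction, and the observed clustering is wildly incompatible with it.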
Certainly. E.g., set up a model greenhouse system and show that increasing CO2 doesn't increase the temperature.
Or demonstrate that convective phenomena aren't affected by changing temperature.
"Climate change" comprises a causal chain of well-studied and testable hypotheses, including disproving alternate hypotheses like "solar radiation is increasing".
The problem is climate change has become a motte and bailey (https://en.wikipedia.org/wiki/Motte-and-bailey_fallacy). If we do come up with evidence that rising temps aren't all that correlated to CO2, then we just change the target to 'yeah but it causes ocean acidification' or 'natural disasters are increasing' or 'yeah but it kills coral reefs'.
At this point, I'm not sure there is any possible experiment or evidence that would actually change most people's mind on climate change. It has become a cause to support rather than a theory.
If I understand this paper correctly, it compares the temperature and CO2 levels on geologic time scales:
> Atmospheric CO2 concentration is correlated weakly but negatively with linearly-detrended T proxies over the last 425 million years.
> To estimate the integrity of temperature-proxy data, δ^18O values were averaged into bins of 2.5 million years (My)
It makes sense. Over long time scales, other factors are more important.
Does the same apply to shorter time scales, like thousands, hundreds, or tens of years?
> At this point, I'm not sure there is any possible experiment or evidence that would actually change most people's mind on climate change.
One possible way would be to continue releasing CO2 at current or larger rates. If temperatures and the climate then revert to their state from 200 or 300 years ago, that would mean this is part of a natural cycle and humans had nothing to do with it.
> At this point, I'm not sure there is any possible experiment or evidence that would actually change most people's mind on climate change. It has become a cause to support rather than a theory.
Aren't you (your way of thinking about this) part of the problem here, or even the whole problem?
You've reduced climate change to politics - certainly there are many people (myself included) who don't agree that it is political.
There are people that say "we need to think about the climate" purely because they are liberal, but they can be ignored, and their views don't change the reality.
Yes. Make predictions. Wait. Observe. Measure difference between predicted outcome and actual outcome.
We've had scientists making predictions since the '80s, so we have some long-term observations. I'll save you the read: historical models are fairly accurate and have become more accurate over time.
I'm not sure why anyone considers this a debate. It's not a debate. It's willingness to look at data or not.
Predictions have been made, and we can check if they were correct, but maybe not at the moment.
Some past short-term predictions, like predictions for 2020 made in 1990, may or may not be correct. We can look those up. Many other predictions are for what will happen in the coming decades, and we won't be able to assess with certainty whether they were correct until those future dates.