
I know it's off topic but I'd love to learn more about how the modeling of COVID went wrong. You can pm me if you want to take the discussion elsewhere.


The models tended to overshoot the number of deaths by huge amounts. For example, Imperial College London estimated 40m deaths in 2020 instead of the 2m that occurred.

https://www.wsws.org/en/articles/2020/03/18/covi-m18.html

https://www.nature.com/articles/d41586-020-01003-6

https://www.imperial.ac.uk/news/196496/coronavirus-pandemic-...

The model authors have since argued that the model was sound, but that people responded to the pandemic by changing the way we live. That's the OP's point: feedback cycles and corrections exist, and they make modeling dynamic systems very difficult.


This seems unsurprising, and to me it's the correct way to model a situation like this.

"Lots of people could die if you keep behaving as you currently are"

"Okay, lets behave differently"

And then fewer people die.

Trying to frame it as "they modelled it wrong" is nonsense. What even is the point of predictions like this if not to change behaviour? Predicting outcomes based on everyone taking precautions, and not telling people what might happen if they don't, would be dangerous and irresponsible.


> Trying to frame it as "they modelled it wrong" is nonsense.

It's not that. It's that when the system you model responds to the existence of your model, it becomes anti-inductive. It's no longer like weather, but like the stock market[0]. Your model suddenly can't predict the system anymore[2]; at best it can determine the system's operating envelope by estimating the degree to which it can react to the existence of the model.
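
A toy way to see it: run the same epidemic model twice, once ignoring the forecast and once with contacts falling as people react to it. All numbers below are made up; this is a sketch of the feedback, not anyone's actual model.

    # Toy sketch (all numbers assumed): a bare-bones SIR model where the
    # contact rate falls once people react to the projected toll.
    def simulate(react_to_forecast, days=365, N=67_000_000):
        beta0, gamma, ifr = 0.3, 0.1, 0.009  # assumed contact, recovery, fatality rates
        S, I = N - 100.0, 100.0
        deaths = 0.0
        for _ in range(days):
            # Feedback: perceived risk (current prevalence) suppresses contacts.
            beta = beta0 / (1 + 50 * I / N) if react_to_forecast else beta0
            new_inf = beta * S * I / N
            new_rec = gamma * I
            S, I = S - new_inf, I + new_inf - new_rec
            deaths += ifr * new_rec
        return int(deaths)

    print("no response to forecast:  ", simulate(False))
    print("with response to forecast:", simulate(True))

The second run comes in far lower even though the disease parameters are identical; the "error" is entirely the behavioral response to the projection.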

--

[0] - I use the term anti-inductive per LessWrong nomenclature[1], but I've also been reading "Sapiens" by Yuval Noah Harari, and there he uses terms "first order chaotic" for systems like weather, and "second order chaotic" for systems like the stock market.

[1] - Introduced in https://www.lesswrong.com/posts/h24JGbmweNpWZfBkM/markets-ar....

[2] - I think it becomes uncomputable in Turing sense, but I'm not smart enough to reduce this to the Halting Problem.


This is also known as the Lucas critique, dating from 1976:

https://en.wikipedia.org/wiki/Lucas_critique

and also https://en.wikipedia.org/wiki/Campbell%27s_law

https://en.wikipedia.org/wiki/Goodhart%27s_law

Edit: I think this is not the first time the good people at LessWrong have dug up a well-known idea and given it a new name. Good thing, too; it gives this important concept more attention. Too often we forget how many people have dealt with the problem of modeling complex systems in the past. And while we cannot read everything, it's often a good idea to at least glance at where they failed!

Too often I read/review some new "revolutionary" paper based on the idea that hey, we can model this process (involving people) like XYZ from physics, where this stuff works great! Surely this is better than the plebeian approaches in the literature! And then, to the shock of all involved, it doesn't work great...

Also: https://xkcd.com/793/


Fun fact: Asimov thought about this. Quoting the relevant part of the Wikipedia article on the Foundation series (https://en.wikipedia.org/wiki/Foundation_series): "One key feature of Seldon's theory, which has proved influential in real-world social science,[3] is the uncertainty principle: if a population gains knowledge of its predicted behavior, its self-aware collective actions become unpredictable."


The issue is that the example nostromo gave (40m) was not intended to predict what would actually happen. It was based on a worst-case, left-unchecked scenario (useful for establishing an upper bound), and is therefore irrelevant with respect to the system responding to the model.
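
For a sense of scale, an upper bound of that order falls out of a back-of-envelope calculation; the attack rate and IFR below are rough assumptions of mine, not Imperial's actual parameters.

    # Back-of-envelope only; attack rate and IFR are assumed round numbers.
    world_pop = 7.8e9
    attack_rate = 0.6   # share infected if behaviour never changes (assumed)
    ifr = 0.009         # infection fatality ratio (assumed)
    print(f"{world_pop * attack_rate * ifr / 1e6:.0f} million deaths if left unchecked")
    # -> ~42 million; it only makes sense as a "nothing changes" bound.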


Also, many early models were based on SARS and MERS, because we had no comparable illnesses, and those were worst-case respiratory diseases.


You're missing that they modeled all of those scenarios; "do nothing" was just one of their models.

And even their best case scenarios overshot the mark -- and by a lot.

This isn't to criticize modeling -- it's only to point out how hard it is to get right.


You claimed:

> The models tended to overshoot the number of deaths by huge amounts. For example, Imperial College London estimated 40m deaths in 2020 instead of the 2m that occurred.

The very article you cited points out that the 40m figure was based on a "left unchecked" scenario. It was not an attempt to predict the actual number of deaths that would occur. Claiming that this is indicative of overshooting because the actual number of deaths was 2m is completely wrong.


But they never learn either: the ICL REACT studies are still getting it spectacularly wrong. Four weeks ago they claimed cases were rising in the UK and that R was above 1.

Even a cursory glance at the actual data, even the data available at the time, shows they were completely and utterly wrong.
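
The "cursory glance" is roughly this: take the week-over-week change in reported cases and convert the growth rate into an R estimate via an assumed generation interval. The case counts below are hypothetical, just to show the arithmetic.

    import math

    # Hypothetical weekly case totals, purely for illustration.
    cases_prev_week, cases_this_week = 210_000, 175_000
    r = math.log(cases_this_week / cases_prev_week) / 7  # daily exponential growth rate
    generation_interval = 5.0                             # days, assumed
    R = math.exp(r * generation_interval)                 # rough R estimate
    print(f"daily growth r = {r:+.3f}, R ≈ {R:.2f}")      # falling cases -> R below 1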


> The model authors have since argued that the data was correct, but people responded to the pandemic by changing the way we live. That's OP point: that feedback cycles and corrections exist and they make modeling dynamic systems very difficult.

Were they actually so naïve that their model did not allow for the possibility that human beings change their behavior in fear of death by pandemic?


Assuming present trends continue, even when things are clearly heading for terrible outcomes, is the basis for most doomsday predictions.


Quanta had a decent write-up of challenges faced by some researchers about a month ago: https://www.quantamagazine.org/the-hard-lessons-of-modeling-...

Of course there are many ways that things went wrong, and not everyone made the same mistakes.


I would like to read about that too.


Seconded!



