I think the article focused on base rates because they're a relatively unusual and legible "trick" for coming up with a forecast, but really they're only one element of a forecast; typically a forecaster will think about many different ways to "attack" a question and synthesize them (somehow!). The choice of denominator for your base rate also matters a great deal and can radically change the answer you get.
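To make the denominator point concrete, here's a toy sketch. All of the numbers are invented for illustration (nothing here comes from the article): the same count of historical events yields wildly different base rates depending on which reference class you divide by.

```python
# Hypothetical figures, purely to show how denominator choice moves a base rate.
events = 12                      # say, 12 incidents in the historical record

# Same numerator, three candidate reference classes (denominators):
per_year        = events / 60    # 60 years of records
per_country     = events / 200   # ~200 countries observed
per_leader_year = events / 900   # ~900 leader-years, say

print(f"per year:        {per_year:.3f}")         # 0.200
print(f"per country:     {per_country:.3f}")      # 0.060
print(f"per leader-year: {per_leader_year:.3f}")  # 0.013
```

Same twelve events, a factor of fifteen between the highest and lowest "base rate" depending on what you decide one trial is.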
The sites that host these forecasting competitions correct for the bias against rare events through what are called "proper scoring rules" -- there's some specific maths to it, but the short version is that you're exponentially rewarded for being a correct contrarian and exponentially punished for being confidently wrong.
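The exact rule varies by site (Good Judgment uses the quadratic Brier score; some other platforms use log-based rules), but a log-score sketch shows the asymmetry described above: the penalty for confident wrongness grows without bound as your probability on the actual outcome approaches zero, while a correct contrarian call scores far better than the hedged crowd.

```python
import math

def log_score(p, outcome):
    """Logarithmic score: ln of the probability you assigned to what
    actually happened. Higher (closer to 0) is better."""
    return math.log(p if outcome else 1 - p)

# Illustrative numbers: crowd says 5% chance, a contrarian says 60%.
# Case 1: the event happens -- the contrarian wins big.
print(log_score(0.05, True))   # about -3.00
print(log_score(0.60, True))   # about -0.51

# Case 2: the event does NOT happen -- the contrarian pays, but modestly.
print(log_score(0.60, False))  # about -0.92
print(log_score(0.05, False))  # about -0.05
```

Try `log_score(0.001, True)`: assigning 0.1% to something that happens costs about -6.9, which is the "exponentially punished" part -- every extra order of magnitude of overconfidence adds a constant chunk of penalty.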
There are limits to that too, of course -- the folks in the article will "only" have made on the order of mid hundreds to low thousands of predictions. At 0.1% odds, a thousand predictions contain roughly one expected event, which is far too few to tell a calibrated forecaster from a lucky one, so roughly speaking you can expect these people to be calibrated at 1% or 0.5% odds but probably not at 0.1%.
Base rates work pretty well, at least for all-cause mortality, hurricane counts per year, and financial markets (over very short time periods). I was using the Good Judgment Project as motivation to practice R programming for a while, until one day I saw that literally EVERY person forecasting the ending value of the Hang Seng index had matched my probabilities. Evidently, everyone was calculating base rates from the same historical market data and entering those results.
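Roughly the kind of calculation everyone was apparently running, sketched here with synthetic returns standing in for real Hang Seng history (the index level and threshold are made up; a real attempt would use actual historical closes):

```python
import random

# Synthetic stand-in for historical weekly returns: if everyone pulls the
# same history, everyone gets the same empirical distribution, hence the
# identical forecasts.
random.seed(0)
weekly_returns = [random.gauss(0.001, 0.02) for _ in range(500)]

current = 18_000   # hypothetical current index level
target  = 18_500   # question: "will the index close above 18,500 next week?"

# Empirical base rate: the fraction of historical weeks whose return,
# applied to today's level, would clear the target.
needed = target / current - 1
p = sum(r > needed for r in weekly_returns) / len(weekly_returns)
print(f"base-rate probability: {p:.2f}")
```

The whole forecast reduces to counting historical weeks above a threshold, which is why independent forecasters using the same data converge on the same number.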