
On the data journalism mailing list, we've had a small discussion about how Nate could be replaced by any of the other data-minded journalists...it's not that his work was simple, but Silver himself has said that his analysis is straightforward and the math is accessible...the difference is that Silver gives a damn about context and analysis in a methodological way.

He definitely brings in good writing talent and a possibly unmatched inquisitiveness...but his methods and predictive analysis aren't irreplaceable. And yes, part of the draw is the brand that he's worked tirelessly to build...but this is a brand built on verifying results...i.e. being conclusively right...if the Times were to bring in another blogger who predicts the 2014 Congressional races to a T, and explains his/her methods and shows a real love for it, I'll subscribe to that blog and not give a whit about how many years of blogging they've done.

Contrast this with the irreplaceable Roger Ebert. He was incomparable as a writer, but his brand was built on something very subjective...and thus, once he's gone, it's hard to justify going back to RogerEbert.com, no matter how great the critics who replace him are. Roger's brand is based more on long-built loyalty...Silver's brand is based more on making verifiable hypotheses and being correct...time and time again.



Which mailing list? There seem to be several on this topic.

Also, you're probably right: there are at least three other prominent electoral college forecasters who write well, were as accurate as Nate, and could possibly be contracted to provide the same analysis and commentary: Andrew Tanenbaum [1], Drew Linzer [2], and Sam Wang [3].

Unlike Nate, they don't do forecasting as their primary career, but it's clearly a labor of love for them, and who knows what could be worked out with the right offer. If I were in charge at the NYT I'd be starting up conversations with these and any others doing similar work, stat.

[1]: http://electoral-vote.com/

[2]: http://votamatic.org/

[3]: http://election.princeton.edu/


National Institute for Computer-Assisted Reporting

http://www.ire.org/resource-center/listservs/subscribe-nicar...


Thanks!


I think the added value of any of these forecasters, including Silver, is greatly overstated. When their accuracy is touted, it is usually based on the prediction the day before the election. Is that really useful?

More importantly, you can get the same predictive power by just directly using recent polling data.


>When their accuracy is touted it is usually based on the prediction the day before the election.

That's not true in Nate's case at least. One of his biggest wins was debunking some of the BS "narratives" that pundits tried to spring during the election, long before election day, with "Romney's momentum" in the final weeks being the most memorable.

Linzer also did a postmortem where he looked at, among other things, accuracy and predictive power of the model at different points in the election [1].

>Is that really useful?

An order of magnitude more so than traditional punditry, not least because it's honest about what it can and can't tell you, namely "if the election were held today, this is the most likely outcome". No more, no less.

>More importantly you can get the same predictive power by just directly using recent polling data.

You do realize that's exactly what these guys do, right? The problem is, which polling data do you use? They provide a scientific answer to that question. Instead of cherry picking, they use it all, and weight it based on past accuracy and other factors.

[1]: http://votamatic.org/evaluating-the-forecasting-model/
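The "weight it based on past accuracy" idea can be sketched in a few lines. This is purely illustrative: the pollster names, percentages, and historical error figures below are made up, and real models (Silver's included) use far more factors than this.

```python
# Hypothetical sketch of accuracy-weighted poll aggregation.
# All pollster names and numbers are invented for illustration.

polls = [
    # (pollster, candidate_pct, pollster_past_avg_error_pts)
    ("Pollster A", 51.0, 1.5),
    ("Pollster B", 48.5, 3.0),
    ("Pollster C", 50.2, 2.0),
]

# Weight each poll inversely to its pollster's historical average error,
# so historically accurate pollsters count for more in the aggregate.
weights = [1.0 / err for _, _, err in polls]
aggregate = sum(w * pct for (_, pct, _), w in zip(polls, weights)) / sum(weights)

print(round(aggregate, 2))
```

The point of the argument upthread is exactly whether refinements beyond this kind of simple weighting buy any additional predictive power.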


I didn't communicate my point effectively. Let me try again.

I believe strongly in quantitative analysis of election data. I place zero value in punditry, especially from mainstream media. What I don't believe is that some of these complicated models, Silver's in particular, are meaningfully superior to predicting the election by trivially applying recent polling data, perhaps augmenting it with a simple weighted combination of polls based on recency or sample size.

I take as a given that Silver's model is better than punditry. I am skeptical that it is better than the trivial model which any undergraduate stats student would cook up.

> The problem is, which polling data do you use? They provide a scientific answer to that question. Instead of cherry picking, they use it all, and weight it based on past accuracy and other factors.

What I dispute is whether the "other factors", the secret sauce that allows Silver to give the impression he has a uniquely predictive model, have any real value.

Thanks for the Linzer link. I'll have to take time to read it carefully, but on first glance it again shows one of the things I take issue with: if you are going to claim that a model is accurate, you should be asking, compared to what? How can you justify a complex model if you aren't even going to try to show that it is better than some trivial baseline?


> I take as a given that Silver's model is better than punditry. I am skeptical that it is better than the trivial model which any undergraduate stats student would cook up.

This seems quite easy to test. Has anyone done so yet?


I haven't seen it.


What would the trivial/baseline model be, just a straight average of all state polls for the last X days?

That would be interesting to know, but also easy to find out.


I wouldn't say there is a canonical baseline model but your example would be a reasonable place to start. Weighted average by sample size is a straightforward modification. Just using the most recent reliable poll would also be interesting.

I think most of the extra value you could add would be from analyzing the poll data to get a sense of which polls were unreliable.
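The baseline being proposed is simple enough to write down. A minimal sketch, with made-up poll data for a single state: keep only polls ending within the last X days, then take a sample-size-weighted average.

```python
from datetime import date

# Toy baseline model with invented poll data for one state:
# average only polls from the last `window_days`, weighted by sample size.

polls = [
    # (end_date, sample_size, candidate_pct)
    (date(2012, 11, 1), 800, 49.0),
    (date(2012, 11, 3), 1200, 51.0),
    (date(2012, 10, 15), 600, 47.0),  # older than the window, excluded below
]

as_of = date(2012, 11, 6)
window_days = 14

recent = [(n, pct) for d, n, pct in polls if (as_of - d).days <= window_days]
estimate = sum(n * pct for n, pct in recent) / sum(n for n, _ in recent)

print(round(estimate, 2))
```

Repeating this per state and mapping each estimate to electoral votes gives the trivial model the thread is asking about; the open question is how much a model like Silver's improves on it.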



