This paper highlights a paradox I’ve noticed too: the more people are made aware of misinformation risks, the more skeptical they become of everything — yet at the same time they lean harder on outlets they still trust.
I’ve been working on something along those lines: a platform where every post carries a credibility score. Instead of likes or shares, users support or refute posts with sources, and credibility updates propagate across posts, users, and publishers in real time. The idea is to build a platform better suited to finding the truth than popularity-based, engagement-driven feeds.
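To make the mechanics concrete, here’s a toy sketch of the update rule in Python. It’s heavily simplified: the names, weights, and the moving-average propagation are my illustrative assumptions, not the production logic.

```python
from dataclasses import dataclass

# Toy model (illustrative only): a sourced support/refute vote nudges the
# post's credibility, weighted by the voter's own credibility, and the
# post's new score then pulls its author's and publisher's scores toward
# it via an exponential moving average.

ALPHA = 0.1   # assumed weight: how strongly one sourced vote moves a post
BETA = 0.05   # assumed weight: how strongly a post moves user/publisher scores

@dataclass
class Entity:
    name: str
    credibility: float = 0.5  # start neutral, bounded to [0, 1]

@dataclass
class Post:
    author: Entity
    publisher: Entity
    credibility: float = 0.5

def vote(post: Post, voter: Entity, supports: bool) -> None:
    """Apply one sourced vote and propagate the change."""
    direction = 1.0 if supports else -1.0
    post.credibility += ALPHA * voter.credibility * direction
    post.credibility = min(1.0, max(0.0, post.credibility))
    for entity in (post.author, post.publisher):
        entity.credibility += BETA * (post.credibility - entity.credibility)

# A high-credibility user refutes a post: the post's score drops, and its
# author's and publisher's scores drift down with it.
alice = Entity("alice", credibility=0.9)
outlet = Entity("example-outlet", credibility=0.8)
bob = Entity("bob")
post = Post(author=bob, publisher=outlet)
vote(post, alice, supports=False)
print(post.credibility, bob.credibility, outlet.credibility)
# -> 0.41 0.4955 0.7805
```

In this sketch, votes from higher-credibility users carry more weight, which is one way to make the scores self-reinforcing; the real weighting and propagation are more involved.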
It’s here if anyone wants to take a look: https://noblenews.io
I’d be especially curious how HN readers think a system like this could be gamed or biased, and whether surfacing credibility scores actually builds trust or just shifts the problem somewhere else.