Hacker News

It creates an incentive to improve moderation automation and reduce moderation costs. One way to do that is to verify users, which would also help with sock-puppet and deliberate-misinformation problems.


If you are concerned about the chilling effect of sites arbitrarily moderating user-generated content they find objectionable, just think about how much worse identity verification would be. Here on HN, people regularly create throwaway accounts so they can provide valuable insider accounts of things happening at companies where they work; that would never happen if they had to prove their identities first.

In fact, Twitter already verifies the identities of some accounts—that’s where the blue checks come from—and many of them are among the most prolific misinformation peddlers. I believe Twitter also requires new users to register with a phone number, Google+ had a real-name policy, and Facebook still has one, and none of that has done a thing to stem the flood of misinformation online.


Sure, it could happen. Identity verification needn’t be public.

Facebook et al. have no real barrier against fake accounts.


Automated moderation can’t possibly solve that problem. How do you automatically moderate libel?


It’s a problem Facebook is working on. Presumably they rely on signals like user reporting, but yes, it’s not easy. Whoever figures it out will make a lot of money.
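To make the "signals like user reporting" idea concrete, here is a minimal sketch of report-weighted triage. Everything here—names, the Laplace smoothing, the threshold—is hypothetical illustration, not anything Facebook actually does: the idea is to weight each report by the reporter's track record, so brigading by accounts whose reports are rarely upheld counts for little, and to escalate to human review once the weighted total crosses a threshold.

```python
from dataclasses import dataclass

@dataclass
class Reporter:
    upheld: int = 0   # past reports that moderators upheld
    total: int = 0    # total past reports filed

    def credibility(self) -> float:
        # Laplace-smoothed accuracy: a brand-new reporter starts near 0.5,
        # a reporter with 0 upheld out of 10 sinks toward zero weight.
        return (self.upheld + 1) / (self.total + 2)

def flag_score(reporters: list[Reporter]) -> float:
    # Each report on a post contributes its reporter's credibility.
    return sum(r.credibility() for r in reporters)

def should_escalate(reporters: list[Reporter], threshold: float = 2.0) -> bool:
    # Hand the post to a human moderator once weighted reports pile up.
    return flag_score(reporters) >= threshold
```

For example, two reporters with 9-of-10 upheld histories plus one newcomer clear the threshold (0.833 + 0.833 + 0.5 ≈ 2.17), while ten reporters who have never had a report upheld would not. This is why "just count reports" fails and why the hard part is the signal quality, not the arithmetic.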




