
You're misunderstanding the AI use cases. Take propaganda, for instance. A human expert can craft much better flat earth propaganda posts than any AI can. The difference is that a human expert cannot engage in 10 million simultaneous relationships with humans on Twitter, sustained for several years, in which it is periodically but very subtly suggested or implied that the Earth is actually flat. From the point of view of the human being targeted, what they're going to experience is a bunch of online friendships that are genuinely fulfilling and rewarding, and over time they'll naturally come to adopt the values of their online friends (including on the specific subject of the shape of the Earth). And doing this costs so little that you could just as easily target tens or hundreds of millions of people concurrently.

This is the type of scenario people are concerned about. I have my doubts about whether it would really work this cleanly, but I'm sure we'll be seeing people try to do this. They've probably already started.


