
> you can't say for sure, from where we're standing now.

We can already see how LLMs can be substantially worse - they can cosplay human thoughts and sentiment in a way that wasn’t previously possible.

So much of this debate seems focused on this conception of Skynet-style murderous AI - or at least manipulative and scheming HAL 9000 types. But an order of magnitude greater risk has already arrived just from scaling existing harms.

Phishing schemes and pig butchering scams have destroyed countless lives. They’re now easier and more scalable than ever. As are fake news and disinformation campaigns.

Several companies are productising AI girlfriends with predatory pricing models, capitalising on the human desire for connection and intimacy in the way slot machines and sports betting monetise our desire for a better life. That's new.

It may not be an order of magnitude worse for everyone yet, but for certain vulnerable groups, that future has already arrived.
