Arguably, LLMs - or whatever systems succeed them - are only useful if they are not AGI. Given the evidence already collected about how willing humans are to make these systems agentic, we pretty much have to worry about the possibility of an AGI using us instead, even if there's some other logical barrier to recursive self-improvement ("hard takeoff") scenarios.

