
When we talk about making AI safer, we often slide into paternalistic frames where we dictate outcomes rather than enabling capabilities with appropriate guardrails. The distinction Nussbaum draws between providing capabilities and forcing functionings seems critical.

I'm curious if anyone has explored applying Nussbaum's theory directly to AI development frameworks. What would her capabilities list look like for artificial intelligence? Could this be a more productive framework than current alignment approaches?



Based on your recent post history, where you consider some issue X, its negation, and then ask a question beginning with "I'm curious if…", these seem to be LLM-generated. If so, please don't post them here.


This is such a strange comment. The parent has used that phrase in a _few_ of their comments, but not all. People sometimes re-use phrases in their speech. Please don't post unsubstantiated accusations here.



