
Alignment is about a lot more than simply which answers an AI provides. In the future, when agents are commonplace and AI can act in the physical world, alignment will matter even more, because it will dictate how the AI chooses to accomplish the goals humans set for it. Will it pursue them in a way the human requester does not want and did not anticipate, or in the way any human with common sense would choose?

Moreover, in the not-so-distant future, if an AI is acting totally autonomously, independent of human requests for long periods, weeks or months or longer, and doing good, important things like medical research or environmental restoration, alignment will be incredibly important to ensure every single independent decision it makes is one its designers would have intended.





The problem is you're overloading the word "alignment" with two different meanings.

The first is: does the thing actually work and do what the user wanted, or is it a piece of junk that does something useless or unwanted?

The second is: what the user wants is porn, or drugs, or a way to install apps on their iPhone without Apple's permission, or military support for a fight that may or may not be sympathetic depending on who you are. Does it then do what the user wants, or what someone else wants? Is it a tool that decentralizes power or concentrates it?

Nobody is objecting to the first one.



