
As I understand it, AI ethical principles relate to the development of a superintelligence. Talking about unethical usage of narrow AI is like talking about the unethical usage of any other tool - there is no significant difference.

The "true" AI ethical question is related to ensuring that the team that develops the AI is aware of AI alignment efforts and has a "security mindset" (meaning: don't just try stuff and repair the damage if something happens - ensure in advance, with mathematical proof, that a damaging thing won't happen). This is important because in a catastrophic superintelligent AI scenario, the damage is irreparable (e.g. all humanity dies in 12 hours).

For a good introduction to these topics, see Life 3.0 by Max Tegmark, and Superintelligence by Nick Bostrom.

For a shorter read, see this blog post: https://waitbutwhy.com/2015/01/artificial-intelligence-revol...

For general information about AI ethical principles, see FHI's website, they have publications there that you could also read: https://www.fhi.ox.ac.uk/governance-ai-program/



> AI ethical principles relate to the development of a superintelligence

This is not true; there are real-world ethical considerations right now with existing tech. In fact, there have been since the most rudimentary AI was applied in commerce or government.


Superintelligence is an interesting distraction from real world ethical issues.


Including real world Internet ethical issues, of which there are already plenty.


> don't just try stuff and repair the damage if something happens - ensure in advance, with mathematical proof, that a damaging thing won't happen

> otherwise humanity dies in 12 hours

Well, it's been a nice ride :)


Just finished Life 3.0 - very good overview of ethics and possible futures.



