As I understand it, AI ethical principles relate to the development of a superintelligence. Talking about unethical use of narrow AI is like talking about unethical use of any other tool: there is no significant difference.
The "true" AI ethical question is related to ensuring that the team that develops the AI is aware of AI alignment efforts and has a "security mindset" (meaning: don't just try stuff and repair the damage if something happens - ensure in advance, with mathematical proof, that a damaging thing won't happen). This is important because in a catastrophic superintelligent AI scenario, the damage is irreparable (e.g. all humanity dies in 12 hours).
For an introduction to these topics, Life 3.0 by Max Tegmark is a good resource, as is Superintelligence by Nick Bostrom.
> AI ethical principles relate to the development of a superintelligence
This is not true; there are real-world ethical considerations right now with existing tech, and in fact there have been ever since the most rudimentary AI was applied in commerce or government.
The "true" AI ethical question is related to ensuring that the team that develops the AI is aware of AI alignment efforts and has a "security mindset" (meaning: don't just try stuff and repair the damage if something happens - ensure in advance, with mathematical proof, that a damaging thing won't happen). This is important because in a catastrophic superintelligent AI scenario, the damage is irreparable (e.g. all humanity dies in 12 hours).
For a good intro to these topics, Life 3.0 by Max Tegmark is a good resource. Superintelligence by Nick Bostrom as well.
For a shorter read, see this blog post: https://waitbutwhy.com/2015/01/artificial-intelligence-revol...
For general information about AI ethical principles, see FHI's website; they have publications there that you can also read: https://www.fhi.ox.ac.uk/governance-ai-program/