Hi. I work in the investment field; the firm I'm at invests in AI companies from time to time. Without presuming to know investees' ethical situations better than they do, I'd like to be in a position at least to recommend best practices, or, at the more activist end of the spectrum, to require that investees acknowledge agreement with our ethical principles around AI. Before starting that discussion within the firm I want to educate myself, since we first need to figure out what our principles are. What links and advice can you share for our reference?
For this purpose please interpret "AI" extremely broadly. Unfortunately I can't give specifics about the type of investments we make.
The "true" AI ethical question is related to ensuring that the team that develops the AI is aware of AI alignment efforts and has a "security mindset" (meaning: don't just try stuff and repair the damage if something happens - ensure in advance, with mathematical proof, that a damaging thing won't happen). This is important because in a catastrophic superintelligent AI scenario, the damage is irreparable (e.g. all humanity dies in 12 hours).
For a good introduction to these topics, see Life 3.0 by Max Tegmark, and Superintelligence by Nick Bostrom as well.
For a shorter read, see this blog post: https://waitbutwhy.com/2015/01/artificial-intelligence-revol...
For general information about AI ethical principles, see the website of FHI (the Future of Humanity Institute at Oxford); their Governance of AI Program page also has publications you could read: https://www.fhi.ox.ac.uk/governance-ai-program/