Ask HN: Examples of AI ethical principles?
39 points by throwaway40199 on Oct 7, 2018 | hide | past | favorite | 27 comments
Hi. I work in the investment field. The firm I work at invests in AI companies from time to time. Without presuming to know investees' ethical situations better than they do, I would like to be in a position to at least recommend best practices -- or, conceivably, on the more activist end of the spectrum, require that investees acknowledge agreement with our ethical principles around AI. Before beginning a discussion around this within the firm I want to educate myself. We as a firm need to figure out what our principles are. What links and advice can you share for our reference?

For this purpose please interpret "AI" extremely broadly. Unfortunately I can't give specifics about the type of investments we make.



As I understand it, AI ethical principles relate to the development of a superintelligence. Talking about unethical usage of narrow AI is like talking about the unethical usage of any other tool - there is no significant difference.

The "true" AI ethical question is related to ensuring that the team that develops the AI is aware of AI alignment efforts and has a "security mindset" (meaning: don't just try stuff and repair the damage if something happens - ensure in advance, with mathematical proof, that a damaging thing won't happen). This is important because in a catastrophic superintelligent AI scenario, the damage is irreparable (e.g. all humanity dies in 12 hours).

For an intro to these topics, Life 3.0 by Max Tegmark is a good resource. Superintelligence by Nick Bostrom is as well.

For a shorter read, see this blog post: https://waitbutwhy.com/2015/01/artificial-intelligence-revol...

For general information about AI ethical principles, see FHI's website, they have publications there that you could also read: https://www.fhi.ox.ac.uk/governance-ai-program/


> AI ethical principles relate to the development of a superintelligence

This is not true; there are real-world ethical considerations right now with existing tech. In fact, there have been ever since the most rudimentary AI was applied in commerce or government.


Superintelligence is an interesting distraction from real world ethical issues.


Including real world Internet ethical issues, of which there are already plenty.


> don't just try stuff and repair the damage if something happens - ensure in advance, with mathematical proof, that a damaging thing won't happen

> otherwise humanity dies in 12 hours

Well, it's been a nice ride :)


Just finished Life 3.0 - very good overview of ethics and possible futures.


- avoid implicit/explicit biases

- respect privacy of individuals

- avoid personal identification

Overall, don’t do anything with AI that would be ethically wrong to do by other means (for example, with expensive human labor in place of the AI).


To expand on tinkerteller's comments, a lot of ML-based AI is trained on historical data. For example, identifying factors associated with hiring a good employee may pick up collinear features that correlate with, but do not contribute to, being a good employee. Because historically women / people of color / homosexuals were discriminated against, a model that identifies these latent features as marking “bad employees” may carry biases that you do not wish to use for an AI hiring task.
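To make the proxy-feature problem concrete, here is a toy sketch in Python (entirely made-up data and feature names): even after the protected attribute is dropped, a correlated "neutral" feature lets a model reproduce the historical bias.

```python
# Toy illustration (hypothetical data): a seemingly neutral feature can act
# as a proxy for a protected attribute, so a model trained on biased
# historical labels reproduces the bias even with the protected column dropped.

# Each row: (zip_code_group, protected_attribute, historically_hired)
# The zip-code group happens to correlate with the protected attribute.
history = [
    ("A", 0, 1), ("A", 0, 1), ("A", 0, 1), ("A", 1, 0),
    ("B", 1, 0), ("B", 1, 0), ("B", 1, 1), ("B", 0, 1),
]

def hire_rate(rows):
    """Fraction of rows with a positive historical hiring decision."""
    return sum(hired for _, _, hired in rows) / len(rows)

# A model that only sees the zip-code group learns to favor group A.
rate_a = hire_rate([r for r in history if r[0] == "A"])  # 0.75
rate_b = hire_rate([r for r in history if r[0] == "B"])  # 0.5

# But the groups differ mainly in the protected attribute, so the
# "neutral" feature has smuggled the bias back in.
rate_protected = hire_rate([r for r in history if r[1] == 1])    # 0.25
rate_unprotected = hire_rate([r for r in history if r[1] == 0])  # 1.0

print(rate_a, rate_b, rate_protected, rate_unprotected)
```

Dropping the protected column is therefore not enough; you have to check what the remaining features encode.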


I think this list is great.

As AI begins to be applied to problems like driving that affect the real world instead of just information, I’d add:

Ethical use of AI should not create negative externalities - negative impacts on people other than those making the choice to use it.

This includes the example in another post about bias in hiring perpetuated by implicitly biased historical data sets.

It also includes pedestrians dying or being maimed because autonomous vehicle developers want to test their vehicles in real world conditions before they’re truly ready. And some of the proposals I’ve seen that would force pedestrians to change their behavior to accommodate autonomous vehicles or blame pedestrians for their own deaths if hit by a self-driving car while not carrying a special sensor.

An ethical AI would not be able to park itself illegally (snarling traffic because it's double-parked or parked in a bus lane; potentially endangering cyclists who have to ride around it if it's in a bike lane, for instance) until police are sensed in the vicinity — even if the owner directed it to.

AI should not be used to triple Uber’s fleet size in NYC without hiring more drivers: doing so would bring already clogged streets to a standstill, costing billions in lost productivity [0]. (In 2009, when the streets were less congested than today, the externalized cost of driving a car into midtown during rush hour was $160 [1].)

[0] https://www.google.com/amp/s/www.bizjournals.com/newyork/new... [1] http://blogs.reuters.com/felix-salmon/2009/07/03/how-driving...


> As AI begins to be applied to problems like driving that affect the real world instead of just information, I’d add:

> Ethical use of AI should not create negative externalities - negative impacts on people other than those making the choice to use it.

I’d go one step further and say that ethical AI should not create any negative externalities to humans period -- not even those that choose to use it.

An example that can be informative here is different from the traditional AI-driving “trolley problem” (i.e. do nothing and kill five people on the crosswalk, or swerve and kill only the passenger); it relates to a more subtle, quotidian challenge faced in autonomous cars: what happens to backseat driving? How will a car that is driven by an AI treat its passengers ethically and ensure that it never does them any harm? Even a minor discomfort because, say, an acceleration pattern made them uncomfortable is an ethical issue if the human has no recourse or if there is systematic prioritization of other goals over the human’s.

One solution is to make cars sensitive to humans’ feelings while they’re being driven, so that the car’s brain has an awareness that, within the right framework, lets it continuously adjust its behavior to serve its passengers better. Essentially, the aim is to make the car empathic, artificially giving it the type of human intuition that can help avoid unintended outcomes big or small. The more that AI systems in general can be sensitive to our human experience, the better, I believe.

It is very hard, pragmatically speaking, to always balance the need for easy/cheap AND safe solutions in the real world. Clearly defined goals and non-goals, as well as intermediate steps for measuring against them, at least seem to be a good start, since these provide the framework that lets fast feedback mechanisms guide AI to evolve in a way that’s coupled directly with human interest.


> Overall, don’t do anything with AI that would be ethically wrong to do by other means (for example, with expensive human labor in place of the AI)

Not sure I understand what you mean -- can you explain this?


If I understand correctly, he's saying "if it would be unethical if executed by a human, it's unethical if done by a machine".

Depending on where you draw the line here, things like military applications, insurance price calculation, or drawing convincing Photoshops of famous people in the nude could all be in that category.


There's a difference between "unethical to make a human do" and "create an unethical result": e.g. there are ethical benefits from the automation of risky jobs. The obvious one in ML and computers is image recognition of horrific images.

(Granted, there are then additional issues of where and how you deploy that technology.)


This is a great course on the subject, free if you don’t care about the cert at the end https://www.edx.org/course/ethics-and-law-in-data-and-analyt...

I took it because it was a prerequisite for something else, expecting it to be fluff, but it was surprisingly good.

I am in no way affiliated with the lawyers in the video, but they do consulting in this area if you wanted to get serious about audits of your investments.


Sorry I'm posting so much, my silly opinions, but I interviewed with an AI company a while back that I didn't realize was in the military space, and basically flat out told the interviewer, once I read more about the company's projects, that combining AI with military applications is basically the scariest-sounding shit in the world to me, and what are people even thinking with this kind of shit? Yes, it's sexy to investors and warlords and the stuff cartoon villains dream of, but I don't want my kids (if I had any) having to live with weaponized AI flying around testing whether or not to machine-gun things, then ranking the outcome and reinforcing an activity that is obviously always bad. Humans don't behave or make decisions that way. That system is not ever going to be able to relate to the infinitely complex system called emotions, which are highly involved in human behavior. They are probably so complex and influential to behavior for a damn good reason, and I bet it involves our survival up to this point. All the code and unit testing in the world isn't going to teach this technology to be friends with us. It's just code and equations. Even the closest algorithmic proxy would have trouble evaluating us consistently. I feel like that's probably why we have emotions to begin with: they are too hard to solve using partial differential equations. And totally unique to every individual person.

I think we need to leave it that way. I think there's a kind of a misguided excitement about the benefits of technology coming from the tech majors who get hacked far too much to support linking our brains together to experience the misery of a virtualized existence in an advertising experiment.

It's a shame that there are not options in the AI space to do something other than harvest user data, trade stocks, or autonomously murder things.

With all the data scientists in the field, someone should plot out the direction this is going. I'm too opinionated now.


I do not think you should be worrying about superintelligence. If you are investing in AI firms that are going to make products that will be profitable, providing a return on investment, they should focus on making the products, not on making general AI.

(I mean read the book "Superintelligence" if you want, it's an interesting read, and may help you have conversations with certain people, but as someone else here said, it's an "interesting distraction".

There are more and more zealous true believers in the tech world, unfortunately; some of your investees may bring it up themselves!)

There are real ethical problems in systems that learn from data. I'm describing them that way rather than as "AI", because in fact these problems can show up anywhere along the spectrum from deep neural networks all the way down to the simplest linear models. These can arise from biases in the training data or in the learning process.

Others have posted Google's AI principles, which are good. The keyword for researchers working in this area is FAT -- Fairness, Accountability, Transparency. There are conferences [1] in this area that have a lot of good work. You may also want to see the syllabus for Moritz Hardt's course [2].

I would also add -- we're in an unsettling place right now. It's not clear that there are "best practices" that can be widely recommended. Many proposed definitions of fairness are inadequate or mutually contradictory.

[1]: https://fatconference.org/2018/index.html
[1]: http://www.fatml.org/schedule/2018
[2]: https://fairmlclass.github.io/
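To illustrate why proposed fairness definitions can pull in opposite directions, here is a toy Python sketch (made-up predictions and group labels) computing two common group-fairness criteria on the same classifier output. When base rates differ across groups, a classifier generally cannot equalize both at once.

```python
# Two common group-fairness criteria, computed on toy data:
#   Demographic parity: equal positive-prediction rates across groups.
#   Equalized odds (FPR slice here): equal false-positive rates across groups.

# Each row: (group, true_label, predicted_label)
preds = [
    ("g1", 1, 1), ("g1", 1, 1), ("g1", 0, 0), ("g1", 0, 1),
    ("g2", 1, 1), ("g2", 0, 0), ("g2", 0, 0), ("g2", 0, 1),
]

def positive_rate(rows):
    """Fraction of rows predicted positive (demographic-parity quantity)."""
    return sum(p for _, _, p in rows) / len(rows)

def false_positive_rate(rows):
    """Fraction of true negatives predicted positive (equalized-odds quantity)."""
    negative_preds = [p for _, y, p in rows if y == 0]
    return sum(negative_preds) / len(negative_preds)

g1 = [r for r in preds if r[0] == "g1"]
g2 = [r for r in preds if r[0] == "g2"]

# Parity fails: 0.75 vs 0.5 positive-prediction rate.
print(positive_rate(g1), positive_rate(g2))
# False-positive rates also differ: 0.5 vs ~0.33.
print(false_positive_rate(g1), false_positive_rate(g2))
```

Forcing the positive-prediction rates to match here would require changing predictions on negatives, which moves the false-positive rates further apart: the two criteria trade off against each other.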


I would add, as a compromise with the superintelligence true believers, "Concrete Problems in AI Safety" is quite good, and worth reading and thinking about. (It's especially relevant to RL systems that interact with the world.)

https://arxiv.org/pdf/1606.06565v2.pdf


I think this is what you are looking for. http://standards.ieee.org/develop/indconn/ec/auto_sys_form.h...


I think you probably don't need links to establish some basic ethical guidelines, unless you want to model an ethical framework on other people's discussion of why AI has some unique ethical liabilities to consider.

For example, you may not want to ever be the activist investor that took a majority shareholder position in an AI company that said it was a marketing firm, but somehow an engineer accidentally lost control of the pile of code and it taught itself to hack into the world's nuclear silos and simultaneously launch warheads all over the place, creating literal hell on earth because that was the optimal solution it came up with for turning off the lights in the office, and no one knows why...

The self-explaining AI patch that was established as a best-practice safeguard "sort of, umm, told us to shut up when we asked what it was doing and to tell us why..."

Anyway, you get the idea..

I guess perhaps figure out whether you are an activist for or against exterminating all humans as a start.

For example, I have a feeling Bezos might be on the robot team.

I'm actually not sure if there's a company using AI right now for anything other than trying to do kind of antisocial-sounding stuff. There's not much positive stuff. No one's cured any diseases or eradicated poverty, and automation inherently removes the need for people.

Robots don't care if you pay them or eat. They are so cheap and efficient.

And they never sleep!

And they know what everyone is doing all the time and can see you right now...

Just kidding. Maybe?


Transparency. Artificial systems designed to work with and support human activity, should not have hidden agendas. And the reason for being transparent should not be to avoid bad press or fines, but to actually care about the ethics around continuous extreme data collection/processing or automation overkill or any of the other concerns with AI.


As applied to ML specifically, ethical questions will arise! Some considerations here: https://freeandopenmachinelearning.readthedocs.io/en/latest/...


A helpful reminder might be that while AI is chugging along, a lot of new research has emerged in the computer-brain communication interface field, including successful transmission of visual signals from another person participating in the link, and thought-to-speech.

Guess what powers the signal processing that makes that possible?

Machine learning. That can alter your brain's functional integrity and capacity for rational thought.

The last defense. The organ that makes us laugh, cry, worry, think, etc. could potentially be at risk from a stray electromagnetically driven pulse that..

There are more potential issues than there are people thinking about them.

I dunno. Seems kinda like useless tech for any end user purposes, but a really clutch play for the scheming evil robot team.


PS - it sounds like you already have an idea of what to do. I'm pretty sure the tech industry lost control of shit a while ago and people are scared.

At the end of the day it's just money. If it's going to usher in the apocalypse, fcking dump the stock and have your lawyer field emails. Your investors probably have no idea what they've invested in or divested from, let alone what machine learning is beyond whatever they read in the news. Not being invested in something due to fears of where such investments might lead is not a bad decision, or one that assumes a loss at all. Find something else. Right? Or is it far more complicated than that?



For reference, Google’s AI principles, announced in response to the backlash around Project Maven: https://www.blog.google/technology/ai/ai-principles/amp/


Hi, I would like to draw attention to the need for trust in AI, in contrast to ethical principles. Ethical principles are a declaration of intent in a specific social context to a specific set of stakeholders (important people, people like us, not people who are "other"). Trust networks and systems provide affordances to people who are excluded from the debate. Individuals develop trust when the interaction that they have doesn't produce unexpected harm and when they can inspect the behaviour of systems without having to engage (reveal their needs) and expose themselves to the potential for harm. Complex behaviours can be very difficult for individuals to assess so they need a transparent system that provides proxies that they can understand - a good example is commercial flights - you don't know the pilot or ground crew, but the systems of training and qualification allow you to decide that this flight is likely safe. I think that distrust can emerge at a societal scale - see politics for an example - and can be due to generated perceptions.

My point is that a declaration of ethical principles is talking the talk - but creating a trusted system is walking the walk. I believe that we will likely see catastrophic failure of AI unless AI researchers and companies develop the infrastructure of trust. I point to Google Flu Trends as an example of where a system failed; in that case the consequence was not painful, but trust in the Google brand stopped the community from really seeing the failure for a long time. If and when it is discovered that the current generation of oncology and ophthalmic diagnostic algorithms have taken to quietly killing and blinding subsets of the population due to some wrinkle in a deep network, I predict a staggering backlash.

At that point all the ethical declaration and hand wringing in the world isn't going to matter. Expect legislation that takes ML and AI off the table for a generation or more. I was a young fella when the web took off, and I was excited and starry eyed about it. I had no concept of the potential for harm, but this time I do, because I've lived it. We all do, because it's out there - and I think that a view that AI is different or that because we have good intent it will work out fine is just not good enough.

We have to build AI systems that demonstrably aren't harmful and can be controlled by the users and the community that protects the users. This extends into the capability of the infrastructure that is used to construct the artefact to support audit and other inspection affordances; it extends to the behaviours, orientation, and liability of the people using the infrastructure to make things; and it extends to the production infrastructure and management system. At the moment I don't see any company anywhere doing this close to right, and it makes me really really mad. I feel especially angry because when I game out the consequences of a big car crash (possibly literally), all I can see is long-term harm to the industry and the careers of people of good will, for the want of some short-term cost and a bit of professionalism.

I have written further on this and I am taking proactive action at work (investing in a standards activity to develop a trustable infrastructure in my industry). But a lot of work is needed.

Well, I had a good rant. Back to work.


I was forwarded this discussion by a friend who’s familiar with my work and I’ve really been enjoying the posts. I think about ethics and AI a lot, and couldn’t help but want to contribute a few thoughts here. So here it goes, my first post on HN..

My main advice is beware of AI’s surprising creativity and proactively work to ensure it stays aligned with human interest.

There was a fascinating crowd-sourced paper published this year that shares anecdotes about unexpected adaptations encountered by researchers working in artificial life and evolutionary computation[1]. These are the sort of stories that can be funny in one light (à la “taught an AI to fish and it figured out how to drain the lake”), and doomsdayish in another (“it drained the lake”).

The authors concluded that there is “potential for perverse outcomes from optimizing reward functions that appear sensible.” That’s researcher for ¯\_(ツ)_/¯ ...as Tad Friend wrote in his excellent piece in the New Yorker on this topic[2].

In other words, humans can’t safeguard AI systems solely by defining what they believe to be sensible reward functions.

Reward functions, regardless of whether they are sensible to humans or not, critically need to be mediated by additional regulatory mechanisms, like hard-set non-goals that aren’t just penalty terms relative to a specific reward function. The best non-goals are unequivocally defined and measurable against intermediates that are produced in the reward function optimization process. When done right, this sort of framework allows maladaptive processes to be detected reliably and effective interventions executed.
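A minimal sketch of that idea in Python (hypothetical reward values and constraint, echoing the "drained the lake" anecdote): the non-goal is checked independently of the reward function, so no amount of reward can buy the forbidden action back — unlike a penalty term, which a large enough reward can outweigh.

```python
# A hard non-goal as a veto, not a penalty term in the reward.

def reward(action):
    # A "sensible-looking" reward: draining the lake maximizes fish caught.
    return {"cast_line": 1.0, "use_net": 3.0, "drain_lake": 100.0}[action]

def violates_non_goal(action):
    # Hard constraint, evaluated independently of the reward function.
    return action == "drain_lake"

def choose(actions):
    # Filter out non-goal violations first, then optimize reward
    # over what remains; the veto is unconditional.
    allowed = [a for a in actions if not violates_non_goal(a)]
    return max(allowed, key=reward)

print(choose(["cast_line", "use_net", "drain_lake"]))  # use_net
```

Had the constraint instead been a penalty of, say, -50 added to the reward, "drain_lake" would still win at 100 - 50 = 50; the veto structure is what makes the non-goal unequivocal.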

Tad makes two other points that I think are worth noting in this discussion:

#1. “It will be much easier and cheaper to build the first A.G.I. than to build the first safe A.G.I.”

#2. “Lacking human intuition, A.G.I. can do us harm in the effort to oblige us”

Given #1, when investing in AI companies, if you aim to be “on the more activist end of the spectrum” you’ll need to spend more money relative to market in order to support ethically responsible AI R&D programs, because they will necessarily be harder and more expensive than the irresponsible ones. Assuming you’re investing in AI companies for their products, and not as pure technology plays, this is simply a reality: the core functionalities needed for your portfolio company to sell product X will always be cheaper to develop than the core functionalities needed for your portfolio company to sell product X within a safe, secure framework.

There will be no point for your firm to have codified principles without also having the fortitude to support your AI companies, financially and otherwise, with development processes that are harder and more expensive precisely because they’re more ethical. Many of these costs are absorbed in getting architectures and system designs right, which serve the product anyways, but big costs also come from running unique tests that would be erroneous if ethics weren’t in consideration.

Before thinking about having investees agree to your ethical principles around AI, it may be good for your firm to think about whether you’re willing to pay more for those principles to be lived up to. If two identical companies pitch you with identical AI products, but one plans to take an extra 6 months and $10M to safeguard their technology before launching, while the other intends to capture 8% of the market in that time, who will you fund?

Point #2 relates closely to “perverse outcomes.” In other words, AI can harm humans unintentionally. Setting aside weaponized AI that does harm intentionally and raises its own separate ethical dilemmas, everyday AI can do damage in a great number of ways without intending to or even knowing it.

The IEEE Standards Association together with the MIT Media Lab recently launched a global Council on Extended Intelligence[3] which addresses many of these issues. Joichi Ito, Director of the MIT Media Lab and a person on the more activist end of the spectrum himself, stresses that: “Instead of trying to control or design or even understand systems, it is more important to design systems that participate as responsible, aware and robust elements of even more complex systems.”

Disclaimer: I’m a co-founder of Arctop.

[1] https://arxiv.org/abs/1803.03453 [2] https://www.newyorker.com/magazine/2018/05/14/how-frightened... [3] https://globalcxi.org/#vision



