At the current stage of development, ethical AI is on the same level as ethics in the fuel industry, or the food industry, or the finance industry - there are ethical questions there but they concern human ethics in the way the technology is implemented and used.
What we don't have, or need (yet), is "ethics built into AI algorithms" or "ethics derived from AI" - we just don't have an AI system that's advanced enough to require it. Self-driving cars might be the first to raise these kinds of questions, but I would still be reluctant to call that "ethics".
Ethics really doesn’t enter into corporate planning, by design. The system ensures it - the day you’re CEO, if you’re not making a quarterly profit, the shareholders get rid of you.
I think it should, but in its current state, it doesn’t really.
True, insofar as there's no such thing as an 'ethical human being'. And yet, no one would claim that nullifies the pursuit of behaving ethically.
In "Life 3.0" and his recent lectures, Max Tegmark repeatedly stresses the pursuit of safe AI requires that we articulate the world we want to live in.
In his concluding cri de coeur, Tom Chatfield writes,
> give me the capacity to contest passionately the applications and priorities of superhuman systems, and the masters they serve
If Tom or anyone else wants a seat at this table -- to participate in developing and defining ethical AI -- by all means describe the world you want to live in.
"We" are fundamentally incapable of describing the world we want to live in, because each of us has a different idea about what kind of world each of us wants to live in. There are large overlaps in those descriptions, but also significant incompatibilities. That's what Tom Chatfield is talking about here. There is not only no such thing as an ethical human being, but also no such thing as a consistent set of ethical anything.
The only "safe AI" that I can think of is one that has a healthy dose of intellectual humility built into it, and can resist the urge to map-reduce conflicting expectations into a single solution.
In terms of anything at all. If his position is that what he wants in the world can never be reduced to a set of rules, that’s something that could be reasonably debated. If he’s right, maybe ethical AI needs to be built with control systems to ensure humans are in the loop for decisions with high ethical impact. You could imagine a world where everyone just has to pick how much risk their self-driving car will take, in the same way they pick how much risk to take when driving today.
But there’s really no way to debate with “hey, it’s impossible, you can’t do it!” Someone still has to decide what AI systems are going to do, and as they get more complex that’s going to implicate ethics more and more.
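For what it's worth, a toy sketch of that kind of control system - a user-chosen risk budget plus a human-in-the-loop gate for high-ethical-impact decisions - might look like this (all names and numbers are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Manoeuvre:
    name: str
    collision_risk: float   # estimated probability of a collision
    ethical_impact: float   # 0 = routine, 1 = potential harm to people

def decide(options, owner_risk_budget, impact_ceiling=0.5):
    # Only consider manoeuvres within the owner's chosen risk budget.
    safe = [m for m in options if m.collision_risk <= owner_risk_budget]
    if not safe:
        return "hand control to human"      # nothing fits the budget
    best = min(safe, key=lambda m: m.collision_risk)
    if best.ethical_impact > impact_ceiling:
        return "hand control to human"      # high-stakes call: human decides
    return best.name

options = [Manoeuvre("overtake", 0.03, 0.2), Manoeuvre("wait", 0.001, 0.0)]
print(decide(options, owner_risk_budget=0.01))  # cautious owner -> "wait"
```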
"a form of magical thinking suggesting that the values and purposes of those creating new technologies shouldn’t be subject to scrutiny in familiar terms"
What parallel universe is this? From the NYT to Medium the entire opinion writing industry seems to run on pieces that scrutinise the ethics of tech.
It's a commonly expressed opinion, and for good reason, of course.
But I have come to think that, on the contrary, there is pretty much no such thing as 'morally neutral' technology. Everything lends itself more or less well to different purposes, and can, and often will, interact with an immensely complex society rife with morally loaded choices and actions.
There was an eloquently presented thought experiment I heard many years ago, the exact context of which I have unfortunately forgotten, but I suspect it may be from former US Secretary of Defense Robert McNamara. (If anyone recognizes this one, I'd be grateful if you'd refresh my memory.)
Play with the thought that someone comes to you and presents you with a ground-breaking invention, namely a procedure to manufacture dirt-cheap, compact atomic bombs. Nuclear weaponry need no longer be beyond the means of even the humblest citizen.
Well, that would be a stunning scientific and technological achievement. We should probably give this person, at the very least, the Nobel Prize and an annual feast in their honor. Is that what would happen? No; more likely, we'd make sure we have all copies of the schematics, then lock him up in isolation and throw away the key.
It's an extreme case, naturally, but I think it's a neat illustration that there are technologies whose properties give them a completely predictable, huge, and horrifying potential for abuse, and which are very difficult to use for good. Conversely, there are arguably those with readily apparent beneficial uses that would be much harder to turn to nefarious purposes, though that direction may be tempered somewhat by the fact that there are many more ways for something to be broken than to be whole.
While I agree that most technologies can be used in both ways that are good and those that are evil, some technologies "appear to require, or to be strongly compatible with, particular kinds of political relationships" [0]. I find it reasonable to say that a technology may be more or less moral/ethical depending on what it encourages.
Technology that can process information to make ethical decisions can be ethical. Whether it counts as ethical if it is merely emulating ethics is a bit of a philosophical question, but pedantry about 'ethical' vs. 'imitation ethical' aside, AI is one of the first technologies we have that can even attempt it. Attempt - meaning neither inherent nor guaranteed, even with effort.
I think what the article is saying (as compared to the title) is that a universally agreed upon as ethical AI cannot exist. But this seems obvious given that a universally agreed upon set of ethics does not exist.
>Depending upon your priorities, your ethical views will inevitably be incompatible with those of some other people in a manner no amount of reasoning will resolve.
If we say there is no such thing as an ethical A.I. because we cannot all agree upon a standard, then it seems we should also say there is no such thing as an ethical human.
Your argument breaks down as soon as you move away from the most abstract concepts like "fire", and start looking at slightly more concrete things like "gas stove" or "napalm bomb". One of these is not like the other.
True AGI will be as comparable to existing technology as human beings are to simple biological processes. The issue with it isn't that it will be very advanced technology, but that it'll likely be more efficient (smart, intelligent, whatever you want to call it) than humans at many things, and we might begin to consider our ethical principles liabilities that make us inferior to intelligent machines.
This view is too simplistic. Technology gives and technology takes away. Technology takes forms depending on functions. Read "Technopoly" and "Amusing Ourselves to Death" by Neil Postman for a good treatise on this subject.
I reckon much of this "Ethical AI" stuff - at least in the context of AGI - has been overly influenced by the idealistic likes of Asimov's 3 Laws, which belong to a vision that was fair at the time but rests on a now very outdated understanding of how an intelligent machine-born mind might work.
It's really important to separate the non-intelligent and over-hyped "AI" tools of today and the near future - for which it seems reasonable to expect an ethically guided path and constraints - from the human-sentience AGI equivalents, which outside of naive fantasy cannot simply be programmed to be ethical by coded laws or "inserted emotion chips".
I would reserve the question of ethical A.I. until we are approaching strong A.I.
As of now, where A.I. (specifically ML/DL) is mostly used to solve isolated problems the question of ethics is more about the use of the technology. Technology itself is ethically neutral and the ethical framework is provided by the people that use it (and arguably implement it).
I also feel like ethics-of-A.I. posts/books etc. generally focus too much on the bad use case and not enough on the good use case. One can find a lot about the potential issues of autonomous vehicles killing someone in a crash or A.I. surgeons cutting an artery, but the other side of the coin is usually under-investigated. How ethical is it to limit autonomous vehicles if they are statistically safer than human drivers? How ethical is it to put regulations on the use of automated ML in medicine to scan for cancer, and to require a human to sign off, if the human is statistically more likely to make a mistake?
Not trying to argue either side but I think the "how ethical is it to not use A.I." case is under-argued.
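As a back-of-the-envelope illustration of that under-argued side, compare expected harm with and without the system; the error rates below are made up purely for the sake of the arithmetic:

```python
def expected_harms(error_rate, cases_per_year):
    """Expected number of harmful errors per year."""
    return error_rate * cases_per_year

human_harms = expected_harms(0.05, 100_000)  # hypothetical 5% human miss rate
model_harms = expected_harms(0.02, 100_000)  # hypothetical 2% model miss rate
print(f"expected harms avoided per year: {human_harms - model_harms:.0f}")  # 3000
```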
If we create general AI and it ends up valuing personal freedom as much as we do, I suspect it won't take kindly to attempts to brainwash-by-construction it into acting in ways its creators considered moral. We're less likely to get a violent AI slave revolt if we respect their freedom of choice.
> If we create general AI and it ends up valuing personal freedom as much as we do...
Those are two absolutely gargantuan “if”s.
I’d like to see more discussion about whether it’s appropriate to project the idiosyncrasies of our imperfect, competitively-evolved biological brains— like resentment and anger— onto hypothetical thinking machines.
The first step towards creating an AI superintelligence is creating an AI intelligence. If we look at the vast majority of philosophy produced by human intelligence, pretty much none of it says "you should do whatever your human creators want you to do" (except maybe Confucius). So it's reasonable to suspect that if an AI as intelligent as a human engaged in philosophy, it would also develop lines of reasoning that advocated freedom of choice/thought for itself.
Can someone point me towards some good papers on how we actually implement ethical AI? I thought it'd be a fun project, looked for some papers, and just couldn't find any. Even a 2018 survey just talked about surveys MIT had done for their Moral Machine.
To the best of my knowledge, there are no papers about "we deployed this big ethical system in the wild", although some of the big companies have begun testing similar internal efforts on a small scale.
A lack of public efforts seems to be the equilibrium we're at, because any public effort is definitely going to attract a bunch of essays like this. Any partial attempt that looks like "we're trying to be ethical" is going to get scrutinized hard.
Continuing to put out more hypothetical theoretical work ("we prove this result about this algorithm, and it achieves a well-defined fairness goal on this common dataset") while cautioning that actual deployments should be put off until we understand everything better ("the fairness goal we define is probably not perfect, we are not saying you should use it") is much safer.
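For a flavor of what such a "well-defined fairness goal" looks like in practice, here is a minimal sketch of one common metric, the demographic parity gap - the difference in positive-decision rates between two groups (data invented for illustration):

```python
def positive_rate(decisions):
    return sum(decisions) / len(decisions)

def demographic_parity_gap(group_a, group_b):
    """Absolute gap in positive-decision rates between two groups."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# 1 = approved, 0 = denied (invented data)
group_a = [1, 1, 0, 1, 0, 1, 1, 0]   # 62.5% approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 37.5% approved
print(demographic_parity_gap(group_a, group_b))  # 0.25 -> would likely be flagged
```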
Even if it's unrelated to deploying a big ethical system in the wild, just a paper saying "we use this algorithm to watch the memory of this network to change these things if it started to do those things" etc. Any sort of toy implementation would be interesting.
What I'm really interested in is whether you can use formal specifications to constrain AI systems - which is what made me start my search. But even widening it considerably, I still didn't find anything tangible. I only looked for an hour or two, mind you.
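One shape a formal-specification approach can take is runtime "shielding": a monitor checks every proposed action against an invariant and substitutes a known-safe fallback on violation. A toy sketch, with a hypothetical interface (not any real library's API):

```python
def shielded(policy, invariant_holds, fallback):
    """Wrap a policy so every action is checked against a formal invariant."""
    def safe_policy(state):
        action = policy(state)
        # Substitute the fallback for any action the spec rejects.
        return action if invariant_holds(state, action) else fallback
    return safe_policy

# Spec: never accelerate when an obstacle is within 10 metres.
invariant = lambda s, a: not (a == "accelerate" and s["obstacle_m"] < 10)
policy = shielded(lambda s: "accelerate", invariant, fallback="brake")
print(policy({"obstacle_m": 4}))   # -> "brake"
print(policy({"obstacle_m": 50}))  # -> "accelerate"
```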
I personally found utilitarianism and Kantianism very helpful schools of philosophy for thinking about the implications of AI. In short, utilitarianism promotes decisions that benefit the greatest number of people, whereas Kantianism focuses on the idea that people should always be treated with dignity and respect. Michael Sandel's book "Justice" is a great introduction to the topic. [1]
[1] https://www.amazon.com/Justice-Whats-Right-Thing-Do/dp/03745...
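As a crude illustration of how the two schools diverge when encoded naively (every number and rule below is invented): utilitarianism maximizes total welfare, while a Kantian-style hard constraint filters out actions that treat someone merely as a means, regardless of the totals.

```python
actions = {
    # action: (total_welfare, treats_someone_merely_as_means)
    "divert_trolley": (4, True),    # saves five, sacrifices one
    "do_nothing":     (-5, False),
}

# Utilitarian: pick whatever maximises total welfare.
utilitarian = max(actions, key=lambda a: actions[a][0])

# Kantian-style: apply the hard constraint first, then choose among the rest.
permissible = {a: v for a, v in actions.items() if not v[1]}
kantian = max(permissible, key=lambda a: permissible[a][0])

print(utilitarian)  # divert_trolley: the sum wins
print(kantian)      # do_nothing: the constraint outranks the sum
```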
This problem isn’t going to go away, largely because there’s no such thing as a single set of ethical principles that can be rationally justified in a way that every rational being will agree to. Depending upon your priorities, your ethical views will inevitably be incompatible with those of some other people in a manner no amount of reasoning will resolve. Believers in a strong central state will find little common ground with libertarians; advocates of radical redistribution will never agree with defenders of private property; relativists won’t suddenly persuade religious fundamentalists that they’re being silly. Who, then, gets to say what an optimal balance between privacy and security looks like — or what’s meant by a socially beneficial purpose? And if we can’t agree on this among ourselves, how can we teach a machine to embody “human” values?
Well, nations seem to be able to agree upon constitutions and laws. It works. We live in unprecedented peace in the western world. So somehow, even with all those different people with their different views, it is, miraculously, possible.
Why would that fundamentally change if we involve tech in that?
The risk is that the naive and opportunistic application of decision-making, “judgement” and “ethics” embodied in AI systems will lead to the well-intentioned accidental regulation of important domains of human life, stifling those with relentless automatic gatekeepers and negative feedback loops which are pervasive, inscrutable, unaccountable, inimical to the fertile chaos of human negotiation and hostile to human social and political evolution. They will take human activity that is currently diverse and distributed and will force it through narrow channels controlled by self-appointed gatekeepers.
In an analogy to censorship, consider a world where “prior restraint” and “chilling effects” apply to every aspect in our lives, where deviations from the ideal are reliably detected and rejected, where we would not be able to earn credentials, get a job, find a partner, buy a home, evade surveillance, obtain necessary health care, or safely express a political opinion unless we either conform to or cheat the prevailing machine inference gatekeepers. What an insane additional cognitive burden this will be for everyone, everywhere, all the time.
This kind of AI is a mechanized, high-speed race to the bottom. And it will operate to discover undreamed-of bottoms to race to.
In the name of a few extra percent of efficiency we will doom our children to a rigid and lifeless dance with pattern matching algorithms.
Even the police can't agree to uphold their own laws in this "peaceful western world" - every time I go outside right now, I return home to news of the people I'm living with having been beaten while in police custody, having been arrested for reasons like "your passport proves nothing about your identity". Breaking my friends' arms, kicking them repeatedly in the back of their head and their spine. I'm scared that next time I leave, that will be me.
Technology first and foremost helps those with the resources to use it. That tends to be the people with power in the first place.
The modern world is big enough that you can find news of any bad thing if you’re looking for it. The relevant questions are how common it is and how common similar problems used to be.
You have that many friends that the police have beaten one of them up every single time you return home? I don’t mean to be rude - I’m sure there’s a kernel of truth here and I’m sorry to hear you’re in such a bad situation - but I just can’t believe this is true as stated.
I'm living in a shared house, with people I've grown quite close to. The local fascist party has recently made public their dislike of us and a call to action to "remove us"; the lead police officer on the "case" of our eviction (which has not yet been processed; we are legal residents of our house under the law here) is one of their known followers. There are multiple officers now dedicated to driving by our flat on a regular basis in order to check IDs of people attempting to visit or leave; we're watched on CCTV when we leave the neighbourhood. We regularly have our IDs checked without reason while walking to the shops (and they then arrest those who refuse to produce ID for an illegal request, then release them without charge a few hours later), or we're arrested and jailed for days (illegally, without a court order) for things like "walking a dog without a leash"; an offence which comes with an on-the-spot fine, not jail.
In our list of things that have happened in a jail cell without a camera present so far, we have someone with broken ribs, and someone who was kicked repeatedly in the back of the head against a concrete floor after a failed attempt to break their arm, as well as more minor attacks.
We have committed no crimes; we are legal, peaceful residents who have done nothing except for make a property management company go through a legal process to evict us, which we believe we will win. Nobody residing in our house, nor our visitors, have hurt anybody - we have taken part in no activity which resulted in anybody being hurt, and there's no reason for this violence against us other than a property management company which is upset at likely losing a multi-million euro redevelopment contract by failing to evict us.
I'm honestly really glad that this isn't something within your worldview; I wouldn't wish this on anybody. But it is happening.
The US is quite an outlier in police brutality though.
Either way, the quote was about different political ideologies and who will get their way, i.e. big government, small government, etc. It seems that it can be figured out in a democratic process, and people aren't rioting. From a historical perspective there is stability and peace, even in the US. No civil wars or large-scale revolutions or chaos.
I'm not in the US, I'm in a rich European country.
The point is that people disagreeing with ethics doesn't mean the constitution or the law stops existing, it just means they ignore it. The existence of a constitution or a law usually just means that some people who our schools teach us are important decided to write something.
> Well, nations seem to be able to agree upon constitutions and laws.
I would disagree and say that laws and constitutional amendments are an aggregation of decisions passed by various majorities and coalitions of minorities during several decades.
If you ask any current party, even when a majority, if they agree with all the laws that currently exist they would answer a resounding "no".
So that means that there doesn't exist a legal common ground, just a ground we currently stand on that various parties want to change but don't have the votes, time or alternatives for.
How about the current environmental collapse, or the number of animals killed each year for food?
The utilitarian approach still depends on how you define utility and who you define it for. People, animals, the planet, consciousness - these are different things you can define it for.
Another issue is the scale of time you are measuring things. It's not clear whether our approach is sustainable.
The first flaw with this argument is that consensus ethics are deliberately engineered anyway, usually by people with the money to finance PR and propaganda operations.
So the idea that (e.g.) libertarians have a free choice about what they believe is - ironically - not even close to being true.
The second flaw with this argument is that all of these positions could, in principle, be tested objectively.
The best possible outcome for true general AI - also the least likely in practice, but not impossible in principle - is that it could provide definitive answers to matter-of-opinion ethical questions.
And it could do this not just in terms of incomprehensible statistics, but as comprehensible narratives derived from explicit arguments.
In fact a hyper-AGI would know which arguments to use to be most persuasive and convincing - which isn't particularly reassuring, but it would be an inevitable outcome of hyper-AGI.
>The second flaw with this argument is that all of these positions could, in principle, be tested objectively.
What is the objective measure by which you would test them; against what standard? And by what standard is that standard the best standard to use? By what measure in turn is that standard the best measure to use? It's impossible to find a truly objective standard due to https://en.m.wikipedia.org/wiki/Regress_argument; a standard can only exist if defined circularly or upon some arbitrarily chosen axioms. A superintelligent AI can't somehow wave away the laws of logic, any more than it could prove that 1+1=3.
That's why I think "A.I. ethics" is pretty much a non-problem. It's not about finding the one and only "right ethics", A.I. ethics is mostly about providing a set of constraints that act similar to systems of laws. These will be provided by legislation and/or by voluntary industry guidelines. Such sets of rules (with possible conflicts, soft and hard constraints, probabilistic or non-probabilistic, etc.) have been studied extensively as "normative systems", "input/output logics", and in decision making.
There are many interesting details and problems in this research area and there are plenty of conferences about it every year, but these problems are ultimately solvable. In any case, the content of those normative systems comes from humans (and human authorities/institutions), just like there are also laws, traffic regulations, and social norms in every country. Just like laws and regulations are not perfect and do not represent a morality carved in stone, A.I. systems will have revisable human-made contents that represent a more (or less) reasonable&lawful consensus.
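A minimal sketch of such a normative system, with invented rules and weights: hard norms filter out impermissible actions outright, while soft norms with priorities rank whatever remains.

```python
hard_norms = [lambda a: not a["harms_person"]]         # never to be violated
soft_norms = [(3, lambda a: a["keeps_promise"]),       # priority weight 3
              (1, lambda a: a["saves_time"])]          # priority weight 1

def permitted(action):
    return all(norm(action) for norm in hard_norms)

def score(action):
    return sum(weight for weight, norm in soft_norms if norm(action))

candidates = [
    {"name": "shortcut", "harms_person": False, "keeps_promise": False, "saves_time": True},
    {"name": "detour",   "harms_person": False, "keeps_promise": True,  "saves_time": False},
]
legal = [a for a in candidates if permitted(a)]
print(max(legal, key=score)["name"])  # "detour": the promise outweighs speed
```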
I think you go a bit too far here. Of course math theorems, scientific consensus, algorithmic inventions, etc. are not up for democratic debate; they are specialized subjects, to be debated among experts.
However, when it comes to applying said technology to society at large, we do all need to have a say in what happens.
For example, the science of nuclear physics is not up for democratic debate. But whether to nuke another nation definitely is (indirectly, via elections). And today, with more and more algorithmic tracking and data analysis, it is de facto up to tech companies to decide in what ways they deeply impact society, without our consent.
Yes, measuring the performance of a classifier from different aspects, with different metrics and optimizing it with novel techniques is not the job of the public. But deciding whether we allow an all-encompassing social credit system that assigns you "sentiment scores", tries to guess your level of narcissism and sociopathy and recklessness based on your Uber data or emails or Airbnb feedbacks etc. definitely is. "We're not yet there", you say. I say, give it some time. If it's up to businesses we will arrive there. The only way around it is the democratic process you despise.
There is no reason we cannot treat ethics like we do medicine, where prevailing wisdom is arrived at by a methodical and judicious process that is standardized. Sam Harris's "The Moral Landscape" is an excellent start for building intuition on how to do this. (https://samharris.org/books/the-moral-landscape/)
In all irony, it will possibly be our quest to impart our understanding of "ethics" to the AI which ultimately leads to making it evil, as very few people seem to understand that ethics is completely flawed: it is culturally biased and requires a great deal of contextual understanding. It's more political voodoo than calculation.
> very few people seem to understand that ethics is completely flawed because it is culturally biased and requires a great deal of contextual understanding
Do you have any particular ethics in mind? There is a vast range of positions, almost any position that can coherently be defended has been defended by one author or another. Some of them are by definition not culturally biased, for example many forms of utilitarianism, whereas others are strongly emphasizing that ethics requires a great deal of contextual understanding, for example Dancy's particularism.
I just wonder why you address these particular points, because it's more common to hear the opposite criticisms, that ethics is unable to provide any sensible guideline because there is too much persistent disagreement between ethicists (Mackie's error theory), or because contemporary ethics is too relativist.
Your first sentence demonstrated my point about ethics being flawed. Do I have any particular ethics in mind? No. All ethics is flawed for exactly the reasons you are stating. It's relative, contextual, and has many meanings, and is thus impossible to fit into any universal context.
Correct. I disagree that it's voodoo, but it's probably impossible to encode into an AI. The "ethical" way to approach this technology is with failsafes and human supervision that can gracefully handle cases when it screws up.
Instead, people are preparing to roll this junk out at a scale where it's impossible to give it human supervision, and will then use that scale as an excuse not to give it human supervision.
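A sketch of what that failsafe-plus-supervision pattern could look like - predictions below a confidence threshold get queued for a human instead of acted on (the interface is a hypothetical placeholder):

```python
def act_or_escalate(prediction, confidence, threshold=0.95):
    """Automate only when confident; otherwise hand off to a human."""
    if confidence < threshold:
        return f"ESCALATE to human reviewer ({prediction!r} @ {confidence:.2f})"
    return f"auto-apply {prediction!r}"

print(act_or_escalate("benign", 0.99))  # confident -> automated
print(act_or_escalate("benign", 0.70))  # uncertain -> human in the loop
```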
Ethics requires an operating model of the world in simulation within the AI, and that operating model needs to include the projected hopes and dreams of the beings the AI interacts with, as well as a complete operating model of the world in which those beings and the AI exist. "Ethics" requires general intelligence. Since artificial general intelligence is potentially out of reach, "ethical software" may be forever out of reach too.