Hi, I would like to draw attention to the need for trust in AI, in contrast to ethical principles. Ethical principles are a declaration of intent, made in a specific social context, to a specific set of stakeholders (important people, people like us, not people who are "other"). Trust networks and systems provide affordances to people who are excluded from that debate. Individuals develop trust when their interactions don't produce unexpected harm, and when they can inspect the behaviour of systems without having to engage (reveal their needs) and expose themselves to potential harm. Complex behaviours can be very difficult for individuals to assess, so they need a transparent system that provides proxies they can understand. A good example is commercial flight: you don't know the pilot or the ground crew, but the systems of training and qualification allow you to decide that this flight is likely safe. I think that distrust can emerge at a societal scale - see politics for an example - and can be driven by generated perceptions.
My point is that a declaration of ethical principles is talking the talk - creating a trusted system is walking the walk. I believe we will likely see a catastrophic failure of AI unless AI researchers and companies develop the infrastructure of trust. I point to Google Flu Trends as an example of a system that failed; in that case the consequences were not painful, but trust in the Google brand stopped the community from really seeing the failure for a long time. If and when it is discovered that the current generation of oncology and ophthalmic diagnostic algorithms has taken to quietly killing and blinding subsets of the population due to some wrinkle in a deep network, I predict a staggering backlash.
At that point all the ethical declarations and hand-wringing in the world aren't going to matter. Expect legislation that takes ML and AI off the table for a generation or more. I was a young fella when the web took off, and I was excited and starry-eyed about it. I had no concept of the potential for harm, but this time I do, because I've lived it. We all do, because it's out there - and the view that AI is different, or that because we have good intent it will work out fine, is just not good enough.
We have to build AI systems that demonstrably aren't harmful and that can be controlled by the users and by the community that protects the users. This extends to the infrastructure used to construct the artefact, which must support audit and other inspection affordances; it extends to the behaviours, orientation, and liability of the people using that infrastructure to make things; and it extends to the production infrastructure and management systems. At the moment I don't see any company anywhere doing this close to right, and it makes me really, really mad. I feel especially angry because when I game out the consequences of a big car crash (possibly literally) all I can see is long-term harm to the industry and to the careers of people of good will, for the want of some short-term cost and a bit of professionalism.
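To make the "audit affordance" point concrete, here is a minimal sketch, in Python, of what the smallest unit of such infrastructure might look like: every prediction is logged with the model version and a tamper-evident digest, so a third party can inspect behaviour after the fact without having to engage with the system. The names here (AuditedModel, ThresholdModel, audit_log) are illustrative assumptions of mine, not any real product's API.

```python
import hashlib
import json
import time

class AuditedModel:
    """Wraps a model so every decision leaves an inspectable audit record."""
    def __init__(self, model, model_version, audit_log):
        self.model = model                  # any object with a .predict(x) method
        self.model_version = model_version  # ties each decision to a specific artefact
        self.audit_log = audit_log          # append-only store (here: a plain list)

    def predict(self, features):
        output = self.model.predict(features)
        record = {
            "timestamp": time.time(),
            "model_version": self.model_version,
            "input": features,
            "output": output,
        }
        # A content hash makes after-the-fact tampering detectable.
        record["digest"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.audit_log.append(record)
        return output

class ThresholdModel:
    """Hypothetical stand-in model: flags a case when a score exceeds 0.8."""
    def predict(self, features):
        return {"flagged": features["score"] > 0.8}

log = []
model = AuditedModel(ThresholdModel(), model_version="v1.2.0", audit_log=log)
model.predict({"score": 0.91})
print(json.dumps(log, indent=2))  # an auditor can replay and verify each decision
```

A real deployment would need far more than this - an append-only store, access controls, and an independent party holding the log - but the point is that the affordance has to be built in, not declared in a principles document.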
I have written further on this and I am taking proactive action at work (investing in a standards activity to develop a trustable infrastructure in my industry). But a lot of work is needed.
Well, I had a good rant. Back to work.