
10 Apr 2018
Could Meticulous Transparency be the route to ethical Artificial Intelligence?
The rise and rise of AI is raising ethical questions that demand an answer. At a recent AI event co-sponsored by Odgers Berndtson, David Benrimoh, CEO of aifred Health, laid out his vision for a rigorous assessment framework designed to make ethical considerations integral to AI development.
Undoubtedly, the potential benefits of AI, both economic and humanitarian, are great. Dig underneath this optimism, however, and a more cautious narrative emerges.
It is one driven by the reality of elections swung by big data, anxiety over mass job losses, a death caused by a self-driving car, and the recurring fear of autonomous weapons.
The essence of this narrative is that humanity is starting to use technologies that will fundamentally change societies on many levels at once. However, we are going into this essentially blind, and in a mostly unregulated manner.
Outdated legal frameworks
Individual groups, sometimes very small ones, have the power to deeply influence human behavior using AI and the data gleaned from individuals' digital footprints. We find ourselves in a situation where the rapid advancement of AI technology cannot be coherently constrained or directed using existing legal and ethical frameworks. Creating new application-specific legal and ethical codes will likely always be slower than the pace of technological progress.
We at aifred Health, a mental health AI start-up, have proposed one way to deal with the need to better regulate AI: planned AI projects would be required to go through the equivalent of an institutional review board (IRB).
An IRB is a panel of experts, ethicists, and members of the public who would evaluate proposed AI applications and monitor their implementation. Whether these IRBs sit within a national or international organization or are constituted by companies themselves, one thing is clear: they will need a coherent assessment framework they can use to evaluate and understand AI. With this assessment in hand, IRBs could then refer to the general ethical or legal principles that govern their society and decide whether or not the AI application under consideration is ethical.
What is Meticulous Transparency?
We have developed such an assessment framework, called “Meticulous Transparency” (MT). MT assessments are meticulous because the developers of an AI application must provide an in-depth explanation of several aspects of their proposed project, with the amount of detail growing as the impact and autonomy of the AI application increase.
Which questions should we be asking?
The Meticulous Transparency assessment requires the developers to provide details on:
- Intentionality. Why is this product being made? What targets are the developers asking it to achieve, and are these ethical? Machines can inherit human biases, so understanding the intention of developers is key.
- Scope of use. The platforms and reach of the product should be carefully documented. For example, products aimed at children would have different implications from those aimed at adults. AI applications deployed in public spaces would likewise be considered differently than those deployed in homes or stores.
- Data sources and bias control. The provenance, quality, and characteristics of the data used to train the model must be clearly laid out. This is key to controlling for biased data that could produce models which unfairly or dangerously target certain groups. For example, an advertising model trained on data from alcoholics might be ethically questionable if its goal were to sell more alcohol, as it would be more likely to target those with substance use problems.
- Human interpretability. Much has been said about the ethical questions surrounding “black box” solutions: AI applications that make decisions affecting humans without being able to explain why a decision was made. While it is true that AIs cannot simply relate a narrative account of why they made a decision, we can, for example, examine the input features that were most important in making a decision (a simple feature-importance sketch illustrating this appears after this list). Developers should therefore strive to use the best available interpretability techniques and update them as the technology improves.
- The projected risks and benefits of the product. Developers should demonstrate that they have made a genuine effort to understand the possible benefits their application may bring, not just to their customers or their company, but to society. Similarly, they must honestly lay out the expected risks. This goes beyond the risk of a data breach and must consider more nuanced risks, such as threats to human autonomy posed by applications designed to manipulate behavior.
- Monitoring and contingency plans for adverse events. This is akin to phase-four (post-marketing) trials of medications. Developers would need to present a plan for monitoring how their AI is being used, what mistakes it is making, and how a recall or other contingency plan would be executed if the AI begins to make harmful mistakes.
For example, if an AI were developed to help sift through insurance claims, developers would need a way to monitor whether it mishandles subsets of claim types it was not well trained on. They should also have a plan for rectifying the problem, beginning with appropriate notification and the tracing of all wrongly handled claims; a minimal sketch of this kind of subset-level monitoring follows.
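To make the monitoring idea concrete, here is a minimal Python sketch that tracks per-claim-type error rates and flags subsets exceeding a tolerance. The claim types, the error signal (a decision later overturned on appeal), and the threshold are illustrative assumptions only, not part of any real deployment.

```python
from collections import defaultdict

# Assumed tolerance for wrongly handled claims before the contingency plan triggers.
ERROR_RATE_THRESHOLD = 0.10

class ClaimsMonitor:
    """Tracks how an automated claims model performs on each claim type."""

    def __init__(self):
        self.totals = defaultdict(int)           # decisions seen per claim type
        self.errors = defaultdict(int)           # decisions later found to be wrong
        self.flagged_claims = defaultdict(list)  # IDs of wrongly handled claims, for tracing

    def record(self, claim_id, claim_type, overturned):
        """Log one automated decision and whether it was later overturned."""
        self.totals[claim_type] += 1
        if overturned:
            self.errors[claim_type] += 1
            self.flagged_claims[claim_type].append(claim_id)

    def subsets_needing_review(self, min_volume=50):
        """Return claim types whose observed error rate exceeds the threshold."""
        review = {}
        for claim_type, total in self.totals.items():
            if total >= min_volume:
                rate = self.errors[claim_type] / total
                if rate > ERROR_RATE_THRESHOLD:
                    review[claim_type] = {"error_rate": rate,
                                          "claims_to_trace": self.flagged_claims[claim_type]}
        return review
```

Any claim type returned by subsets_needing_review would then trigger the notification and claim-tracing steps described above.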
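Returning to the human-interpretability point above: one concrete way to examine which input features mattered most to a model's decisions is permutation importance. The sketch below uses scikit-learn on a synthetic dataset; the model, data, and feature names are placeholders, and the technique shown is just one of several interpretability options.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real training set; feature names are placeholders.
X, y = make_classification(n_samples=1000, n_features=6, n_informative=3, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much performance degrades;
# large drops indicate the features the model relies on most.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1]:
    print(f"{feature_names[idx]}: {result.importances_mean[idx]:.3f}")
```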
In concert with efforts by international groups like the IEEE and the Montreal Declaration for a Responsible Development of Artificial Intelligence, we believe that MT can help create a rational path towards a future where we understand and direct the use and development of AI to maximize its benefits and minimize its harms.
We are the conscience
“The commercial adoption of AI outside of scientific or academic institutions is happening at scale right now. The biggest issue we face presently is balancing its vast potential with the possible detrimental elements of unchecked AI,” commented Michael Drew, Global Head of Odgers Berndtson Technology Practice.
“The concept of Meticulous Transparency that David outlines is a framework that has a great deal of merit.
“It’s essential that we are the conscience of AI technology because machines do not have that capability. Understanding its uses and possible misuses before AI is rolled out to the masses is absolutely vital, and thinking ahead allows us to embed the necessary boundaries.
“Ethical or societal constraints aren’t about putting a harness on AI, but about ensuring that its widespread adoption is felt in the most positive ways and counters the likely negatives that always come with a technological revolution.”
David (MD, CM, McGill) is a physician currently training to be a psychiatrist. He is pursuing his residency and neuroimaging research at McGill, as well as a Master’s in Neuroscience with a focus on computation at University College London. He also has significant experience in advocacy and policy work. At aifred Health, he serves as CEO, providing strategy and vision while acting as the bridge between the Research, Clinical, and Machine Learning teams.