Three questions every CEO should be asking about AI and ethics

19 Mar 2019

As artificial intelligence algorithms play larger roles in the modern economy, Mats-Ola Bydell, Partner and Head of Odgers Berndtson’s Atlanta office, raises the essential ethical and cultural questions for senior management.

Companies looking to drive innovation and competitive advantage through AI must face the fact that even the most advanced AI systems cannot explain how they make decisions.

The algorithmic sequences by which they learn, reason, perceive, infer, communicate, and make choices take place outside human oversight. But for ethical, reputational, and legal reasons, being able to understand an automated system’s decision-making process is vital.

For this reason, the Defense Advanced Research Projects Agency (DARPA), the research arm of the U.S. Department of Defense, is coordinating an effort to build Explainable AI systems that can translate complex algorithmic decisions into trustworthy and easily understandable language. This is clearly a pivotal issue for every type of organization, and for wider society.

Need for transparency

Companies and governments alike need to know how the next generation of decision-makers (AI algorithms) sort and prioritize their options. Until that understanding is achieved, either AI’s wholesale adoption will be delayed, or we will end up in an ethically troublesome world in which humans submit to AI decision-makers even when the logic behind their decisions does not meet our need for transparency.

The three questions CEOs should ask

With this in mind, there are three questions that all CEOs should ask about the ethical management of AI:

  1. Does AI apply human values to your business?

    The answer, obviously, is yes: how you configure and train your AI algorithm will define the extent to which it applies human values to its operations. The unknown factor is how the algorithm learns and applies its learning over time. What kind of human values will emerge, and how will this change things? Regular review and revision of the AI algorithm will clearly be necessary to retain control over the values it applies.

  2. What values will your AI apply in executing its function?

    Most organizations have a defined set of values. These definitions formalize the relationship between the firm’s behavior—both internally and externally—and a set of moral and cultural goals. At the same time, all human-led organizations are steered by informal human values that constitute an invisible but, nonetheless, rigorously imposed framework for behavior.

    Artificial intelligence, by contrast, does not naturally share either of these value systems. Both have to be consciously designed in, so to speak.

    In doing this, you need to decide which values will form the foundation of your AI systems, and you need to assign relative weights to the ethical and legal considerations that matter most. How do you build an AI algorithm that accounts for both informal ethical considerations and rigidly defined regulatory requirements? These questions have no easy answers.

  3. What function is going to be responsible for overseeing the ethical operation of AI algorithms in your organization?

    Since its inception in the late 1970s, the Human Resources function has been largely responsible for defining and implementing a company’s ethical and cultural values. Now, as AI begins to make decisions once made by humans, does the task of safeguarding ethical standards for AI algorithms still fall to HR? Or do these ethical questions need to move to other teams or functions?

    If the answer to the latter is yes, who should be accountable for the decisions that AI algorithms make on behalf of your organization? AI is not a physical asset, and it is therefore not naturally an IT function. Fully deployed, it is a series of algorithms that make decisions at, and for, various points across your organization.

    The laws that regulate an AI algorithm’s decisions will, of course, depend on the type of decision in question. But because AI has no ready-made ethical limits, borderline regulatory issues are bound to arise, especially as AI algorithms grow in complexity and become more central to an organization’s day-to-day decision-making.
    Building a system for overseeing AI accountability across teams and functions will therefore be necessary. This must happen sooner rather than later, before AI applications take over a third or more of the work now done by humans and accountability and liability issues emerge.

As we dive further into the era of artificial intelligence, we need to find a way to understand, measure and direct its implications for individuals and organizations. In addressing these implications, leaders across industries must:

  1. Recognize that these are important questions and that the search for answers starts at the top, requiring the full attention of your board and executive team. Questions about AI’s ethical dimensions need at least as much attention as questions about the opportunities that come from applying AI in your business.

  2. Commission a board-led internal audit aimed at continuously reviewing and fine-tuning your company’s ethical and cultural commitments. Remember that your ability to govern the ethical operation of your AI algorithms dictates your ability to manage AI risk.

  3. Decide who will have functional responsibility for AI ethics, and systematize this oversight.

Clear vision required

Insightful leaders understand how the rapid advance of AI will affect their business. Beyond the obvious implications for operations and the bottom line, the first question should be how AI will learn an organization’s ethics and culture.

By asking three seemingly simple questions, CEOs can clarify their vision of the future of AI at their companies, and of their current internal values.

Similarly, boards must make three key decisions about how AI will be integrated and reviewed.

Companies that follow these steps will be better placed to navigate and harness the power of AI, and to sidestep the risks that come with such a new technology.