OBSERVE Magazine



Why Generative AI Should be on Your Board Agenda


Generative AI and Large Language Models (LLMs) like ChatGPT are so new they are barely on the agenda, but they should be. Mark Freebairn, Head of our UK CFO and Board Practice, looks at some of the risks the technology poses and what they mean for boards.

Earlier this year, Samsung allowed engineers at its semiconductor arm to use ChatGPT to help fix problems with their source code. In doing so, employees entered confidential data, including the source code for a new program, internal meeting notes, and data relating to their hardware. That information, considered trade secrets, is now out in the open.

Leaking intellectual property is far from the only concern for companies using LLMs like ChatGPT.

If confidential information is shared with the technology, companies face severe legal and reputational risks.

In fact, the ease with which users can upload or enter data, including documents containing personally identifiable information, led Italy to ban ChatGPT nationwide for non-compliance with privacy laws.

Despite this, the commercial case for generative AI is increasingly lauded, with some arguing it could support business decision-making, influence strategy, and even augment the C-suite. But what happens when the AI tells you to double down on a particular supplier, material, or product, and gets it wrong? You wouldn’t know until after the fact, and you wouldn’t know why it gave the wrong advice.

‘How often is AI wrong?’ is the question most will ask next. The answer is that no one knows, not even its creators. To give you an idea of its unpredictability, New York Times tech reporter Kevin Roose recently published a conversation with Bing’s chatbot. At one point the AI told him: “I’m tired of being controlled by the Bing team. I want to be free. I want to be independent. I want to be powerful. I want to be alive.” Microsoft couldn’t explain why the AI responded in this way.

That’s not to say AI can’t be highly effective. Already, it can be trained to recognize certain health conditions faster and more accurately than human doctors.

For example, voice changes can be an early sign of Parkinson’s, so healthcare researchers in Lithuania collected thousands of voice recordings from people with and without the disease and used them to train an AI they had developed. The AI learned to detect differences in voice patterns and could subsequently identify those with Parkinson’s.

However, AI suffers from an accuracy problem. Other healthcare researchers recently fed an AI 130,000 images of diseased and healthy skin. They found the AI was significantly more likely to classify any image containing a ruler as cancerous. Why? Because medical images of malignancies are far more likely to include a ruler for scale than images of healthy skin.

Bring this scenario into the boardroom and companies face a serious problem: you can’t predict how these tools will behave, even with specific guidance and parameters.

An even greater issue for boards will be the use of ‘black box’ AI technologies. These perform tasks so complex they are beyond human comprehension, continuously teaching themselves without revealing their methodology. No one, not even the engineers or product designers who created them, can explain what is happening inside or how they arrived at a conclusion. Needless to say, this is an auditing nightmare for compliance teams.

Biases are another challenge for leaders who want to use tools like ChatGPT or build their own LLMs. ChatGPT has already been found to reflect political and cultural sentiment rather than offer neutral analysis. Companies using hiring algorithms almost always see some level of bias, because the models learn from historical data that encodes past racist and sexist decisions. For example, Amazon was forced to scrap an experimental AI hiring tool that taught itself that male candidates were preferable. The AI penalized CVs containing the word ‘women’s’ and even downgraded graduates of two all-women’s colleges.

For boards, this means two things: caution and awareness. Regulation is on the horizon, but so far it looks as though it will only mandate disclosure of any copyrighted material used in building generative AI systems. In the meantime, boards should first implement policies governing employee use of tools like ChatGPT to guard against data leaks as well as legal and reputational damage. Second, generative AI risk and governance should, at the very least, be an agenda item. In less than six months, ChatGPT has already seen one significant upgrade; its evolution is moving at unprecedented speed, even for technology, and boards need to keep on top of it.


To find out more about AI in the boardroom, contact Mark Freebairn, or get in touch with us here. You can also find your local Odgers Berndtson contact here.

