What happened when Microsoft launched ‘Tay’, a state-of-the-art artificial intelligence (AI) chatbot designed to learn from its interactions with human beings via Twitter? In less than 24 hours, it became a racist, misogynistic neo-Nazi. And its creators unceremoniously pulled the plug.

Shortly after the shutdown of Tay, Corporate Vice President of Microsoft Healthcare Peter Lee blogged: “We are deeply sorry for the unintended offensive and hurtful tweets which do not represent who we are or what we stand for”.

A light-hearted experiment gone wrong it may have been, but the incident and subsequent apology highlighted a big issue facing developers. How do we ensure AI systems hold acceptable cultural and ethical values?

In truth, Tay is one of many recent examples of the pitfalls of machine learning. In 2017, researchers from Princeton University and the University of Bath demonstrated that AI can learn human-like semantic biases from written texts. They studied millions of words online, looking specifically at how closely words appeared to one another, the same approach automatic translation systems use to learn the meaning of language.

Their findings showed that male names were more closely associated with maths-, science- and career-related terms. Female names were primarily linked with artistic- and family-related terms.

There were also strong links between European or American names and pleasant terms, while African-American names were often associated with unpleasant terms.

“Our work has implications for AI and machine learning because of the concern that these technologies may perpetuate cultural stereotypes,” said the researchers. “Already, popular online translation systems incorporate some of the biases we study but further concerns may arise as AI is given agency in our society.”
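To make the idea concrete, the sketch below shows, in much-simplified form, how such associations can be measured: each word is represented as a vector, and a name’s bias score is the difference between its average similarity to career- and science-related terms and to family- and art-related terms. The tiny vectors and word lists here are invented purely for illustration; the actual study derived its results from embeddings trained on millions of words of online text.

```python
# A minimal, illustrative sketch (not the researchers' actual code) of how
# word-association bias can be surfaced in word embeddings: compare a target
# word's average cosine similarity to two sets of attribute words.
# The 3-dimensional vectors below are invented so the example runs standalone.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(target, attrs_a, attrs_b, vectors):
    """Mean similarity to set A minus mean similarity to set B."""
    sim_a = np.mean([cosine(vectors[target], vectors[w]) for w in attrs_a])
    sim_b = np.mean([cosine(vectors[target], vectors[w]) for w in attrs_b])
    return sim_a - sim_b

# Toy vectors, hand-picked so the example produces a visible skew.
vectors = {
    "john":    np.array([0.9, 0.1, 0.2]),
    "amy":     np.array([0.1, 0.9, 0.2]),
    "science": np.array([0.8, 0.2, 0.1]),
    "career":  np.array([0.7, 0.3, 0.1]),
    "family":  np.array([0.2, 0.8, 0.1]),
    "art":     np.array([0.1, 0.7, 0.3]),
}

career_terms = ["science", "career"]
family_terms = ["family", "art"]

for name in ("john", "amy"):
    score = association(name, career_terms, family_terms, vectors)
    print(f"{name}: {score:+.3f}  (positive = closer to career/science terms)")
```

Run on real embeddings, the same comparison is what reveals the skews the researchers reported: the bias lives in the data the vectors were trained on, not in any single rule written by a programmer.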

Garbage in, garbage out

Left unaddressed, this kind of bias has the power to significantly affect people’s lives as the world’s most influential institutions ramp up their use of AI.

For instance, there are already claims that it has led to racial and gender discrimination against bank loan applicants, as a result of the systems drawing upon data that reflects social and cultural inequalities. After all, AI is only as good as the information it is fed. ‘Garbage in, garbage out’, so the saying goes.

Many financial services companies are now focusing their efforts on developing ethical AI, in the hope of setting new standards across the sector as a whole.

But, as Jiahao Chen, formerly a research scientist at Capital One and now VP of AI Research at JPMorgan Chase, has pointed out, it’s no easy prospect.

“The first clear challenge is demonstrating that a machine-learning model meant to make credit decisions complies with fair-lending laws. Laws like the [United States’] Equal Credit Opportunity Act require banks to show that the way they extend credit to customers does not discriminate on the basis of protected classes such as race, colour, religion, national origin, sex, marital status and age. However, translating these legal notions into precise mathematical statements immediately presents the problem of having multiple legal notions of fairness.

“There is ‘disparate treatment’, treating people differently based on their protected attribute, and also ‘disparate impact’, in which the outcome of a policy could be evidence of discrimination.

“Banks want to be fair in both senses, with respect to the inputs to a decision as well as the outcomes of a decision.

“The challenge of fully mitigating both disparate treatment and disparate impact risks requires a discussion between business leaders, data scientists and legal experts to determine the best risk management strategy for each application. It also requires a decidedly human-centred approach to instil confidence that machine-generated decisions are being made with the customer’s interest in mind.”
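One widely cited outcome-based check is the ‘four-fifths’ rule, under which an approval rate for one group that falls below 80 per cent of the rate for another is treated as a warning sign of disparate impact. The sketch below illustrates that calculation with invented data; it is not drawn from any bank’s actual risk-management process.

```python
# A minimal sketch of a common disparate-impact check: compare approval rates
# across two groups and flag ratios below the "four-fifths" (80%) benchmark.
# The decision data and threshold here are illustrative only.

def approval_rate(decisions):
    """Share of applications approved; decisions is a list of booleans."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower approval rate to the higher one."""
    rate_a, rate_b = approval_rate(group_a), approval_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical decision outcomes (True = loan approved) for two groups.
group_a = [True, True, True, False, True, True, False, True]     # 75% approved
group_b = [True, False, False, True, False, True, False, False]  # 37.5% approved

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Below the 80% benchmark: the outcome warrants further review.")
```

Notably, a check like this says nothing about disparate treatment: a model can pass or fail it regardless of whether protected attributes were ever used as inputs, which is why Chen argues both notions have to be managed together.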

Next-generation enculturated AI

Fair-lending legalities are the tip of the iceberg when it comes to developing the next generation of ‘enculturated’ AI systems. Another key factor is cultural context: since ethics and values vary from community to community across the world, whose culture are we talking about when we discuss ethical AI? Or is it possible that AI systems could be designed to take cultural diversity into account?

Kenneth D Forbus, Walter P Murphy Professor of Computer Science and Professor of Education at Northwestern University in the US, is closely involved in research into the relationship between AI and culture. Recognising that people’s choices are rooted in their environment, upbringing and experience, he believes “creating AI systems that can take culturally influenced reasoning into account is crucial for creating accurate and effective computational supports for analysts, policymakers, consumers and citizens”.

Indeed, headway has already been made in developing AI that can do this by analysing cultural narratives. “Recent progress has provided systems that form the basis for a new analogy-based technology for AI. That is, given a new problem, a system can use a human-like retrieval process to find a similar prior situation and ascertain how it applies,” he explains.
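At its simplest, that retrieve-and-reuse idea can be sketched as follows: describe situations as sets of features, and fetch the prior case that overlaps most with the new one. Forbus’s analogical systems match far richer relational structures than flat feature sets, so the similarity measure and example cases below are purely illustrative.

```python
# A deliberately simplified sketch of retrieval by similarity: given a new
# situation described as a set of features, find the most similar prior case
# in a small library. Feature overlap (Jaccard similarity) stands in here for
# the much richer analogical matching used in real systems; all cases are
# invented examples.

def jaccard(a: set, b: set) -> float:
    """Overlap between two feature sets, from 0 (disjoint) to 1 (identical)."""
    return len(a & b) / len(a | b)

# Hypothetical library of prior situations and how each was resolved.
case_library = [
    ({"new_regulation", "rural_community", "strong_local_leaders"},
     "worked through trusted local figures"),
    ({"new_regulation", "urban_community", "media_campaign"},
     "relied on broad public-information campaigns"),
    ({"trade_negotiation", "shared_economic_interest"},
     "found common ground on economic terms"),
]

def retrieve(new_situation: set):
    """Return the stored case most similar to the new situation."""
    return max(case_library, key=lambda case: jaccard(new_situation, case[0]))

new_situation = {"new_regulation", "rural_community", "low_trust_in_government"}
features, precedent = retrieve(new_situation)
print("Closest prior case:", features)
print("How it was handled:", precedent)
```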

Predicting reactions and reaching agreement

Forbus predicts that building cultural models via analogical learning from a culture’s narratives could eventually lead to AI systems that, for instance, would assist decision-makers in understanding how different cultural groups might react to new regulations, or help negotiators to find common ground.

“As AI systems become more intelligent and flexible, having them become full-fledged partners in our culture seems like a promising way to ensure that they are beneficial in their impacts,” he reasons.

So, as computer scientists and developers grapple with the issue of how to deploy objective yet culturally relevant AI in a complex human world, the future of machine intelligence, it seems, lies chiefly in its input.

Get the data quality right, and the possibilities could, in theory, be endless. Particularly in business.

“Consider for one moment a truly data-driven organisation,” said Jean-Philippe Courtois, Microsoft EVP and President of Global Sales, Marketing and Operations, at the 2018 World Economic Forum. “The ability to utilise information in real-time across the entire organisation to make fluid business decisions can be transformative to a company’s culture.”

This article is from the latest ‘Culture’ edition of the Odgers Berndtson global magazine, OBSERVE.
