After our basic introduction to AI in healthcare, we’re exploring the very real ethical issues associated with a technology that has such enormous potential to disrupt just about every aspect of healthcare.

Knowledge gap

AI’s progress is rapid, but the stakes are as high as they get. We simply cannot afford to run before we can walk.

We clearly need to carry out much more research before we routinely interpose a machine between a medical professional and a patient. That applies across the whole cycle of diagnosis, treatment decision-making and the treatment itself, especially in circumstances where there is little or no human involvement in the process.

There are plenty of minds trying to fill that knowledge gap. Studies have recently been published or commissioned by such learned institutions as Stanford University (see NEJM 2018;378:981), the Nuffield Council on Bioethics, the UK Government’s Centre for Data Ethics and Innovation, the Commons Science and Technology Select Committee, the House of Lords, Wellcome and the Royal Society, to name just a few.

Good data

Machine learning that produces reliable, safe outputs in a health context is utterly dependent on data. Only if enough high-quality data can be gathered, stored and shared can robust algorithms be trained for different diseases or clinical situations.

That requires access to as many electronic medical records as possible, with all the inbuilt problems of consent, patient confidentiality, data integrity, GDPR compliance and governance.

These are all problems the NHS has been working to address for many years.

Another layer of complexity: the more we learn about personalised medicine and individual genomic mutations and differences within one broad disease area, such as breast cancer, the less certain any single generic therapeutic decision becomes. Resolving that uncertainty needs, yes, more data.

A successful machine learning process will have an insatiable appetite for good data.

That data will need to be constantly refined and updated from experiential learning. Will the NHS have the capacity to “feed the beast”?

Ethical questions

The recent report commissioned by the UK Health and Social Care Secretary from Dr Eric Topol, author of The Creative Destruction of Medicine, concluded that inadequate algorithms can “hurt a lot of people really fast”.

Is it ethical to allow a physician to be guided in his or her diagnosis and treatment plan, or even replaced completely, by a robot – even one that learns from its successes and failures?

A key point here is that the doctor, and arguably to some extent the patient, must understand how the algorithm-based ‘Black Box’ has been constructed and why it makes the decisions it does. And, importantly, where any inbuilt weaknesses might lie.

Built-in bias

Many algorithms may carry inherent biases: they may be trained on outdated data or on insufficient data for reliable machine learning, or they may simply reflect the biases of those who constructed the algorithm in the first place.

An algorithm might be successful for one disease area, say diabetes, in one country, but not in another. In a world where precision medicine is now the order of the day, it could be inherently dangerous to apply a universal diagnosis and treatment plan based on data gathered in one area, on one type of patient, with one genetic mutation.

Consider these

Other ethical considerations around the construction of the algorithms that need further research are:

  • How can an AI system learn kindness, compassion and respect, and become trusted by the patient?

  • Can the AI system be programmed to take human rights, regulatory and data protection legislation into consideration, as well as reflecting the caring and ethical values inherent within the NHS?

  • Can an AI system learn to be ethical and make ethical decisions?

  • Might there be a possibility of racial or socio-economic bias being built into the algorithm, exacerbating current health inequalities?

  • How can we set up an evaluation and monitoring system to see how a particular algorithm might influence doctors to diagnose or prescribe in a particular way?

  • If a patient dies as a result of a computer-generated decision or treatment, who can we blame or sue? Is it possible to deconstruct the ‘Black Box’, not only to discover what went wrong and why, but also to find a ‘fix’? How would the GMC need to change?

  • How can we regulate and police the use of unethical algorithms and ensure that the values of the system designers are the same as the physicians who use the system?

  • Does a patient have a right to know if a doctor is using AI in their treatment and, if so, can they refuse to give their consent?

  • Are people with rare medical conditions liable to be disadvantaged because of a lack of sufficient data to populate a diagnostic and treatment algorithm?

Implications for an ageing population

Some of the key beneficiaries of an AI-based health system should be the elderly. It offers the exciting potential of constant remote monitoring and diagnosis, as well as early warning of a potential health emergency, without the need to book a GP appointment or dial 111.

But will it be ethical, let alone realistic, to expect an ageing, possibly lonely and isolated population group to embrace this new technology? Explaining and gaining acceptance of this new smart technology will be vital across generations, but particularly for this part of society. Many senior citizens would not be comfortable accepting diagnostic, treatment or prescription decisions made by a well-trained, widely experienced, intelligent but impersonal machine. Most would prefer an empathetic, caring, listening and responsive doctor who understands the wider contextual, social or mental health aspects of their particular case.

The way forward

This short article can only scratch the surface of an important issue that is already with us. It should nonetheless be clear that many open questions remain across many different areas of healthcare provision, and they need answers before AI can be universally and safely accepted across the NHS in all its possible applications.

Early indications are that this is indeed a problematic panacea. Governance, oversight and regulation are at the heart of achieving a successful way forward.

There is a need for regulation that is equipped to address the very broad issues raised here and in the many existing and ongoing studies.

Gaining acceptance

Education will also be vital as these new technologies become widely available and used. And the education of the patient is not the only consideration. AI systems will only be used successfully, sensitively and sensibly, and accepted by the general public, if the healthcare professionals who use them fully understand them and can optimise their output. More than that, they must be able to explain them to their patients, ensuring trust is maintained on all sides.

All that said, there are today many areas of remote monitoring, health research, data gathering, diagnostics, pathology and robotic treatment where AI is already saving lives, safely and ethically. Right now, it’s providing a huge service to the NHS and to mankind in general.

Our urgent task is to devote sufficient resources internationally to addressing the wider ethical issues thoroughly. Only then can the enormous future potential of AI, machine learning and neural networks be realised. And with that, the ultimate beneficiaries will come to see AI as a trusted and essential partner in their healthcare pathways, rather than an impersonal and uncaring spy in their sitting room.

Read our introductory article ‘What Every Healthcare Leader Should Know About AI’.

Carmel Gibbons

Carmel Gibbons is a London-based Partner and Head of the Healthcare practice. She is also a member of the Public Services and Not for Profit practice. Her work focuses mainly on senior appointments...
