
The Robot May See You Now

Every journal, website, and conference is hailing artificial intelligence (AI) as the latest, greatest laboratory tool. AI advocates say no corner of laboratory medicine will go untouched by its influence. Upon its arrival, lab professionals will have unprecedented hours to spend on difficult cases, interact with patients, or even enjoy overdue downtime baking bread or simply taking a break. The impact of this technology will be immense and its effects everlasting. Much like the microscope, AI may leave pathologists incapable of imagining a world without it. That’s right; the AI train is coming – so jump on board or get off the tracks!

Despite my sarcastic techno-optimism, I should make it clear that I’m not anti-AI. The power of AI and related technologies is well documented, and it’s already in widespread use in our everyday lives – from the personal assistant on your smartphone to the way the bank assesses your credit score – and quite likely in many other aspects you’ve not even considered. But potential and promises do not exempt a technology from skepticism – especially when it is going to be deeply integrated into healthcare settings. Though it might sound like a win-win situation for everyone involved, the implementation of AI is a matter of extremes – at best, it has the potential to empower patients and professionals and increase healthcare equity; at worst, it could exacerbate the most miserable parts of healthcare in late-stage capitalism (1).

The ethical side of AI is under-reported compared with its power to transform healthcare, and when it does occasionally step into the limelight, some ethical dimensions take precedence over others – even in bioethics circles. One analysis of 85 ethical guidelines from across the globe found that sustainability, dignity, and solidarity were significantly underrepresented compared with other ethical considerations (2).


But what’s so bad about AI? Or, more accurately, what’s not-so-great about it? Certainly, it can be difficult for non-experts to dig out the answers from the PR packages and the slick speeches of wily copywriters. But, drowned out by the fanfare from Silicon Valley, more nuanced conversations are taking place – discussions that are less focused on what AI could do and more on what it shouldn’t.

Welcome to the world of AI ethics!

Interrogating the machine
 


“We realized that was a massive mistake.”

These are the words of Eric Brown, the IBM research scientist responsible for the creation of Watson (a supercomputer that beat Jeopardy!’s two best players). Out of context, Brown’s bold statement might seem like some doomsayer declaration of the coming AI apocalypse, but he was actually referencing the fact that some of Watson’s dataset had been pulled from Urban Dictionary – the crowdsourced and sometimes dubious definition site (3).

Like many other question-answering systems, Watson had struggled with the fluid and often instinctive way people use slang and other non-standard word forms. Perhaps not fully aware that the site is known for ironic user submissions and a lax approach to profanity, the team sought to address Watson’s fluency issue by importing hundreds of thousands of entries from Urban Dictionary. They quickly noticed Watson’s new proclivity for inappropriate language after it rather snarkily answered a researcher’s question with the word “bullsh*t.” Unsurprisingly, IBM engineers washed Watson clean of the colorful data well before its television debut.

Watson’s short-lived potty mouth is a good example of the effect that human input has on AI. A supercomputer with attitude is one thing – algorithmic bias is quite another. The risk of humans developing skewed systems is a big topic in the AI ethics discussion. And the risk isn’t just hypothetical. It’s no secret that algorithms used to decide which US inmates deserve parole have been shown to replicate known human biases (4). In practice, that means Black defendants are deemed more likely to reoffend than the data indicate – and significantly more likely than their White counterparts. Can we afford to let such biases stand in healthcare?

The topic gets even more nuanced with “black box” AI, in which the mathematical models are incredibly difficult to understand, even for experts (5). What happens if the researchers asking the questions don’t understand how the AI has reached a particular conclusion? Surely, if one of the supposed benefits of human-free systems is the lack of bias and other human baggage, any AI tech that purports to be entirely objective must be placed under the greatest levels of scrutiny.

Five pillars of AI ethics
 

Bias is just one (big) area of ethical consideration when it comes to AI implementation. So what else do we need to think about when using the technology in pathology and laboratory medicine? The World Health Organization and the European Union, among others, have their own frameworks for ethical AI use, but here we present five key aspects – starting with objectivity.

Objectivity

We’ve already outlined some of the biases that humans can impose on AI, but there are plenty of specific examples of AI bias affecting pathology. A 2021 paper showed that algorithms trained on public US chest X-ray datasets systematically underdiagnosed patients from underserved groups, such as female, Black, and Hispanic patients, as well as those of low socioeconomic status (6). Another case saw Black patients missed for vital kidney transplants due to a race-based algorithm (7). In these cases and others like them, social factors influence the dataset, causing the AI to reinforce existing biases.

These weaknesses of an AI approach have historically been overshadowed by the technology’s otherwise exciting potential. Perhaps, like a technological honeymoon period, the biases that “[affect] the data and shape the design of the algorithm [are] now hidden by the promise of neutrality and [have] the power to unjustly discriminate at a much larger scale than biased individuals” (8).

Experts recognize this issue and recommend addressing it by involving pathologists in AI development – from the very start of a project. And the pathologist’s role shouldn’t end at development; regular monitoring and quality control will be required if AI is to remain reliably accurate.
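To make that monitoring concrete, here is a minimal sketch – written in Python, with entirely hypothetical data, group labels, and column names – of the kind of subgroup audit a laboratory might run on a validation set, comparing how often a model misses truly positive cases in different patient groups. It illustrates the idea rather than prescribing a method.

```python
# Illustrative only: a minimal subgroup audit of a diagnostic model's output.
# The records, group labels, and column names below are hypothetical.
import pandas as pd

# Hypothetical validation results: patient group, true label, model prediction
results = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B", "B", "A"],
    "true_label": [1,   0,   1,   1,   1,   0,   1,   0],
    "predicted":  [1,   0,   0,   0,   1,   0,   0,   0],
})

for group, subset in results.groupby("group"):
    positives = subset[subset["true_label"] == 1]
    # Underdiagnosis rate: share of truly positive cases the model missed
    missed_rate = (positives["predicted"] == 0).mean()
    print(f"Group {group}: {len(positives)} positive cases, "
          f"underdiagnosis rate = {missed_rate:.2f}")
```

Run routinely on real validation data, even a check this simple can surface the kind of group-level underdiagnosis gap described in the chest X-ray study above before it reaches patients.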

Privacy

Technology and privacy become more entangled every day. Gone are the days when common advice was to keep anything personal off the Internet. In a futuristic world where all kinds of technologies are implemented across every level of healthcare, what protections will there be with respect to data collection, surveillance, and consent? Who has the general public’s best interests in mind? In our modern world, information is often one of the most lucrative (and sought-after) assets. Without strict protection from loopholes and bad actors, there are genuine concerns about the buying and selling of personal medical data. As author and speaker Bernard Marr poignantly put it, “Unregulated data-mining causes a whole different set of problems – privacy issues as well as the imbalance of power which is caused by information being in the hands of the few, rather than the many” (9). Interestingly, the WHO highlights that “even informed consent may be insufficient to compensate for the power dissymmetry between the collectors of data and the individuals who are the sources” (10). Many have suggested that AI governance in pathology and healthcare should be established at national and institutional levels to safeguard patient interests.


Privacy may become a bigger issue as we progress into multimodal AI, where systems pull in data from many different sources – everything from extensive biobanks and health records to your smartwatch. In an effort to stay impartial, let’s call this whirlpool of information a “privacy bad dream” rather than a full-blown nightmare. Protection does exist, of course – for example, the Health Insurance Portability and Accountability Act in the US – but it currently does not extend to all types of healthcare data, such as the user-generated and de-identified kinds. European legislation reaches further; the General Data Protection Regulation casts a much wider net, including provisions on disclosing how AI systems use and process people’s data to make decisions (11).

Solutions for this technical tangle of privacy protections are already being developed. Some propose federated learning, in which algorithms are trained across multiple decentralized servers without the underlying data ever leaving them. Others suggest differential privacy, in which general patterns in the data are shared but individual identities are kept hidden. Methods like these seem to address the problem, but obscuring the granular accuracy and detail of data in the name of ethics ultimately poses a difficult question: what do we value more – performance or privacy?
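As a rough illustration of the differential privacy idea, the sketch below (in Python; the records, the query, and the epsilon values are all hypothetical) releases a simple case count only after adding calibrated random noise, so that no single patient’s presence in the dataset can be confidently inferred from the published figure.

```python
# Illustrative sketch of differential privacy via the Laplace mechanism.
# Records and epsilon are hypothetical; a real deployment needs careful
# calibration and full privacy accounting.
import numpy as np

def noisy_count(records, predicate, epsilon=1.0):
    """Return a count with Laplace noise scaled to the query's sensitivity (1)."""
    true_count = sum(1 for r in records if predicate(r))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical de-identified records: age and a binary diagnosis flag
records = [{"age": 54, "diagnosis": 1}, {"age": 61, "diagnosis": 0},
           {"age": 47, "diagnosis": 1}, {"age": 70, "diagnosis": 1}]

print(noisy_count(records, lambda r: r["diagnosis"] == 1, epsilon=0.5))
```

The smaller the epsilon, the stronger the privacy guarantee – and the noisier (and so less useful) the released statistic: the performance-versus-privacy trade-off in miniature.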

Transparency

Another major issue for healthcare AI is transparency. When, if ever, is it appropriate to let a patient know their diagnosis was determined using AI? Does the patient have a right to know? What if the system simply confirmed the pathologist’s initial impressions? Perhaps the answers depend on the diagnosis and the patient’s level of involvement.

But it’s hard to ignore the potential damage to patient–practitioner trust caused by a failure to explicitly disclose the use of AI support. On the other hand, it is important to remember that most patients are not AI experts. To gain informed consent, practitioners first need to be able to give patients the knowledge required to make an informed decision – in the most appropriate and useful format.

Accountability

Is AI considered a product? The answer is unclear – and that makes the question of who is liable when AI does not work as intended a complicated one. The urge to pin liability on the developers of AI software may give patients a route to compensation, but it may ultimately encourage companies to leave the field rather than shoulder the financial risk. Similarly, holding practitioners liable for the failings of third-party software feels unfair and would almost certainly discourage widespread use by medical professionals. Should we leave the debate for the courts to decide – enjoying an agreeable status quo while it lasts?


Sustainability

With a future of continued anthropogenic climate change ahead, there are nearly endless ways we can (and must) adapt to become better stewards of the planet. Healthcare is responsible for its fair share of resource-guzzling and emissions-belching, but more intensive technologies often need increasing amounts of energy to run – and AI is no exception. The promise of a technological future is often sobered by its potential environmental impact on an already strained planet.

What carbon emissions lie at the feet of AI? It’s hard to gauge, because the numbers depend heavily on location, time, and energy sources. But one study found that carbon dioxide emissions for one AI system reached 28 kilograms in a single month (12). Another concluded that training an AI model can emit more carbon than five cars across their lifetimes, even factoring in the emissions needed to manufacture them (13). But how do we decide which parts of healthcare are worth the emissions needed to fuel them? Although more pathologists are becoming aware of the impact of their work, and the move toward more sustainable labs is laudable, we may run the risk of over-egging the size of healthcare emissions compared with other, perhaps less altruistic, sectors.
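The arithmetic behind such estimates is straightforward even when the inputs are not: energy consumed multiplied by the carbon intensity of the electricity that supplied it. The sketch below shows the shape of the calculation; every figure in it is an assumption chosen purely for illustration, and location, hardware, and energy mix can shift the answer by an order of magnitude.

```python
# Back-of-the-envelope estimate of training emissions.
# Every number below is an assumption for illustration, not a measured figure.
gpu_power_kw   = 0.3    # assumed average draw per GPU (kW)
num_gpus       = 8      # assumed size of the training cluster
training_hours = 120    # assumed wall-clock training time
pue            = 1.5    # assumed data-centre overhead (power usage effectiveness)
grid_intensity = 0.4    # assumed kg CO2e per kWh; varies widely by region

energy_kwh   = gpu_power_kw * num_gpus * training_hours * pue
emissions_kg = energy_kwh * grid_intensity
print(f"{energy_kwh:.0f} kWh of electricity, roughly {emissions_kg:.0f} kg CO2e")
```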

Speaking on the matter in its ethical guidelines, the WHO said that “AI systems should be designed to minimize their environmental consequences and increase energy efficiency. That is, use of AI should be consistent with global efforts to reduce the impact of human beings on the Earth’s environment, ecosystems and climate” (10).

Though our five-point primer is far from exhaustive, I hope it at least offers a springboard for further enlightening discussion.

Let’s not throw the bot out with the bathwater…
 

Though I began this piece in a sarcastic tone, it really does feel like AI will change the face of pathology. Its potential is far too great to ignore – so our question must become: how can we ensure that AI usage is both useful and ethical?

Well, if I’ve learned anything in my research into this topic, it’s that AI implementation needs to be a two-way street. First, any company that is active in this space must reach out to pathologists and laboratory medicine professionals to understand their daily workflows, needs, and pain points in as much detail as possible. Second, pathologists, laboratory medicine professionals, and educators must all play their important part – willingly offering their time and expertise when it is sought, or proactively getting involved. And finally, it’s clear that there is an imbalanced focus on certain issues – with privacy, respect, and sustainability falling by the wayside.

Pathology is a field stymied by increasingly high volumes of work being loaded onto an already strained workforce. AI tools – if properly, safely, and ethically integrated into existing workflows – could help pathologists better manage those workloads, creating more time for patients, difficult cases, and academic research.

Pathology’s AI adoption and its integration with genomics, radiology, and clinician note-taking could enable precision medicine in the truest sense of the buzzword – ushering in an era that really does look like the future of healthcare. But as the ethical foundations for artificial intelligence are still being laid, we would do well to remember that AI is just a tool – and it doesn’t always have to come out of the bag.

We hope this discussion offers you a new way of thinking about AI. Coincidentally, our sister publication, The Ophthalmologist, published a piece on ethics in AI back in 2020. What is most striking is that, two years and one pandemic later, the topic doesn’t seem to have gained much ground. It’s becoming clearer that these discussions on ethics need to leave the hypothetical and enter the actual. So let’s start: do you have any thoughts, questions, or concerns about AI in the lab? This truly is only the beginning of a much longer conversation, and we want to hear your side of it. Please speak your mind by email – [email protected] – or in the comments section below.


  1. D Leslie et al., “Does ‘AI’ stand for augmenting inequality in the era of covid-19 healthcare?” BMJ, 372, 304 (2021). PMID: 33722847.
  2. A Jobin et al., “Artificial Intelligence: the global landscape of ethics guidelines,” Nat Mach Intell, 1, 389 (2019).
  3. D Hanson, “Urban Dictionary used in IBM Watson technology?” (2011). Available at: https://bit.ly/3dqXMJB.
  4. J Larson et al., “How We Analyzed the COMPAS Recidivism Algorithm” (2016). Available at: https://bit.ly/3SEpmmy.
  5. W Knight, “The Dark Secret at the Heart of AI” (2017). Available at: https://bit.ly/3Afuq9Y.
  6. L Seyyed-Kalantari et al., “Underdiagnosis bias of artificial intelligence algorithms applied to chest radiographs in under-served patient populations,” Nat Med, 27, 2176 (2021). PMID: 34893776.
  7. T Simonite, “How an Algorithm Blocked Kidney Transplants to Black Patients” (2020). Available at: https://bit.ly/3TbJKv1.
  8. R Benjamin, “Assessing risk, automating racism,” Science, 366 (2019). PMID: 31649182.
  9. B Marr, “Here’s Why Data Is Not The New Oil” (2018). Available at: https://bit.ly/3fFlVNz.
  10. World Health Organization, “Ethics and governance of artificial intelligence for health” (2021). Available at: https://bit.ly/3fGDXPf.
  11. JN Acosta et al., “Multimodal biomedical AI,” Nat Med (2022). PMID: 36109635.
  12. “AI’s carbon footprint and a DNA nanomotor – the week in infographics,” Nature (2022). PMID: 35896665.
  13. K Hao, “Training a single AI model can emit as much carbon as five cars in their lifetimes” (2019). Available at: https://bit.ly/3pfZwYx.
About the Author
George Francis Lee

Deputy Editor, The Pathologist

Interested in how disease interacts with our world. Writing stories covering subjects like politics, society, and climate change.
