
Fending Off the AI Winter


In the early 1970s, the field of AI froze over. An academic paper known as the Lighthill Report emerged, filled with scathing criticism of AI development and its history of overhype and under-delivery. The report marked the start of the first AI winter – a period in which interest, funding, and general faith in AI development hit an all-time low. It would take several years for this technological ice age to thaw and for AI to once again bask in the warm glow of positive public opinion – until it found itself in a second winter of frozen progress in the late 1980s (1).

Thus far, AI’s hype/hate cycles have followed a recurring theme: broken promises – the hype around AI’s potential overshadowing its actual ability. In a world where celebrities are embarrassed to have been associated with NFTs, Elon Musk’s image as mega-rich tech-lord is disintegrating, and Meta insists that its Metaverse is definitely – 100 percent – going to be a big deal, it doesn’t seem all that unlikely that public opinion of AI could once again turn sour.

Could better education on AI tackle this history of broken promises by giving us more realistic expectations? It seems possible. To most, AI is a nebulous concept. Do you know what it is? Is it the same as deep learning? Natural language processing? And what’s all this you’ve been hearing about neural networks? Don’t worry if you’re confused. According to one survey on AI in digital pathology, there was “a uniformly low perceived knowledge of AI” among respondents (2). In another study, 44.1 percent of respondents felt that training from an AI platform representative would help with its future implementation in the lab, and 29 percent believed that a dedicated course or workshop would be necessary (3).

This shouldn’t come as a surprise; pathologists are experts in healthcare, not technology. However, the knowledge gap at the pathologist level leaves AI-powered healthcare susceptible to forces both well-intentioned and nefarious – forces that could be deliberate or completely unconscious.

“Pathologists no doubt want what’s best for their patients, and making sure AI tools are safe and ethical is essential for that,” says Francis McKay, Research Fellow in ethics and social implications of digital health at the University of Oxford. McKay is something of an AI activist. He has recently co-authored a paper on the ethical challenges of AI in pathology (4), and co-curated digitisingdisease.com, a website-cum-exhibition designed to bring critical AI studies to the general public. “That said, there may not be a deep or applied understanding of what the ethical issues are or how to solve them, which is understandable given their novelty and complexity,” he continues. “Admittedly, whether pathologists actually need such knowledge for their work is unclear – there are a lot of potential ethical issues and some may be more relevant than others depending on the work they do. A broad overview of key issues and solutions will help situate their work in context, however, and allow them to use AI with confidence going forward.”

To find out how education may lead to a more ethical AI future, I sat down with McKay to discuss critical AI studies and how they relate to pathology and laboratory medicine. I wanted to know: can we keep the perceptions of excited (and informed) professionals warm enough to fend off another AI winter?

First of all, what motivates your interest in this topic?
 

It’s clear that healthcare is going to be an important domain for AI research over the coming years, but we know from ongoing work in critical AI studies that it can pose significant ethical challenges for individuals and communities. We must learn from that work and investigate how it applies to healthcare.

I’m also motivated by the general need to improve professional and public understanding of applied ethical issues around medical AI. For instance, research suggests that there is a lack of understanding among pathologists about the ethical challenges of digital pathology, as well as a desire for more understanding (5). Based on my own research with patients and the public, I’d say there’s also a sense of poor understanding and acceptance of AI in the general public – even more so when it comes to its application to healthcare – and a need to be reassured over its use.

The National Pathology Imaging Cooperative has been working to develop an infrastructure for ethical development of AI in pathology over the past few years. We thought it would be a good idea to communicate some of the things we’ve learned from that process, both to help improve that understanding and to be transparent in our own ethical decision-making. Our most recent article (4) is a general summary of the key issues we encountered and how we responded to them to help guide others interested in developing similar systems.

With such a strong push toward AI and digital technology, are ethical factors a big enough part of the equation?
 

I think increasing numbers of people are interested in AI, both in pathology and in healthcare more generally. Whether the focus on ethics is “enough” is hard to say given the novelty of the situation. We must remember that AI is relatively new as medical technologies go (indeed, in many ways, it is more of a promissory technology in the hospital at the moment than one patients and professionals encounter regularly). So it is bound to take time for ethical awareness to mature.

That said, I don’t think ethical factors have been entirely absent; rather, certain ethical issues may have dominated the conversation more than others. For instance, there is a strong discourse on data privacy and there are numerous media narratives about the apocalyptic and existential threats of AI. Both have done much to steer public and professional understanding of the ethics in a particular direction and have consequently led, especially in the case of data privacy, to multiple ways of addressing them. But these can sometimes eclipse awareness of other ethical issues concerning the downstream social impacts of AI, such as how it might contribute to bias or the appropriate limits of commercial involvement. Part of the concern is that, if we don’t widen the discourse, we might curtail the kinds of ethical interventions we can develop.

In 2021, the WHO outlined six ethical principles for health AI. Clearly, conversations are being had. Are they being heard?
 

There are a lot of ethical frameworks out there (6) and it can be easy to get lost in them all. Moreover, those general principles can be pitched at such a level of abstraction that it can be hard to figure out how they apply to everyday contexts. That’s why pathology-specific frameworks are useful; they allow general principles to be translated into applied domains with which pathologists are more familiar. Our article hopefully responds to that need by providing a general heuristic of the key issues to prioritize right now (that is, as digital pathology systems are being deployed). The novelty, complexity, and dominance of certain ethical narratives also affect our ability to develop that ethical awareness. And we probably shouldn’t overlook a possible impact of the pandemic, which has arguably focused our attention on some issues over others and left us with little capacity to reflect at length on AI ethics.

What effects might AI ethics have on the patient journey or ultimate outcomes?
 

I think it’s fair to say that we won’t be able to get professional and public support for AI without being able to show that medical AI tools are developed safely and ethically. In that sense, it’s an essential part of the care infrastructure – just as important as putting scanners into hospitals or training AI on histopathology data. On that point, it should be noted that there’s also a real problem around obtaining a social license for big data and AI-driven health research in general, which we’ve seen in responses to things like care.data and General Practice Data for Planning and Research. All this means that evidencing the ethical underpinnings of the work is essential for patients and professionals to accept these new technologies. Without that social license, we may see another AI winter in which no public benefit can be derived from medical AI because researchers are too wary of developing such tools or because funding dries up.

That answer addresses the most general level, but there are also more specific possibilities; for instance, if part of developing ethical AI is ensuring equity of service by limiting things like algorithmic bias, it will have a direct effect on ensuring that all communities, not just a subset, can share in the technology.

How much responsibility for awareness of these issues falls on pathologists and laboratory medicine professionals?
 

I see ethics as distributed across the care system. Some issues may more directly relate to individual pathologists and laboratory professionals; others might be more relevant to data scientists, data access committees, and so on. That said, there is value in pathologists and laboratory professionals cultivating an awareness of the ethical issues for a couple of reasons. One is to help further the discourse on the ethical issues from an applied perspective. In many ways, we don’t know what all the issues are, so pathologists can play a role by highlighting other problem spaces or developing more nuanced solutions once they reflect on ethical challenges. Another reason is to communicate with and reassure others, including patients and the public, about the use of AI in the service of healthcare. There’s a great deal of public concern around AI, and there’s also a great deal of discussion on the importance of explainable AI – in other words, making its internal workings clear to provide the public with the reassurance they need. But how much information do patients need to make informed choices about their own care? And do healthcare professionals currently possess enough knowledge of AI to explain it when asked? Greater technical and ethical understanding of AI can only support efforts to communicate with others who need reassurance.

How do we effectively safeguard against misdiagnosis or misleading conclusions by AI?
 

It’s not yet clear what role AI will play in diagnosis. There’s a spectrum of possibilities from providing optional overlay information when assessing a histopathology slide all the way up to full automation. As far as I see it, AI is most likely to be used as an assistive technology, rather than a fully automated process, though some lightweight tasks may be automated with little concern. Nonetheless, keeping a human in the loop is one crucial way to prevent machine misdiagnosis. Interestingly, however, it works both ways – pathologists are also fallible in their diagnoses and can have differences in opinion in their clinical judgments. AI could play a role in that regard by standardizing diagnosis, suggesting alternatives, or highlighting things that might otherwise be overlooked.

In addition to keeping both a human and an AI in the loop, two further safeguards will go far toward preventing misdiagnosis: ensuring that datasets are representative of the population and the range of cancers, and rigorously validating AI tools.
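To make that last point concrete, subgroup-level validation can start with something as simple as reporting a model’s performance separately for each patient group. The short Python sketch below is purely illustrative – it assumes hypothetical scores, labels, and subgroup tags rather than any specific tool discussed above, and uses scikit-learn’s roc_auc_score.

```python
# Illustrative sketch only: checking whether a diagnostic model performs
# consistently across patient subgroups (hypothetical data).
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical arrays: ground-truth labels, model scores, and a subgroup
# tag per case (e.g. scanner site, ethnicity, or cancer subtype).
y_true = np.array([0, 1, 1, 0, 1, 0, 1, 1, 0, 0])
y_score = np.array([0.2, 0.9, 0.7, 0.3, 0.8, 0.4, 0.6, 0.95, 0.1, 0.35])
subgroup = np.array(["site_a", "site_a", "site_b", "site_b", "site_a",
                     "site_b", "site_a", "site_b", "site_a", "site_b"])

# Report AUROC per subgroup; a large gap between groups is one signal of
# the kind of algorithmic bias discussed above.
for group in np.unique(subgroup):
    mask = subgroup == group
    auc = roc_auc_score(y_true[mask], y_score[mask])
    print(f"{group}: AUROC = {auc:.2f}")
```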

What one thing can AI users (or future users) do now to address potential ethical issues?
 

The answer to that question depends on who is considered an “AI user.” In some ways, it’s everyone – health data researchers, pathologists, patients, and more. In that case, there’s probably no one thing that captures them all, because there are different ethical demands based on different users’ relationships to the technology. All that said, a general first step is to be informed about what the ethical issues are and to reflect on what emergent issues might be. Fortunately, there’s a growing body of literature on the topic. Our article offers one entryway specifically for pathologists and our website serves a non-expert audience. And for anyone who wants to take it further, a whole field of critical AI studies awaits!

When it comes to the ethics of AI, do you have opinions or concerns to share? Please join the conversation by email – [email protected] – or using the comments section below.


  1. B Lutkevich, “AI winter” (2019). Available at: https://bit.ly/3bXgqrQ.
  2. MR Giovagnoli et al., Healthcare, 9, 1347 (2021). PMID: 34683027.
  3. S Sarwar et al., NPJ Digit Med, 2, 28 (2019). PMID: 31304375.
  4. F McKay et al., J Pathol Clin Res, 8, 209 (2022). PMID: 35174655.
  5. C Coulter et al., J Pathol Clin Res, 8, 101 (2021). PMID: 34796679.
  6. A Jobin et al., Nat Mach Intell, 1, 389 (2019).
About the Author
George Francis Lee

Deputy Editor, The Pathologist

Interested in how disease interacts with our world. Writing stories covering subjects like politics, society, and climate change.
