What Does My Lab Report Mean?
Can AI help patients understand more about their disease?
Michele Mitchell | 8 min read | Opinion
The primary purpose of the 21st Century Cures Act in the USA is to make health data accessible to patients through electronic means, such as smartphones and patient web portals. The Act includes information-blocking regulations that bar healthcare providers, including laboratories, from engaging in any practice that may interfere with the access, exchange, or use of electronic health information.
Donald Rucker, former National Coordinator for Health Information Technology at the US Department of Health and Human Services’ Office of the National Coordinator for Health IT (ONC), led the formulation of the federal health IT strategy. In a 2020 podcast, he discussed the gap in healthcare and how increasing numbers of patients are investing in their health to stay well – not just get well – noting that health trackers, heart monitors, and symptom trackers are now the norm rather than the exception (1). Savvy patients are beginning to understand how unhealthy behaviors can affect their pocketbooks.
In addition, when patients become ill, they are learning more through online communities. Unfortunately, the internet lacks options to help patients understand the medical terminology within their test results. This gap creates opportunities for generative AI and large language models (LLMs) to assist with patient-friendly translations. For example, patients can use ChatGPT to answer their questions while they wait for the ordering physician to reach out regarding test results. However, the information retrieved might occasionally be as distressing as the test findings themselves…
The rise of messaging applications
Whilst electronic health record (EHR) software applications provide healthcare organizations with systems for managing patient data, records, and other clinical tasks, patient portals allow patients to access their health records, test results, and appointment schedules – and to communicate with their healthcare providers securely.
There has been a tremendous increase in messaging through patient portals in the US in recent years, driven by the Cures Act information-blocking rule, staffing shortages in the healthcare system, and the rapid adoption of telehealth and digital medicine during the pandemic.
EHR messaging has considerably enhanced the accessibility and coordination of patient care; however, a 2017 study revealed that primary care physicians were spending over half of their workday engaging with the EHR (2). In 2022 alone, physicians received over four million medical advice requests via the MyOchsner app (2).
To address this growing burden, solutions that integrate generative AI within portal functionality are starting to appear. These tools provide physicians with draft responses to patient questions submitted through the portal’s messaging feature. The doctor or staff member can either accept the draft response as is or tailor the message, saving time overall.
Partnerships between EHR platforms and cloud storage services are also beginning to emerge. Such partnerships facilitate consolidation and analysis of vast amounts of EHR data very efficiently. Furthermore, by incorporating generative AI in these collaborations, they can improve the overall user experience and empower patients to take a more active role in managing their health and wellness.
Leveraging AI for patient interactions
The advent of patient portals (and greater access to the internet and social media) has massively expanded the information currently available to patients. Pathologists should seize this opportunity and adapt quickly to accommodate the needs of those who are eager to understand all they can about their health condition. But we must meet patients on their own terms.
Following the almost instant dissemination of results to a patient portal, pathologists, traditionally regarded as the “doctor's doctor,” have the perfect opportunity to extend their involvement in patient engagement. Numerous small-scale ventures have already emerged within healthcare institutions across the US, where pathologists establish pathology clinics to personally interact with the patients under their care (3). Services like these are ideal candidates for the enhancements that generative AI messaging could provide.
Should patients trust AI?
Concerns about generative AI apps include information overload and confusion for those seeking answers to complex medical issues. In addition, there are concerns regarding AI tools disseminating hallucinations – responses in which a model guesses at an answer by fabricating information that doesn’t exist – potentially rendering the information inaccurate in a way that may not be obvious to the patient.
Though the makers of ChatGPT have claimed they are trying to mitigate hallucinations, the company does include a disclaimer indicating that the tool should not be used to diagnose serious conditions, suggest treatments, or manage life-threatening issues (4).
Furthermore, the nature of AI training methods might perpetuate and amplify existing biases and stereotypes in society, leading to further discrimination and inequality. Suppose, for instance, a healthcare provider implements an AI-powered chatbot to assist patients in understanding their medical lab results. The chatbot is trained on a dataset of previous lab results and corresponding diagnoses to provide explanations and answer patient queries.
However, if the training data predominantly consists of cases where certain demographic groups received less attention or inadequate healthcare, the AI-assisted chatbot may inadvertently perpetuate biases in its explanations. If, historically, women’s symptoms were dismissed or downplayed more frequently than men’s, the chatbot might be more likely to offer dismissive or less informative explanations when discussing lab results with female patients.
This perpetuation of biases could lead to further disparities in healthcare outcomes. Patients from marginalized or underrepresented groups might receive incomplete or inaccurate explanations of their medical conditions, leading to delayed or inadequate treatment. Moreover, if patients feel misunderstood or dismissed by AI, they may be less likely to seek further medical assistance or follow through with treatment recommendations, exacerbating existing healthcare disparities.
To mitigate this risk, it’s crucial to train AI-driven applications on diverse and inclusive datasets that accurately represent the population’s demographics and healthcare experiences. Additionally, healthcare providers should continuously monitor and evaluate the AI’s performance to identify and address any biases or disparities in its explanations of medical information. Providing patients with additional resources for seeking clarification and encouraging open communication between patients and healthcare providers can also help mitigate the impact of biases perpetuated by AI in healthcare settings.
Furthermore, it remains uncertain whether individuals will accept receiving medical information from a chatbot instead of consulting a physician. Though automated systems offer benefits to patients, they can also create the perception of less personalized healthcare. A study found that users trusted the chatbot to answer simple questions, but the more complex the question became, the less willing they were to trust the chatbot’s diagnosis (4). This suspicion will likely only fuel patients’ desire for a more individualized doctor–patient experience.
A vision for an AI-driven future
Generative AI also has the potential to expedite the analysis of lab results, leading to quicker and more accurate diagnoses. By using machine learning algorithms, AI can process large datasets and identify patterns that might not be immediately apparent to human analysts. This could mean faster detection of anomalies or diseases, allowing for earlier intervention and treatment, ultimately improving health outcomes.
However, it’s important to note that the integration of AI in healthcare raises concerns about patient privacy and data security. Though the use of generative AI can streamline processes and improve medical care, safeguarding sensitive patient information remains a crucial consideration.
Overall, the advancements in generative AI have the potential to significantly enhance the efficiency and effectiveness of healthcare services, potentially leading to improved health outcomes and better patient experiences. That said, the impact of generative AI on personal experiences can vary widely depending on the specific AI system, healthcare setting, and individual preferences. AI can offer many benefits in terms of accessibility, information dissemination, and efficiency, but it should complement, rather than replace, the valuable role of healthcare professionals in providing personalized care and support to patients and their families.
How can AI help the patient experience?
- Improved understanding. Generative AI can help individuals better understand their lab results by providing clear and accessible explanations in plain language, which can lead to more informed decisions about their health.
- Timely communication. AI-driven systems can provide patients with timely updates on the status of their tests and results, reducing anxiety and uncertainty while waiting for information.
- Convenient access. Patients and their families may have easier access to their health information through AI-powered apps or chatbots, allowing them to review results and ask questions at their convenience.
- Enhanced follow-up. AI-powered apps can assist in generating personalized follow-up recommendations, reminding individuals to schedule appointments or make necessary lifestyle changes based on their lab results.
- Language and accessibility. AI can bridge language barriers by offering translation services and making healthcare information more accessible to individuals with disabilities.
- Empowerment. Individuals and families may feel more empowered to take an active role in their healthcare when they have access to AI-generated educational materials and resources.
What caution should be exercised over AI systems?
- Privacy concerns. Concerns about the privacy and security of personal health information may arise with increased reliance on AI systems for lab-patient communication.
- Depersonalization. Some individuals may miss the personal touch of human interaction and the ability to ask questions directly to healthcare professionals.
- Adjustment period. The transition to AI-driven communication may require an adjustment period for individuals and families, especially those who are not familiar with using technology for healthcare interactions.
Case study: How generative AI helped me understand my cancer
- Simplifying medical terminology. I used AI to decipher complex medical jargon and translate it into more patient-friendly terms. The AI’s prompt, user-friendly responses were particularly impressive, providing me with a clear understanding within seconds.
- Decoding complex findings. Generative AI played a pivotal role in helping me understand intricate terminology embedded within the findings of my recent laboratory tests and CT scans. Its ability to swiftly decode technical language proved immensely beneficial.
- Facilitating question formulation. AI aided me in compiling a preliminary set of questions to pose to my surgeon for an upcoming appointment. By swiftly generating relevant questions, AI empowered me to engage more meaningfully with my healthcare provider.
- Enhancing diagnosis understanding. The AI system helped me grasp the intricacies of a friend’s recent medical diagnosis, providing comprehensive insights that facilitated her comprehension and decision-making process.
- Suggesting relevant studies. I relied on AI’s recommendations to identify pertinent studies and research materials aligned with a recent diagnosis. Its tailored suggestions assisted me in gaining a more comprehensive understanding of my health condition.
1. D Rucker, “Interoperability of healthcare systems: what I need to know about the 21st Century CURES Act Final Rule,” podcast (2020). Available at: sma.org/podcasts/interoperability-of-healthcare-systems-what-i-need-to-know-about-the-21st-century-CURES-act-final-rule/
2. Ochsner Health, “Ochsner Health to integrate generative AI into patient messaging” (2023). Available at: news.ochsner.org/news-releases/ochsner-health-to-integrate-generative-ai-into-patient-messaging
3. The Pathologist, “Easing the Panic – How a pathology message pool can help put patients’ minds at ease” (2022). Available at: thepathologist.com/outside-the-lab/easing-the-panic
4. Scientific American, “AI Chatbots Can Diagnose Medical Conditions at Home. How Good Are They?” (2023). Available at: www.scientificamerican.com/article/ai-chatbots-can-diagnose-medical-conditions-at-home-how-good-are-they/
Patient Adviser and Co-Chair of the University of Michigan Department of Pathology’s Patient and Family Advisory Council, Ann Arbor, Michigan, USA.