A conversation with a virtual pathology copilot on the practical applications of artificial intelligence
Note: The responses are from ChatGPT 4.0, based on the first response to each submitted query. For brevity, ChatGPT was asked to shorten some responses. Some responses were edited for formatting and readability. You can find a link to the full conversation thread here.
I have been thinking a great deal about Arthur C. Clarke’s famous observation: “Any sufficiently advanced technology is indistinguishable from magic.” The current generation of AI tools must seem as magical to us as the steam engine or the first airplanes did to previous generations. Reactions to these technologies range from disbelief through excitement all the way to fear. Regardless, these technologies – like the steam engine and the airplane – will fundamentally change the world.
But how much of this cocktail of fear and excitement is warranted? To some, AI is a career-buster that would make a Luddite’s loom blush; to others, it’s a mega-tool poised to make every facet of our lives friction-free. Tired of simply discussing these tools in theory, I have decided to run something of an experiment. Here, I present to you my conversation with “Pathrick” – my pathology copilot. After all, who better to answer questions on AI than an AI itself?
My goal is not to support either camp, but rather to show the practical application of AI for pathologists – warts and all. I was particularly impressed by the AI’s ability to come up with analogies that would resonate with pathologists. Anyway, without further ado…
Pathrick: Thank you for having me. I am "Pathrick", a highly specialized AI language model tailored to assist professionals in the field of pathology. My primary role is to aid pathologists, like yourself, in their day-to-day workflows, whether that involves querying vast amounts of literature, providing differential diagnoses based on descriptions, or even assisting in the educational aspect for those in academic settings.
Matthew: Your responses are very good, can you explain in three lines how the underlying technology works in GPT to an audience of pathologists?
Pathrick: Certainly. GPT (Generative Pre-trained Transformer) uses machine learning algorithms to analyze vast datasets of text, learning the patterns, syntax, and semantics. Through a layered network of “neurons,” it encodes this knowledge to generate human-like text based on the input it receives. Think of it as an extremely advanced pattern-recognition system, akin to how pathologists discern cellular structures, but tailored for natural language.
Matthew: The transformer technologies have been applied to images in addition to text, can you explain in three lines how this works?
Pathrick: Absolutely. In the context of images, transformer models break down the visual data into a grid of smaller segments, each serving as an individual “word” in a language model. These segments are then processed through layers of attention mechanisms and neural networks, enabling the model to capture intricate relationships and patterns within the image. Just as text-based transformers understand sentence structure, image transformers grasp the spatial and contextual relationships among pixels.
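The patch mechanism Pathrick describes can be sketched in a few lines of Python. The image size, patch size, and shapes below are illustrative assumptions (real vision transformers typically use 16×16 patches of much larger images), but the reshaping trick is the same:

```python
import numpy as np

# A toy "slide" image: 8x8 pixels, 3 channels (sizes are illustrative).
image = np.arange(8 * 8 * 3).reshape(8, 8, 3)

patch = 4  # patch size; real vision transformers often use 16x16

# Split the image into non-overlapping patches, then flatten each patch
# into a vector -- each vector plays the role of one "word" (token).
h, w, c = image.shape
patches = (
    image.reshape(h // patch, patch, w // patch, patch, c)
         .transpose(0, 2, 1, 3, 4)
         .reshape(-1, patch * patch * c)
)

print(patches.shape)  # (4, 48): four tokens, each a 48-dimensional vector
```

Each row of `patches` is then fed to the attention layers exactly as a word embedding would be in a text model.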
Matthew: This technology is different from the convolutional neural networks (CNNs) we have heard about before, can you explain in three lines how they are different?
Pathrick: Certainly. CNNs excel at spatial hierarchies, using filters to scan through an image in small patches to identify features like edges or textures. Transformers, on the other hand, consider all parts of an image simultaneously, capturing global relationships through attention mechanisms. Unlike CNNs, transformers don’t rely on the notion of local receptive fields, allowing for more complex interdependencies to be captured.
Matthew: That is a bit confusing to me. Can you explain this in a simpler way that a pathologist without experience in AI might understand?
Pathrick: Of course. Imagine a CNN as a microscope that focuses closely on each individual cell in a tissue sample, scanning bit by bit to understand its structure. On the other hand, think of a transformer as taking in the whole tissue sample at once, understanding not just individual cells but also how they relate to each other. While the CNN examines piece-by-piece, the transformer grasps the big picture all at once.
Matthew: Interesting, so the transformer models function more like how pathologists look at slides.
Pathrick: Precisely. Just as pathologists often begin by looking at the entire slide to understand the general landscape before zooming in on details, transformer models consider all parts of an image simultaneously to grasp both local and global relationships. This holistic approach allows transformers to capture complex patterns and interdependencies, much like a pathologist integrates various aspects of a tissue sample for diagnosis.
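The “whole slide at a glance” behaviour Pathrick describes comes from self-attention, in which every token scores its relevance to every other token in a single step. A minimal sketch, using random matrices in place of the learned projections of a real model:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
tokens, dim = 4, 8            # e.g. four image patches, toy dimensions
x = rng.standard_normal((tokens, dim))

# In a real model these projections are learned; random here for illustration.
Wq, Wk, Wv = (rng.standard_normal((dim, dim)) for _ in range(3))
Q, K, V = x @ Wq, x @ Wk, x @ Wv

# Every token attends to every other token at once -- the global view,
# as opposed to a CNN filter's small local window.
weights = softmax(Q @ K.T / np.sqrt(dim))
out = weights @ V

print(weights.shape)  # (4, 4): each patch attends to all four patches
```

The 4×4 weight matrix is the key point: no matter how far apart two patches are, the model can relate them directly, which is what lets transformers capture global structure in a slide.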
Matthew: Cool! That brings me to another point: what do you think about the fear that these technologies will replace pathologists?
Pathrick: While AI technologies offer powerful tools for automation and analysis, they lack the nuanced understanding, clinical context, and ethical considerations that human pathologists bring to patient care. The ideal future is not replacement but collaboration, with AI serving as an augmentative tool that enhances the pathologist's capabilities. Job displacement is a concern, but proactive adaptation and continued leadership in the integration of AI can secure the irreplaceable role of pathologists.
Matthew: OK, this sounds good. So, in 2023, with currently available tools, what are some examples of how a pathologist can use both large language models and vision transformer models? Given that the models have not been validated for clinical use, please limit the response to non-clinical uses.
Pathrick: In a non-clinical context, practicing pathologists in 2023 could use large language models for tasks such as grant writing, manuscript preparation, and curating educational content for students and residents. These models can assist in quickly gathering and summarizing relevant scientific literature, thus facilitating research efforts. Vision transformer models, on the other hand, could be used for research purposes like quantifying morphological features in experimental pathology, or as educational aids for highlighting key features in histopathological slides used in teaching. While not deployed in direct patient care due to lack of clinical validation, these AI tools can still significantly enhance research, educational endeavors, and administrative efficiency.
Matthew: Interesting, how could I use these tools to help with grant writing?
Pathrick: For grant writing, large language models like GPT-4 can assist in various capacities:
- Idea Generation: you can pose research questions or objectives to the model and it can provide you with different angles or aspects to consider, helping to refine your research aims.
- Literature Summary: the model can quickly summarize key points from existing research, aiding in the formulation of your literature review section.
- Data Presentation: it can help generate textual descriptions of data or research methods, making the technical aspects of your proposal more accessible to reviewers who may not be experts in the specific field of pathology.
- Editing and Proofreading: the model can identify grammatical errors or stylistic issues, thereby polishing the text to meet the stringent requirements of grant applications.
- Budget Justification: it can help in formulating a well-justified budget by providing example line items or text that details the necessity of each component for the project’s success.
By leveraging a language model in these ways, you can streamline the often arduous process of grant writing, allowing you to focus more on the substantive aspects of your proposal.
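The editing and proofreading use case above boils down to wrapping a draft passage in a reusable prompt before sending it to a model. A minimal sketch; the template wording, function name, and "NIH-style" framing are illustrative assumptions, and the resulting string would be submitted to a model such as GPT-4 via its API:

```python
# Build a reusable editing prompt for a grant section. The wording of the
# template and the function name are illustrative only.

def build_editing_prompt(section_name: str, draft_text: str) -> str:
    return (
        "You are assisting with an NIH-style grant application.\n"
        f"Section: {section_name}\n"
        "Task: correct grammar and tighten the prose without changing "
        "the scientific content. Flag any claim that needs a citation.\n\n"
        f"Draft:\n{draft_text}"
    )

prompt = build_editing_prompt(
    "Specific Aims",
    "We aims to quantify tumour-infiltrating lymphocytes in melanoma.",
)
print(prompt.splitlines()[1])  # Section: Specific Aims
```

Keeping the instructions in one template means every section of the proposal gets edited against the same criteria, which makes the model's output easier to review.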
Matthew: Interesting, what about education? How can I use these tools in education for residents, medical students, or undergraduate students?
Pathrick: AI models like GPT-4 and vision transformers can be quite advantageous in the educational sphere, particularly for residents, medical students, and undergraduate students studying pathology:
- Interactive Q&A Sessions: language models can be programmed to simulate case-based discussions, challenging students to solve clinical cases and providing immediate feedback on their responses.
- Automated Slide Annotations: vision transformer models can identify and annotate features in histopathological slides, offering a valuable supplement to hands-on microscope training.
- Curriculum Development: language models can help design syllabi, lesson plans, and educational materials, thereby streamlining the preparation process for instructors.
- Journal Club Summaries: for research or journal clubs, language models can summarize key articles, highlighting the methods, findings, and significance, which can then be reviewed and discussed in sessions.
- Examinations and Quizzes: language models can auto-generate multiple-choice questions or short-answer questions based on a set of parameters or topics, assisting in the evaluation of knowledge retention.
- Note-taking and Summaries: language models can create concise summaries of lectures or educational videos, aiding students in review and revision.
- Interactive Simulations: both language and vision models can be used to create more complex, interactive simulations of real-world diagnostic challenges, helping to prepare students for clinical practice.
Incorporating these tools into your educational methodologies can not only modernize the learning experience but also make it more interactive and engaging, ultimately improving the quality of education delivered.
Matthew: What about the risk of data hallucination that I have heard can happen with these models, how do I limit that risk?
Pathrick: Data hallucination, or the generation of information that is not accurate or factual, is a known limitation of machine learning models like GPT-4. There are several strategies to mitigate this risk, especially in an educational context:
- Expert Review: always have an expert, such as a senior pathologist or educator, review the content generated by the AI model. This ensures that the information aligns with established scientific and medical knowledge.
- Cross-Verification: whenever possible, use multiple sources to verify the information generated by the AI model. For example, if the model is used to summarize research articles, cross-reference the summary with the original paper.
- Defined Scope: clearly indicate the scope within which the AI model has been trained to operate. For instance, if it is being used for educational case simulations, clarify that these are AI-generated and may not cover all clinical nuances.
- Transparency: make it clear to users when information is AI-generated, so they approach it with a critical mindset and do not take it as the sole authoritative source.
- Curated Training Data: the risk of hallucination can be reduced by training the model on a well-curated, high-quality dataset, although re-training a model like GPT-4 would require substantial computational resources.
- Feedback Loops: in a controlled environment, collect feedback from users (such as students or residents) regarding the accuracy of the AI-generated content, and make necessary adjustments.
- User Education: educate those interacting with the AI about its limitations, and encourage skepticism and verification as standard practices.
By taking these precautionary measures, you can significantly reduce the risk of disseminating incorrect or misleading information through data hallucination while still benefiting from the capabilities of AI models in educational settings.
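The cross-verification step Pathrick describes can be partly automated. A minimal sketch: compare the citations an AI-generated summary claims against a trusted, human-curated reference list and flag the rest for review. The PMIDs and helper name here are made up for illustration:

```python
# Flag citations that appear in AI output but not in a curated source
# list -- these are candidates for hallucination and need human review.
# All identifiers below are fabricated examples.

def flag_unverified_citations(ai_cited_pmids, trusted_pmids):
    """Return citations present in the AI output but absent from the
    trusted reference list, sorted for stable display."""
    return sorted(set(ai_cited_pmids) - set(trusted_pmids))

trusted = {"31000001", "31000002", "31000003"}   # our curated library
ai_cited = ["31000001", "31999999"]              # second PMID is unknown

suspect = flag_unverified_citations(ai_cited, trusted)
print(suspect)  # ['31999999'] -- send this one back to a human reviewer
```

A check like this does not prove a flagged citation is fabricated, only that it falls outside the verified set; the final call still belongs to the expert reviewer, as Pathrick notes.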
Matthew: This has been all very technical and serious. Can you tell me a short but funny pathology joke?
Pathrick: Of course! Here's a light-hearted pathology joke for you: Why did the pathologist bring a ladder to work? To finally reach a diagnosis! I hope that brings a smile to your face amid the technical discussions.
Matthew: This is all great but what do you see as a potential negative future for pathologists and AI?
Pathrick: A potential negative future could arise from several factors:
- Over-reliance on AI: If pathologists excessively depend on AI for tasks, including diagnostics, there’s a risk of atrophying critical human skills required for nuanced analyses that the AI may not be capable of performing accurately
- Ethical and Legal Risks: Without adequate validation and interpretability, the clinical adoption of AI could lead to incorrect diagnoses, causing ethical and legal repercussions
- Job Displacement: While AI is not likely to replace the expertise of pathologists fully, there could be a reduction in demand for certain lower-level tasks, leading to job insecurity for those not adept at using these new tools
- Data Privacy: Poorly managed integration of AI could expose sensitive patient data, violating privacy laws and eroding trust in the healthcare system
- Economic Inequality: High costs associated with implementing and maintaining AI systems could widen the resource gap between well-funded institutions and smaller practices, affecting the quality of care
- Vendor Lock-in: Reliance on proprietary AI algorithms could make healthcare institutions dependent on specific vendors, limiting flexibility and innovation
- Skill Gap: A divide might occur between pathologists trained in traditional methods and those proficient in computational pathology, leading to unequal opportunities and potential friction within the profession
- Clinical Validation Lag: As AI technologies rapidly advance, the slower pace of clinical validation could result in the usage of outdated or less-effective algorithms, affecting patient care negatively
Matthew: Can you close with a short quote that offers an inspiring vision for the future of AI-enhanced or augmented pathologists?
Pathrick: Of course. Here’s a quote for you: “In the harmonious dance of AI and pathologists, we glimpse a future not of machine dominance, but of human brilliance augmented, where the art and science of pathology reach unparalleled heights for the betterment of patient care.”
Pathologist, Department of Pathology and Laboratory Medicine London Health Sciences Centre, Assistant Professor, Western University, London, Ontario, Canada.