The Pathologist / Issues / 2025 / May / Democratizing AI, Globally

Democratizing AI, Globally

A computational pathology and AI center is a must in every pathology department, says Hooman Rashidi

By Helen Bristow | 05/06/2025 | Future | 12 min read


Credit: UPMC

The Computational Pathology and AI Center of Excellence (CPACE) is an AI center at the University of Pittsburgh School of Medicine. It aims to advance laboratory medicine by integrating cutting-edge AI technologies into clinical practice and research, while also promoting continuous learning – and it hopes to set the gold standard for innovation in the medical AI landscape.

We connected with Hooman Rashidi, Executive Director of CPACE, to learn more about the work of the center and how its innovations are impacting the practice of pathology.


Meet Hooman Rashidi

Before medical school, I was a graduate student with a focus on bioinformatics at the University of California (UC) San Diego – but I decided to switch tracks and pursue a career in medicine instead. The plan was to enhance medicine through my bioinformatics and machine learning skills. The reality was that I spent many years trying to persuade my institution to introduce bioinformatics and machine learning into clinical practice, but nobody was interested.

Then, about 10 years ago, things changed. I was at UC Davis, working in a group that was building automated machine learning tools. Some of them – such as the Machine Intelligence Learning Optimizer, MILO – have since become commercially available resources, incorporated into various healthcare systems. The CEO and Vice Chancellor of UC Davis invited me to become the AI Director for the whole health system. We built AI models for both laboratory and clinical applications, including sepsis and acute kidney injury. 

From there, I moved to Cleveland Clinic to set up an AI center there. Alongside that, we deployed the first no-code AI course, with the aim of democratizing AI literacy for the whole medical landscape. 

Finally, I was recruited by UPMC and University of Pittsburgh to create the new CPACE initiative. Alongside that – in my capacity as Associate Dean of AI at the School of Medicine – I was tasked with building an updated, more interactive version of the no-code course to maximize the AI engagement and literacy of our students, faculty, and staff.

Part of that process was writing a new eight-part AI review article series in collaboration with numerous other AI experts from across the country. The series accompanies the no-code e-training, covering everything from the different types of AI, to building custom models, and the regulatory and ethical considerations. Education is deeply rooted in the overall mission of CPACE.


How would you summarize the work of CPACE?

Liron Pantanowitz, Chair of Pathology at UPMC, puts it nicely – we’re built to be problem solvers. If people are looking for basic science research in AI, they should not come to us. We are all about translational research. We want the models we design and build to directly impact patient care and improve workflow efficiencies.

That intention is certainly evidenced by our output. Even though CPACE is relatively new to the AI space, we already have eight viable products in our catalog.

Could you elaborate on your machine learning solution for automating the whole-slide image analysis building process?

That is one of the tools that we have filed a patent on. It’s an automated machine learning framework for building, validating, and deploying whole-slide image analysis for specific disciplines.

Traditionally, building a model for this kind of application – say, to detect prostate or colon cancer, or predict PD-L1 status – can take anywhere from several months to over a year. That’s because whole-slide images are massive files, and the process of creating triage models is extremely resource-intensive.
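To give a sense of the scale involved, the short sketch below estimates how many patches a single whole-slide image breaks into before any model ever sees it. The dimensions and tile size are illustrative assumptions, not CPACE's actual pipeline parameters.

```python
# Illustrative: estimate the patch count for one whole-slide image.
# Dimensions and tile size are assumed for the example only.

def tile_count(width_px: int, height_px: int, tile_px: int = 256) -> int:
    """Number of non-overlapping tiles needed to cover a slide."""
    cols = -(-width_px // tile_px)   # ceiling division
    rows = -(-height_px // tile_px)
    return cols * rows

# A high-magnification scan on the order of 100,000 x 80,000 pixels:
print(tile_count(100_000, 80_000))  # 122383 tiles for a single slide
```

Over a hundred thousand patches per slide – multiplied across a training cohort – is why building and validating these models has traditionally been so resource-intensive.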

Given my background in automated machine learning, our goal was to streamline this entire pipeline – from model creation to clinical deployment – using a fully automated framework. We designed it to work across different pathology disciplines, with the ability to rapidly generate highly specialized models.

And that specialization matters. Rather than developing one generalized model to handle multiple cancer types – that might not perform as well – we build narrow, task-specific models. It’s the same logic we apply to human expertise: a specialist pathologist typically outperforms a generalist when it comes to their area of focus. The same should be true for AI. So, our tool builds specialist models that can deliver much stronger performance within their domain.

Because the process is now automated and runs on our high-performance computing infrastructure, what used to take months we can now complete in just a couple of days. Instead of producing four or five models a year, we’re capable of building hundreds. That scalability is a potential game changer for pathology, enabling more triage tools and AI-assisted workflows to support pathologists.

It’s important to stress that we see these tools as assistive, not autonomous. We believe firmly in a “human in the loop” model. These AI solutions help pathologists with the initial assessment, but the final interpretation is always made by a board-certified pathologist – often in conjunction with other diagnostic tests and clinical data.

That automated slide analysis tool is a great example of a non-generative AI application with real-world clinical utility. It's one of many innovations we’re working on to advance pathology with scalable, intelligent tools that make a meaningful difference.

What were the main considerations for CPACE in designing open-source chatbots for laboratory medicine?

We’ve developed several generative AI platforms, including one called Pitt-GPT+ and a newer one called Nebulon GPT. The biggest priority for both is patient privacy.

If you’ve ever used a chatbot like ChatGPT, you’ll know that your data is shared with a third-party cloud provider – whether it’s OpenAI, Google, or Microsoft via Azure. That’s fine for many use cases, but in healthcare or any setting involving sensitive data – like patient information or proprietary content – you have to be cautious. Anything you input could potentially be used to train future models. So, if privacy matters, that’s a major concern.

Now, let’s say privacy isn’t an issue and you're okay with your data being stored privately through Microsoft, for example. You still run into a second issue: cost. If you’re building a custom chatbot and hosting it through a commercial platform, every user interaction incurs an API or usage fee. That can become financially unsustainable, especially if you're scaling to hundreds of thousands of users.
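A back-of-the-envelope model makes the scaling problem concrete. The per-token price and usage figures below are made-up placeholders, not any vendor's real pricing.

```python
# Hypothetical API cost model: all figures are illustrative placeholders.

def monthly_api_cost(users: int, queries_per_user: int,
                     tokens_per_query: int, usd_per_1k_tokens: float) -> float:
    """Estimated monthly spend for a hosted chatbot billed per token."""
    total_tokens = users * queries_per_user * tokens_per_query
    return total_tokens / 1000 * usd_per_1k_tokens

# 100,000 users, 20 queries each, ~1,500 tokens per exchange, $0.01 / 1K tokens:
print(f"${monthly_api_cost(100_000, 20, 1_500, 0.01):,.0f} per month")
```

Even at modest per-token prices, usage-based billing grows linearly with the user base – which is exactly the sustainability concern described above.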

The tech world follows a familiar playbook. At first, services are attractively priced to get you hooked. But over time, either costs creep up or your user base grows, and the total cost becomes prohibitive.

This is why open-source models are so exciting. Platforms released by companies like Meta are now incredibly capable – on par with some closed-source models. The open-source approach allows you to deploy the model on your own infrastructure, completely privately, with no API fees – just the cost of running your own servers. We’ve already built frameworks around this approach, offering fully private, cost-effective alternatives to commercial generative AI tools.

In addition to the above, responsible use of generative AI tools by individual users is also a must-have within most systems. At Pittsburgh, we’ve set some clear expectations around the responsible use of generative AI within the School of Medicine, thanks to a great framework created by my colleague, Jason Rosenstock. We call it the DVP rule:

D is for Disclose. If you’re using generative AI, be transparent about it. For example, if you used ChatGPT to generate an outline before writing your manuscript or to create an image, disclose that upfront. People should know which parts were AI-assisted and which parts were your own work.

V is for Verify. You must verify everything the AI outputs. These models are powerful, but they’re not perfect – they can and do make mistakes. It’s your responsibility to fact-check and ensure the information you use is accurate and trustworthy.

P is for Protect. Be mindful of what you upload. Don’t share sensitive data, copyrighted material, or anything that could violate patient privacy laws like HIPAA. If you’re in doubt, don’t upload it.
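The "Protect" step can be partially automated with a pre-upload screen. The sketch below is a minimal illustration in that spirit; the patterns are stand-ins, not a complete PHI detector, and nothing here should be mistaken for HIPAA compliance tooling.

```python
import re

# Illustrative "Protect" pre-upload screen. Patterns are hypothetical
# stand-ins for a real PHI/de-identification check.
PHI_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "MRN": re.compile(r"\bMRN[:\s]*\d{6,}\b", re.IGNORECASE),
    "DOB": re.compile(r"\bDOB[:\s]*\d{1,2}/\d{1,2}/\d{4}\b", re.IGNORECASE),
}

def flag_phi(text: str) -> list[str]:
    """Return the names of any PHI-like patterns found in the text."""
    return [name for name, pat in PHI_PATTERNS.items() if pat.search(text)]

print(flag_phi("Patient MRN: 00123456, seen for follow-up"))  # ['MRN']
print(flag_phi("Aggregate blast counts across 40 anonymized cases"))  # []
```

A screen like this cannot replace judgment – the rule remains "if in doubt, don't upload it" – but it can catch obvious slips before they reach a third-party service.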

These principles are the foundation of how we teach our students, faculty, and staff to work with generative AI responsibly. And they’re part of why we’ve invested in private chatbot infrastructure. When the platform is private and well-contained, the risk of accidental breaches drops significantly. That’s where Pitt-GPT+ and Nebulon GPT stand apart – they offer control, security, and relevance in a way that traditional public models just can’t.

How are Pitt-GPT+ and Nebulon GPT used in clinical or operational settings?

With Pitt-GPT+, users can upload their own documents – say, 1,000 laboratory reports – and interact with them directly through the chatbot. It has a much lower tendency to stray into unrelated topics or pull in external information. If you ask something outside the scope of your documents, it will simply respond, “I don’t know.” That’s what makes it so powerful – it stays in its lane and focuses solely on your uploaded materials. The key features are privacy and cost – everything about it is designed to keep your data secure and the costs low.
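The document-grounded behavior described here – answer only from the uploaded material, and fall back to "I don't know" otherwise – can be sketched in miniature. A real retrieval-augmented system uses embeddings and an LLM; simple keyword overlap stands in for both in this toy example, and the threshold is an assumption.

```python
# Toy sketch of document-grounded Q&A with an "I don't know" fallback.
# Keyword overlap stands in for real embedding-based retrieval.

def grounded_answer(question: str, documents: list[str],
                    min_overlap: int = 2) -> str:
    q_words = set(question.lower().split())
    best_doc, best_score = None, 0
    for doc in documents:
        score = len(q_words & set(doc.lower().split()))
        if score > best_score:
            best_doc, best_score = doc, score
    if best_score < min_overlap:
        return "I don't know."
    # A real system would synthesize an answer from this passage.
    return best_doc

reports = ["specimen a: colon biopsy shows moderately differentiated adenocarcinoma"]
print(grounded_answer("what does the colon biopsy show", reports))
print(grounded_answer("what is the capital of france", reports))  # I don't know.
```

Refusing to answer out-of-scope questions is the design choice that keeps such a chatbot "in its lane".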

Nebulon GPT is designed for more general-purpose queries. If you want to ask questions like you would with ChatGPT – based on global knowledge, not your own files – but still want to do so in a private environment, that’s where Nebulon GPT comes in. It is so called because, like a nebula in space, it’s dark and private. The whole idea is to allow users to run queries securely and confidentially without sharing data with external platforms.

That said, I want to be very clear: at this point we don’t advocate using these tools for direct patient care or allowing patients to ask clinical questions. These models, while incredibly useful, can hallucinate – they might present inaccurate information or cite unreliable sources. Pitt-GPT+ is slightly different in that it’s grounded in your own uploaded documents, which improves reliability. Still, these tools are meant for practitioners, not patients.

CPACE has also released a machine learning tool for generating pathology reports. What impact is that having on laboratory workflows?

The first two users of the report-writing tool were actually me and my colleague, Matthew Hanna. He applied it in breast pathology and I used it in hematopathology – so I can speak from experience. The reason people really like this tool is because it addresses several long-standing challenges in pathology reporting. 

The first is time. Bone marrow reports, for instance, are lengthy – typically three to five pages – and they take a long time to write. Even for an experienced hematopathologist like me, a single case could take 25 to 30 minutes. That kind of time commitment adds up quickly, impacting work–life balance. Fatigue also becomes a real concern, and when people are tired, the risk of errors increases – which can have serious implications for patient safety.

Second, there’s a lack of standardization. Reports vary significantly between institutions and even between pathologists. While we try to keep formats consistent, in reality, the content is often scattered and inconsistent.

Third – and this is a big one – traditional dictation methods like Dragon software or transcriptionists don’t offer in-document quality control. This is especially relevant for longer reports. If I accidentally write, on page one, that a patient has 3 percent blasts, but, on page three of my microscopy section, the correct number of 37 percent is stated, there’s no internal system checking for that inconsistency. That’s a major risk, especially for patients on clinical trials. Mistakes like that can cause real harm and require report amendments.

This is where generative AI, combined with rule-based systems, comes in. With our model, what used to take 25–30 minutes now takes under five. It runs automatic quality checks throughout the report to catch inconsistencies. Just as importantly, it helps ensure that structured data is captured more accurately, which benefits downstream analysis and clinical decision-making.
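The rule-based side of such a quality check can be illustrated with the blast-percentage example above. This sketch flags disagreeing "percent blasts" figures within one report; the phrasing patterns are hypothetical, not CPACE's actual rule set.

```python
import re

# Illustrative in-document QC rule: collect every "<n> percent blasts"
# claim in a report and flag disagreements between them.
BLAST_RE = re.compile(r"(\d+(?:\.\d+)?)\s*(?:%|percent)\s+blasts", re.IGNORECASE)

def blast_values(report: str) -> list[float]:
    """Every blast percentage stated anywhere in the report."""
    return [float(m) for m in BLAST_RE.findall(report)]

def consistent(report: str) -> bool:
    """True when all stated blast percentages agree (or none are stated)."""
    return len(set(blast_values(report))) <= 1

report = ("Diagnosis: ... 3 percent blasts. "
          "Microscopy: aspirate smears show 37% blasts.")
print(blast_values(report))  # [3.0, 37.0]
print(consistent(report))    # False
```

A mismatch like 3 versus 37 percent would be flagged before sign-out instead of surfacing later as an amendment.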

Not everyone will adopt this immediately – and that’s okay. Some are naturally hesitant. But I believe adoption will grow as clinicians see their colleagues finishing work at 4:30 while they’re still typing away at 7:30 or 8:00. The time savings alone will drive adoption, but so will improvements in report quality and patient care.

Like any innovation, adoption follows a pattern: early adopters, early majority, late majority, and eventually the laggards. And I do believe even the laggards will come on board – especially once regulatory bodies start mandating these tools. That day is coming. But for now, we’re in the early adoption to early majority phase – and it’s exciting to see the momentum building.

How is the work of CPACE funded?

We’re not a startup; we are a purely academic venture, fully funded through grants and philanthropy. Thankfully, we are well funded, which means that, despite all the recent freezes on research and hiring in the US, we can move forward with our mission. We also generate income from running industry-sponsored studies.

Additionally, as our products become licensed, that will also generate a revenue stream for CPACE to continue and grow its mission.

What does the future hold for CPACE?

We plan to move ahead, full force, with democratizing AI globally. We want what we do here to be translated to other centers of excellence. We're firm believers that a computational pathology and AI center is a must in every pathology department. 

To that end, we’re very open to sharing our knowledge and expertise with other centers. Recently, we were visited by a group of colleagues from UC San Francisco, seeking advice on setting up their own AI center, which is very exciting to see. 

That’s one of our primary goals: sharing our knowledge and helping other places to establish similar setups. We want to make sure that our whole discipline stays ahead of the game.


About the Author(s)

Helen Bristow

Combining my dual backgrounds in science and communications to bring you compelling content in your speciality.
