Driving Down Diagnostic Discrepancies

Making a correct diagnosis in surgical pathology depends on many things, including the pathologist’s knowledge and experience, clinical correlation, standardized diagnostic terms, confirmatory ancillary testing, and targeted case review. To help streamline the process, the College of American Pathologists (CAP) and the US Association of Directors of Anatomic and Surgical Pathology (ADASP) have released error-reducing guidelines for selective case reviews (1). The CAP-ADASP recommendation is that pathology departments should formalize a quality assurance-based review process and complete reviews in a timely manner to avoid any negative impact on patient care. Other recommendations are that departments document the case reviews relevant to their practice and monitor and record the results continuously. Finally, if poor agreement is found within a defined case type, there should be steps for improving the situation.

Clearly, case reviews can detect diagnostic discrepancies, some of which are real errors that could affect patient care. There is some evidence that targeting problematic areas in pathology may be more effective in detecting discrepancies than performing random case reviews. Indeed, many pathology departments already have review policies for selected types of cases, such as esophageal dysplasia and glial neoplasms. The guideline’s main recommendation is for departments to evaluate the material they see and establish a policy to review cases as a proactive measure to reduce diagnostic error. This may be based on the specimen type (for example, breast biopsy) or on a disease process (for example, esophageal dysplasia).

Importantly, I’d like to draw attention to the following considerations for determining which cases should be reviewed, and how:

  • Reviewing pathologists should formulate their diagnosis independently and without influence from others. 
  • Reviewing pathologists should have sufficient expertise in the organ system/diagnosis of the cases they are reviewing.
  • Reviews should include negative cases to detect potential false-negative cases. 
  • Targeted case reviews of selected organ systems/diagnoses are more efficient at detecting discrepancies than random reviews.  
  • A review of cases before sign-out could be used to build collaborative teams and improve pathologists’ skills. This has also been shown to reduce amended report rates.

The type of review is highly dependent on the practice size and the expertise of the pathologists. A review system is easier to implement in an intermediate-sized group (approximately 10 to 15 pathologists), for example. In such a practice, most pathologists tend to be generalists with specific interest and involvement in a subspecialty area. Often, individuals with subspecialty interests serve as the point person for their clinical counterparts and actively keep up with the literature for that organ system. These individuals are excellent case reviewers because they can convey any new knowledge regarding peculiarities of reporting or new ancillary testing before sign-out. I would, however, recommend that, before an “expert” signs out any material, it be reviewed by another expert pathologist whenever possible. Larger groups may have more flexibility to review cases, depending on the number of specialists available within a particular subspecialty.

In smaller groups, expertise specific to an area (neuropathology, medical renal pathology, and so on) or specimen type (bone or soft tissue tumors, for example) may not be available, so some specimens will be sent to a consultant. However, smaller groups should still develop mechanisms for reviewing the cases that they do see and manage. For example, cancer cases may be signed out and then referred to a larger institution for definitive therapy and review. I’d like to suggest that evaluating and documenting these external reviews, together with cases reviewed during in-house clinical conferences, could serve as a minimum. Of course, in the future, pathologists may use digital pathology to share a case with others in real time, giving them the opportunity to discuss it with another pathologist who is an expert in that specimen type. In the meantime, there is still a lot of work to do in establishing robust case review procedures.

  1. RE Nakhleh et al., “Interpretive diagnostic error reduction in surgical pathology and cytology: Guideline from the College of American Pathologists Pathology and Laboratory Quality Center and the Association of Directors of Anatomic and Surgical Pathology,” Arch Pathol Lab Med, 140, 29–40 (2016).
About the Author
Raouf Nakhleh

Raouf Nakhleh is Professor of Pathology at the Mayo Clinic Florida, USA.
