Fewer than half of surveyed centers reported implementing defined quality control processes for digital reporting and display in the whole slide imaging workflow, according to a recent study.
A cross-sectional survey of 19 digitally active members of the Bigpicture consortium examined the use of quality control (QC) processes across the whole slide imaging workflow in digital histopathology. The survey, published in the Journal of Clinical Pathology, covered seven workflow steps: pre-staining, staining, scanning, post-scan, digital reporting and display, reporting of metadata, and computational analysis/artificial intelligence (AI). Respondents represented sectors including clinical and healthcare, academia, and preclinical or pharmaceutical research.
At least 65 percent of participating laboratories reported implementing QC checks in some or all workflow steps. Pre-staining (72 percent) and staining (77 percent) were the stages most frequently reported to have defined standards, mandatory procedures, documentation in a managed quality management system (QMS), and records of compliance. Lower proportions were reported for scanning (62 percent), post-scan (60 percent), digital reporting and display (44 percent), reporting of metadata (34 percent), and computational analysis and AI (34 percent). Digital reporting and display had the highest proportion of laboratories indicating no QC processes.
Survey respondents were also asked about their views on variability in digital histopathology workflows. Sixty percent expressed some level of concern about variability at each step, 30 percent reported little or no concern, and 7 percent were unsure. Asked whether image or data processing could reduce the need for improved QC, 62 percent indicated it could to some extent or almost entirely, 33 percent saw little or no potential, and 5 percent were unsure.
The findings indicated that QC processes were more prevalent in earlier workflow stages than in later ones. The study did not assess whether reported QC processes were manual or automated, nor did it evaluate differences in QC needs between computational analysis/AI and pathologist-based digital assessment.
Limitations included the small sample size, the involvement of respondents already active in the Bigpicture project, and the descriptive nature of the survey. The authors noted that some respondents interpreted computational analysis and AI as applicable only in the final stage, although such analyses may occur earlier in the workflow.