The Innovator’s Dilemma
How simulated patients can improve the process of clinical utility evidence generation
Randy E. David | 5 min read | Technology
Demonstrating clinical utility is generally the last – and largest – principal requirement before securing coverage and reimbursement for a molecular diagnostic test. Popularized by the CDC’s Office of Public Health Genomics, the term “clinical utility” refers, at its core, to how a technology or practice, by prompting an intervention, may impact a health outcome. Impacts can derive from additional treatment options, improved implementation feasibility, greater population health equity, and/or cost-effectiveness.
While life sciences companies grapple with establishing evidence of analytical and clinical validity, they can fail to prepare for a significant challenge: procuring direct, hypothesis-driven utility data.
Indeed, the innovator’s dilemma is that even as rapid advances in molecular technologies, AI algorithms, and biomarker discovery drive diagnostics forward, healthcare payers must be ever more rigorous in choosing which tests to cover. This challenge is often exacerbated by the context-dependent ways in which clinical utility can be evaluated (1). CMS’ Molecular Diagnostic Services program, or “MolDX,” developed in 2011 and administered by Palmetto GBA, currently creates coverage and reimbursement policies for four A/B Medicare Administrative Contractors (MACs) across 28 states. Thus, it is largely the de facto body for clinical utility judgments nationwide. While MolDX predicates its assessments on the CDC’s ACCE model, McMaster University’s GRADE Evidence to Decision (EtD) framework, and CMS’ “reasonable and necessary” clause, these were all designed as industry standards or guiding principles, rather than roadmaps for evidence generation.
A purpose-built study design
Evaluating the real-world utility of a given diagnostic test requires the direct quantification of ever-elusive provider decision-making behaviors. Simulated patient studies – the examination of virtual patients presented on the screen of a device – are an optimal way to rapidly and accurately collect care data in line with national provider and patient demographics. Just like in real life, provider participants can progress through an entire medical interaction with a “patient,” where they can receive vitals, record a medical history, conduct a diagnostic workup, make diagnoses, and ultimately, arrive at a recommended treatment plan.
Simulated patients might offer a convenient way to capture care practices, but for the high-quality evidence required by healthcare payers, care decisions must be quantified within a purpose-built statistical framework. Other than a meta-analysis or systematic review that synthesizes multiple sources, individual randomized controlled trials (RCTs) are the most effective means of assessing cause-effect relationships (2). And so, for studies to yield results that payers can rely on, they should be designed as RCTs powered to produce narrow confidence intervals. This involves recruiting a large and representative sample of providers who would potentially utilize the test in real life. It also requires the use of realistic educational and marketing materials – such as fact sheets, slide decks, and short videos – about the test, which serve as the “intervention” in the RCT design.
Like other means of generating clinical utility evidence, studies should be presented in peer-reviewed publications that life sciences companies can include in their dossiers when applying for coverage and reimbursement. Published articles should explain how the utilization of a given diagnostic creates value in comparison to the current standard of care. For example, a test may streamline, consolidate, simplify or otherwise make results more intelligible; provide additional information for provider decision-making; hasten the discovery of an actionable insight; provide more accurate results; reduce the harm that may be caused by an alternative; or prevent wasteful spending related to hospital admittance. Publications should also be sure to expound upon specific identified use cases, and how a given test may differentially impact various patient groups.
So, why simulated patients instead of real-world ones?
Simulated-patient RCTs are not only much quicker – approximately one third of the duration – and more cost-effective than traditional methods, but they are also more scalable, customizable, and often more accurate too (see Figure 1). Overall, the benefits include:
- Reduced time commitment and effort required by the participant provider.
- More accurate study samples from a national pool, rather than from a limited number of regional or mostly academic sites.
- The diversity of the “patients” is reflective of demographic and epidemiological realities – reducing stigma and increasing equity.
- Simulated patients can be designed for a broad range of disease areas and outcomes.
- Reaching statistical significance thresholds requires far smaller sample sizes, because participants care for the same “patients,” eliminating interpatient variability.
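The sample-size point in the last bullet can be sketched with a standard normal-approximation power calculation. The effect size and standard deviations below are hypothetical, chosen purely to illustrate how removing interpatient variability shrinks the required number of participating providers; they are not drawn from any study cited here.

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(delta, sigma, alpha=0.05, power=0.80):
    """Approximate providers needed per arm to detect a mean difference
    `delta` in an outcome with standard deviation `sigma`, using the
    two-sample normal approximation:
        n = 2 * ((z_{1-alpha/2} + z_{power}) * sigma / delta) ** 2
    """
    z = NormalDist().inv_cdf
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) * sigma / delta) ** 2)

# Hypothetical scenario: detect a 5-point gain in a quality-of-care score.
# Real-world patients: the outcome SD is inflated by interpatient variability.
print(n_per_arm(delta=5, sigma=20))  # 252 providers per arm

# Simulated patients: every provider sees the same cases, so that variance
# component disappears and the residual SD is smaller.
print(n_per_arm(delta=5, sigma=10))  # 63 providers per arm
```

Because the required sample size scales with the square of the outcome's standard deviation, halving the residual variability in this illustration cuts recruitment needs roughly fourfold.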
Robust validation studies have been published that corroborate the effectiveness of simulated patients, both in relation to clinical utility and in healthcare generally (3,4,5). Some clinical utility RCT designs even combine the presentation of simulated patients with medical chart abstraction (from patients of the same participant providers), which allows for an additional layer of evidence generation and validation.
Nationally and internationally, healthcare payers are becoming increasingly aware of the enormous upsides of simulated patients in this unique context. Because of this, many have been promoting simulated patient clinical utility trials for rare diseases, diagnostics that prevent patient harm, novel technologies, and for trials that would otherwise be too costly. Simulated patient RCTs have also been used to inform commercialization efforts, gauge user adoption, and assess quality of educational/marketing materials. They have a proven return on investment, not only for clinical utility evaluation, but also for precision education of healthcare workers – as well as for reducing clinical variation in large health systems (6,7).
With the digitalization of many aspects of healthcare underway, it is vital that life sciences companies remain abreast of the latest trends and capabilities of implementation science – including the evaluation of clinical utility. Coverage for molecular diagnostics requires more direct evidence generation than ever before, and for logical reasons. A faster, cheaper, and reliable method is bound to create a strategic advantage in bringing valuable diagnostics to market.
- D Pritchard et al., “Clinical Utility of Genomic Testing in Cancer Care,” JCO Precis Oncol, 6, e2100349 (2022). PMID: 35085005.
- B Sibbald, M Roland, “Understanding controlled trials: Why are randomized controlled trials important?” BMJ, 316 (1998). PMID: 9468688.
- J Peabody et al., “Comparison of Vignettes, Standardized Patients, and Chart Abstraction: A Prospective Validation Study of 3 Methods for Measuring Quality,” JAMA, 283, 1715 (2000). PMID: 10755498.
- J Peabody et al., “Establishing Clinical Utility for Diagnostic Tests Using a Randomized Controlled, Virtual Patient Trial Design,” Diagnostics, 9, 67 (2019). PMID: 31261878.
- JR Kirwan et al., “Clinical judgment in rheumatoid arthritis. I. Rheumatologists’ opinions and the development of ‘paper patients’,” Ann Rheum Dis, 42, 644 (1983). PMID: 6651368.
- S Quimbo et al., “Do Health Reforms to Improve Quality Have Long-Term Effects? Results of a Follow-Up on a Randomized Policy Experiment in the Philippines,” J Health Econ, 2, 165 (2016). PMID: 25759001.
- A Kononowicz, et al., “Virtual Patient Simulations in Health Professions Education: Systematic Review and Meta-Analysis by the Digital Health Education Collaboration,” J Med Internet Res, 21, e14676 (2019). PMID: 31267981.