Inside the Lab: Technology and Innovation

Deciding Factors

At a Glance

  • Precision medicine is being adopted at an increasingly rapid pace – but implementing it from scratch can be costly and difficult
  • Clinical decision support (CDS) tools can pave the way to precision medicine for many hospitals and clinical laboratories
  • Many CDS options are available, so users must be careful about selecting the most appropriate tool
  • CDS tools can help make treatment decisions, manage liability risk, and ensure compliance with ever-changing data privacy regulations

The adoption of precision medicine is happening at a pace that would have been difficult to imagine just a few years ago. Hospitals and clinical labs have tremendous interest in delivering this kind of tailored care to patients – but implementing precision medicine as a new capability remains a major challenge. Nevertheless, the need to adopt is both clear and pressing, especially for healthcare organizations treating patients with cancer; precision medicine has remarkable value in diagnosing, treating, and monitoring the disease. So how can organizations transition to precision medicine in a cost-effective, scalable way?

Precision medicine is built on a foundation of new sequencing technologies that generate massive amounts of patient-specific data.
The pursuit of precision

Precision medicine is built on a foundation of new sequencing technologies that generate massive amounts of patient-specific data – both a blessing and a curse for hospitals and clinical labs. On one hand, it is the sheer volume of genomic information and our ever-improving understanding of disease genetics that make it possible to provide an accurate, customized prognosis or select just the right treatment for a patient. On the other hand, without an army of PhD geneticists and bioinformaticians helping to make sense of it all, healthcare facilities that want to adopt precision medicine are often intimidated by the daunting task of keeping pace in such a rapidly advancing field.

In my opinion, the only way to solve this problem is through technology. In recent years, clinical decision support (CDS) tools have become increasingly available to laboratory staff and clinical care teams. Similar to the way Google Maps sifts through reams of data to help people choose the best routes to their destinations, CDS tools perform the “heavy lifting” of collecting and organizing all the relevant clinical information across lab data sources, electronic healthcare record (EHR) data, and the clinical literature that best captures our understanding of disease. Then, the information is fed into powerful integrated data analytics to offer healthcare professionals comprehensive, up-to-date, evidence-based interpretations that are tailored to the clinical profile of each patient.

Implementing precision medicine

Just a few years ago in the United States, precision medicine was only offered at pre-eminent academic medical centers. Today, an estimated 24 percent of hospitals will provide some form of precision medicine by the end of 2018 (1). But even though precision medicine is projected to spread and develop rapidly in the coming years, there is an urgent need to operationalize its clinical use right now.

The biggest challenge is keeping up with the speed of information growth and our constantly evolving understanding of the biology that underlies disease and treatment response. Though the cost of sequencing technologies continually decreases, the volume and frequency of new information that practitioners must integrate into their genomic analysis is only increasing – adding to the time, effort, and information complexity of solving patient cases using genomics. Elaine Mardis, now at Nationwide Children’s Hospital, phrased this problem succinctly in the title of her article (2), “The $1,000 genome, the $100,000 analysis?”

For precision medicine to be effectively delivered to patients in clinical settings, practitioners must keep up with advances in treatments, disease biology, clinical trial availability, professional guidelines, and much more. Traditional approaches would mean that individual hospitals would need to hire dozens or even hundreds of MD/PhDs to wade through all of the information stemming from internal datasets, EHRs, external databases, and peer-reviewed literature – just to help pathologists, oncologists, and other healthcare team members apply that knowledge to each patient case. The operational, logistical, and financial implications of such a model make it a non-starter for the majority of today’s clinical care settings.

A technological alternative

CDS tools offer a scalable, cost-effective way forward for medical centers that don’t have access to a phalanx of dedicated data analysts. These tools incorporate advances in data mining, machine learning, predictive modeling, and other areas. The result? Technology that can process massive amounts of data and generate clinically actionable interpretations or recommendations for specific cases. There are many types of CDS tools; to select the right one for a particular set of needs, we need to understand the different options each tool provides.

CDS tools often start with some form of knowledge base – a vast collection of information fed into the platform at its foundation and then restructured and reorganized to make it easier for software algorithms to process. Some CDS tools may begin with specific datasets for narrowly defined clinical uses. Others are far more comprehensive, including carefully structured representations of peer-reviewed literature, as well as genomic, clinical, and therapeutic databases. Naturally, these tools can be applied to a broader range of health conditions. The most sophisticated of these approaches expand beyond even clinical literature and lab data to integrate many other types of information, such as best practice clinical diagnosis and treatment guidelines, detailed enrollment criteria for clinical trials, genetic and pharmacogenomics-related indications for available drug treatments, and collections of clinical case datasets that describe outcomes for biologically similar patients. Such technology provides pathologists, clinical geneticists, and other lab professionals with powerful computational engines to process, integrate, and interpret the universe of information relevant to each patient. When implemented properly, these tools can provide the information needed to help inform a clinical decision at any given time.
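To make the idea of a restructured knowledge base concrete, here is a minimal Python sketch. All of the class and field names are invented for illustration; the key point is that one record aggregates evidence for a variant and remains reachable under every historical name of its gene:

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeBaseEntry:
    """Hypothetical knowledge-base record for one genomic variant."""
    gene: str                                      # current approved gene symbol
    variant: str                                   # e.g. "V600E"
    aliases: list = field(default_factory=list)    # historical gene names
    evidence: list = field(default_factory=list)   # literature, guidelines, trials

    def add_evidence(self, source: str, claim: str):
        self.evidence.append({"source": source, "claim": claim})

# Build a tiny knowledge base keyed by every known name of the gene,
# so old and new names resolve to the same underlying record.
entry = KnowledgeBaseEntry(gene="BRAF", variant="V600E", aliases=["BRAF1"])
entry.add_evidence("placeholder-source", "illustrative claim about this variant")

kb = {}
for name in [entry.gene, *entry.aliases]:
    kb[(name, entry.variant)] = entry

# A lookup under the historical symbol finds the same record
same_record = kb[("BRAF1", "V600E")] is kb[("BRAF", "V600E")]
```

Real systems index far richer structures (therapies, trial criteria, guideline classifications), but the principle – many names, one curated record – is the same.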

Consider how this kind of tool could work in a pipeline for reporting the results of a tumor genetic analysis. The tumor would be sequenced, leading to a list of potentially millions of variants spanning many types of genetic variation: single nucleotide variants, insertions and deletions, copy number variation, fusions, and more. When appropriate, a matched normal sample would also be sequenced so that germline variants could be quickly and automatically filtered out of the list. Variants deemed unique to the tumor would then be fed as a first data input into the CDS tool, which would crucially integrate the second data input: an algorithmic knowledge base that represents all known information about each variant – even if that variant’s name, function, or clinical impact has changed over time. The tool could then apply some intelligent algorithms to determine what kind of downstream biological effect each variant might have, its possible corresponding impact on disease physiology, a differential clinical diagnosis, and potential responsiveness or resistance to an array of available therapeutic options. Some tools even automate variant classification according to professional guidelines, such as the American College of Medical Genetics and Genomics variant categories. All of these steps would be automated, running rapidly in the background, and the eventual output could provide a detailed explanation for the algorithmic reasoning that led to a given conclusion. Finally, the CDS tool would generate a short list of the variants most likely to be medically relevant – those that might be driving the cancer, as well as those that could be used to guide treatment selection or clinical trial enrollment.
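The reporting workflow described above can be sketched in a few lines of Python. Everything here is illustrative pseudologic rather than any vendor's actual pipeline: sets of strings stand in for VCF files, and a plain dictionary stands in for the knowledge base:

```python
# Hypothetical sketch of the tumor-reporting steps described above.
tumor_variants  = {"BRAF V600E", "TP53 R175H", "EGFR T790M", "rs123_common"}
normal_variants = {"rs123_common"}   # matched normal sample (germline)

# Step 1: filter out germline variants automatically
somatic = tumor_variants - normal_variants

# Step 2: annotate somatic variants against the knowledge base
# (annotations below are placeholders, not clinical assertions)
knowledge_base = {
    "BRAF V600E": {"effect": "activating", "acmg": "pathogenic",
                   "therapy": "targeted inhibitor"},
    "EGFR T790M": {"effect": "resistance", "acmg": "pathogenic",
                   "therapy": "alternative inhibitor"},
    "TP53 R175H": {"effect": "loss of function", "acmg": "pathogenic",
                   "therapy": None},
}

# Step 3: keep medically relevant variants, recording the reasoning
# that led to each conclusion so it can be inspected later
report = []
for v in sorted(somatic):
    info = knowledge_base.get(v)
    if info and info["acmg"] == "pathogenic":
        report.append({
            "variant": v,
            "classification": info["acmg"],
            "actionable": info["therapy"] is not None,
            "rationale": f"{v}: {info['effect']} variant",
        })

actionable = [r["variant"] for r in report if r["actionable"]]
```

The output is the short list the text describes: variants likely to be driving the cancer, flagged by whether they could guide treatment selection.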

Importantly, CDS tools are not intended to usher artificial intelligence into the medical establishment. The technology is designed to help experts make decisions, rather than to make decisions for them – and, as such, the strongest of these systems include a critical explanation component whereby physicians, oncologists, geneticists, and the rest of the care team can inspect and understand the evidence-based reasoning that led the system to suggest a particular course of action.

The biggest challenge is keeping up with the speed of information growth and our constantly evolving understanding of the biology that underlies disease and treatment response.
Key differentiators

When considering CDS technology, users should be careful to evaluate all of the features relevant to their laboratory’s needs. For instance, some tools use a “black box” model, generating results without letting the user see the calculations and assumptions behind each conclusion. This model introduces risk for clinical teams, who cannot fully justify medical decisions if they don’t understand the evidence underlying the CDS-generated interpretations. For clinical lab purposes, tools that offer transparency are far more empowering. When these tools generate results, each one can be queried to reveal the specific data, filters, and processes that led to it. In the best-case scenario, users can even go back and adjust some of those elements – say, to exclude data deemed irrelevant to the case or to tweak a filter to be slightly more stringent based on their own expertise.
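One way to picture the difference between a “black box” and a transparent tool is to attach a provenance trail to every result. In the sketch below (all names are invented for illustration), a result object records each filter applied, so a user can inspect the trail and re-run the analysis with an adjusted threshold:

```python
class TransparentResult:
    """Hypothetical result object that remembers how it was produced."""

    def __init__(self, candidates):
        self.candidates = candidates
        self.trail = []          # audit log of every processing step

    def apply_filter(self, name, predicate):
        before = len(self.candidates)
        self.candidates = [c for c in self.candidates if predicate(c)]
        self.trail.append(f"{name}: {before} -> {len(self.candidates)} variants")
        return self

variants = [{"gene": "BRAF", "allele_freq": 0.32},
            {"gene": "KRAS", "allele_freq": 0.04},
            {"gene": "TP53", "allele_freq": 0.18}]

# Each conclusion can be queried: the trail shows exactly which filter
# removed which fraction of candidates, and the filter can be re-tuned.
result = TransparentResult(variants).apply_filter(
    "allele_freq >= 0.05", lambda v: v["allele_freq"] >= 0.05)
```

A black-box tool would return only the final candidate list; the transparent version also returns the trail that justifies it.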

The need for a clear understanding of how patient data is processed and interpreted to reach a particular conclusion is becoming well recognized. In some cases, it has even become the subject of regulatory oversight. In Europe, for instance, the recently enacted General Data Protection Regulation (GDPR) includes provisions granting consumers and patients a “right to explanation,” including “the existence of automated decision-making and meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject” (3). Although the GDPR is broad in scope and reaches far beyond healthcare applications, its emphasis on the transparency and understandability of software outputs used for decision-making is likely to make “black box” approaches a thing of the past.

The technology is designed to help experts make decisions, rather than to make decisions for them.

Another differentiator is how data was processed to build the original knowledge base powering the CDS tool. Many options rely on machine learning, a fast and cheap method of churning through reams of data. Such artificial intelligence-based approaches have seen significant increases in adoption in recent years, but they remain limited by the size, quality, and “up-to-dateness” of the big data collections used to train the algorithms. Patient datasets are still too small for optimal clinical use (some experts estimate that one billion patient datasets will be needed for breakthrough algorithmic value). Another downside of the machine-learning approach is that when there are inconsistencies in the initial, smaller datasets, results will suffer. For example, two separate papers referring to the same gene by two different names will not be analyzed together – an issue that could lead to incomplete results and possibly an inaccurate interpretation or recommendation.
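The gene-naming problem is concrete and easy to reproduce: unless synonyms are mapped to one canonical symbol before evidence is aggregated, support for the same gene splits across names. A minimal normalization step looks like this (the alias table is a tiny illustrative sample; real resources such as HGNC maintain these mappings at genome scale):

```python
from collections import Counter

# Illustrative alias table: historical symbol -> current approved symbol
ALIASES = {"HER2": "ERBB2", "NEU": "ERBB2", "CD340": "ERBB2"}

def canonical(symbol: str) -> str:
    """Map any known synonym to the canonical gene symbol."""
    s = symbol.upper()
    return ALIASES.get(s, s)

# Five papers referring to the same gene under different names
papers = ["HER2", "ERBB2", "neu", "ERBB2", "CD340"]

naive_counts = Counter(p.upper() for p in papers)     # evidence fragments
normalized   = Counter(canonical(p) for p in papers)  # evidence aggregates
```

Without normalization the evidence looks like four weakly supported genes; with it, all five papers correctly reinforce a single gene.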

An alternative is CDS technology that incorporates both machine learning and expert-defined rules and algorithms, supported by well-curated input datasets – a hybrid computational approach that yields the best of both worlds. Critically, for this model to work, an organization must have the operational know-how, infrastructure, and expert staff to enable doctorate-level experts to create the baseline knowledge asset, review and assess the automated results, and adjust when needed before that information is entered into the knowledge base.
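A hybrid of machine learning and expert rules can be as simple as letting curated rules take precedence over a statistical score. The sketch below is purely illustrative: the “model” is a stub returning a probability, and the expert rules and variant names are invented:

```python
def model_score(variant: str) -> float:
    """Stand-in for a trained classifier's pathogenicity probability."""
    return {"GENE1 p.A1B": 0.55, "GENE2 p.C2D": 0.91}.get(variant, 0.50)

# Expert-defined rules, reviewed before entering the knowledge base
EXPERT_RULES = {
    "GENE1 p.A1B": "pathogenic",   # strong curated evidence overrides the model
}

def classify(variant: str, threshold: float = 0.8) -> str:
    # Curated rule takes precedence; otherwise fall back to the learned score
    if variant in EXPERT_RULES:
        return EXPERT_RULES[variant]
    return "pathogenic" if model_score(variant) >= threshold else "uncertain"
```

The design choice is the point: the model supplies scale, while the expert layer catches cases where the training data is too sparse or inconsistent for the score to be trusted.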

Finally, it is worth considering whether the CDS tool relies only upon its pre-loaded information. Some tools allow users to link internal data sources, such as the private knowledge bases many clinical labs are building from information about their own patient populations. Systems that make it possible to incorporate both internal and external data offer the most flexibility and value for hospital-based users, resulting in CDS technology that can be tailored to a particular institution’s patient population by leveraging data from that population.

When considering CDS technology, users should be careful to evaluate all of the features relevant to their laboratory's needs.
Looking ahead

CDS tools are following genomics and precision medicine into clinical use, starting with rare hereditary diseases and cancer. In the near future, I anticipate that these tools will be deployed globally – using patient privacy-sensitive approaches – for many more medical conditions. As that trend continues, healthcare organizations are likely to find that CDS tools are an important way to manage liability risk. Hospitals that lack a mechanism to ensure that decisions are based on the most up-to-date information and are being made in a reproducible, objective manner will not only be less likely to provide consistent, high-quality care, but also run a higher risk of lawsuits. CDS tools will allow organizations to make reproducible, accurate decisions for each patient – and in countries where frameworks are now being put into place to give consumers the legal right to an explanation for each medical decision, such tools will be essential for compliance.

When I look to the future, I see CDS technology paving the way for hospitals and labs with limited budgets to get into the realm of precision medicine, delivering better care for their patients in a cost-effective fashion.


  1. N Versel, “Data requirements, money hold back growth of precision medicine among health systems” (2018). Available at: bit.ly/2qFSb60. Accessed April 18, 2018.
  2. ER Mardis, “The $1,000 genome, the $100,000 analysis?”, Genome Med, 2, 84 (2010). PMID: 21114804.
  3. European Union General Data Protection Regulation, “Article 15, section 1(h): Right of access by the data subject” (2018). Available at: bit.ly/2vqMHBC. Accessed April 18, 2018.
About the Author
Ramon Felciano

Ramon Felciano is Chief Technology Officer of QIAGEN’s bioinformatics unit. He was a founder of Ingenuity Systems, now a QIAGEN company.
