
Drowning in Data

Genetic analysis gives us access to a wealth of information about our patients – but are we getting too much of a good thing? The benefits of that information are obvious, but the drawbacks are somewhat more complicated. What do we do with all of this new data? How do we curate it? What parts of it do we analyze, what parts do we interpret, and what parts do we provide to the clinician or the patient? And just who has the right to make those decisions in the first place?

Why am I so concerned by these questions? I’m a molecular geneticist running a diagnostic laboratory in an academic health science center. My lab offers testing for both hereditary and somatic conditions, so we have to be quite flexible in terms of the types of assays we develop. At the same time, like most of our colleagues, we have a limited budget – so we try to restrict the number of different platforms we use to meet our patients’ varied needs.

Several years ago, we began using next generation sequencing (NGS) assays. Why? It seemed clear to us that those platforms best supported the generation of large volumes of data from very small volumes of patient samples. The flexibility of the technology we use to identify relevant mutations for inherited and somatic disease alike meant that we could purchase and maintain fewer total pieces of equipment without compromising our ability to offer an appropriate range of tests.

Our small Canadian molecular laboratory is not unique. Regardless of jurisdiction, a similar revolution is taking place in almost every molecular diagnostic lab as NGS platforms move in and take up residence. For cancer testing, the appeal is obvious – labs can now generate data on tens, hundreds or even thousands of gene sequences using incredibly small amounts of patient tumor material. Even in the hereditary disease setting, the ability to obtain vast quantities of sequencing data from a single run comes as a big relief, allowing us to replace the slow historical approach of interrogating sequential gene candidates using labor-intensive Sanger methods. In either case, NGS gives us the power to deliver much more detailed results much faster than we ever could before.

But the thrill of the high-throughput platform is tempered by an abundance of questions. Which genes should we sequence? Which variants should we report? And what should we do with this unprecedented amount of data?

Placing your order

The questions start at the very beginning, as we decide on the best approach to the assay itself. We’re spoiled for choice; NGS platforms are quite accessible, because vendors have made great efforts to provide both commercially validated panels for a multitude of uses and support for laboratory-developed tests (LDTs). But whichever type of test we want to use, our first decision must be about the extent of its power. Is the patient best served by a genome-wide interrogation? By only the coding regions of the genes? By a set of the most likely candidate genes? Or by a hotspot approach that assays only actionable changes? Each of these possibilities carries its own set of pros and cons, and our decisions have to be made against the backdrop of an ever-changing knowledge base. Many labs will hedge their bets and validate an assay that is not strictly confined to actionable changes, knowing that information they can’t use today could easily underpin a drug target or diagnostic assay tomorrow. This leaves molecular labs with the tricky task of balancing their assay selection to meet both current and future needs – without necessarily even knowing what those future needs might be. And because each clinical testing lab faces different constraints in terms of budget, resources, knowledge and ability to predict the future, decisions like these fuel the lack of standardization that characterizes the current NGS era.

The use of commercial panels has an advantage over LDTs in that those assays are validated by the vendor, reducing the technical burden on the lab. The tradeoff is that the lab has no control over the panel content. So, for instance, I might be interested in looking for variants in specific areas of KRAS, EGFR, BRAF, KIT and NRAS, so I might select a panel that includes all these regions of interest – but also includes other cancer genes. That’s certainly a viable option, but now I’m left with another quandary: what do I do with all of the extra data that the clinician didn’t ask for, and the patient didn’t consent to have investigated?

What do I do with all of the extra data that the clinician didn’t ask for, and the patient didn’t consent to have investigated?

Many labs avoid this particular problem by assembling a custom panel to meet specific needs. In that case, they make decisions about which parts of which genes are clinically relevant, and then design the panel to accommodate only those targets. But those labs have to be careful, too; assays that are too focused are not very future-proof. Odds are that the lab will need to redesign and revalidate an expanded panel before much time has passed, just to keep pace with growing knowledge about the genetics of the lab’s disease or disease group of interest.

Points of View

An interview with Steven Ralston

What are the main pros and cons of genetic panel testing?

Access to genetic testing gives patients important information about their health, which may help them in making many health care decisions. On the flip side, results can be difficult to understand or interpret, not every test yields information that is actionable, and testing also has consequences for genetically-related family members.

What are the most important ethical issues to consider with this type of testing?

It is imperative that providers and patients understand the limitations of genetic testing and the possibility that results may not be unequivocally predictive of the presence (or absence) of any particular disease. A patient once said to me, “I thought that because it was a DNA test it was 100 percent accurate.” I think this perfectly demonstrates one of the most common misconceptions.

I also think the question of incidental findings is crucial: what should testing companies or laboratories do with information that is gleaned from genetic testing that is unrelated to the question originally posed or the disease initially screened for?

What can health care providers do to address these issues?

We must educate ourselves and our patients about genetic tests, their indications and limitations; this is key.

Steven Ralston is Chair of the Department of Obstetrics and Gynecology at Pennsylvania Hospital and Director of Obstetrical Service and Vice Chair of the Perelman School of Medicine’s Department of Obstetrics and Gynecology. He is currently on the American Congress of Obstetricians and Gynecologists’ Committee on Genetics and has lectured extensively on ethical issues related to genetic technologies, cesarean section on demand, abortion, and human rights.

Genetics test kitchen

Regardless of the selection process, the assay must be put through its paces before being placed onto the clinical lab test menu. Validation processes (1)(2)(3)(4) include identifying sample types that are appropriate for testing, ensuring that the method of nucleic acid extraction is adequate, and defining the metrics of analytic sensitivity and specificity. Even so, not all targets are equal; these parameters can be highly influenced by the genomic context of a given variant, leaving the possibility that some areas are less well scrutinized than others. It’s important to know where those areas are, and to determine whether we need additional measures to completely evaluate an intended set of targets. Likewise, for the foreseeable future, it remains impossible to validate every single variant we might detect in an assay with thousands of targets. So how do we manage validation? We rely, at least to some extent, on extrapolations of the panel’s performance in detecting different types of mutations – be they single-nucleotide changes, small insertions or deletions, or larger structural changes. Assays continue to provide information about their performance, and labs continue to learn about their limits, long after they are placed into clinical service.
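As a rough illustration of the metrics involved – not the author’s actual validation protocol, and with entirely hypothetical variant names – analytic sensitivity and specificity for a validation run can be tallied by comparing the assay’s calls against a reference (“truth”) set:

```python
# Hypothetical illustration: analytic sensitivity and specificity for a
# panel validation run, scored against a reference ("truth") call set.
truth_positives = {"KRAS:G12D", "EGFR:L858R", "BRAF:V600E"}  # variants known present
truth_negatives = {"KIT:D816V", "NRAS:Q61K"}                 # positions known wild-type
assay_calls     = {"KRAS:G12D", "EGFR:L858R", "NRAS:Q61K"}   # what the assay reported

tp = len(truth_positives & assay_calls)  # real variants the assay detected
fn = len(truth_positives - assay_calls)  # real variants the assay missed
fp = len(truth_negatives & assay_calls)  # wild-type positions called as variants
tn = len(truth_negatives - assay_calls)  # wild-type positions correctly not called

sensitivity = tp / (tp + fn)  # fraction of true variants detected
specificity = tn / (tn + fp)  # fraction of wild-type positions called correctly
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```

In practice these figures are stratified by variant type and genomic context, which is exactly why, as noted above, some regions of a panel end up better scrutinized than others.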

If we continue to adhere to this policy of validation testing as NGS panels become more routine, it’s likely to have a major impact on our lab budgets.

There’s also the question of orthogonal validation of important variants that might be identified in an NGS run. Many labs require that clinically relevant variants be verified by a second method appropriate to the sample type – but there is a price to pay for this comfort. First, there isn’t always sufficient material to carry forward to a second platform. And second, this standard isn’t applied to tests across the board. If we continue to adhere to this policy of validation testing as NGS panels become more routine, it’s likely to have a major impact on our lab budgets – an expense not every lab can afford, which is why I anticipate that we’ll increasingly curtail our reliance on this additional testing.

A sample’s journey

When a fully validated assay is available, patient samples (like blood, plasma, tissue, saliva, urine or bone marrow) can begin to flow through the lab. For hereditary conditions, the sample is generally sent up to the laboratory along with relevant paperwork. For somatic cancer, the tumor block is usually sectioned and a stained slide examined by a pathologist, who marks the appropriate area for nucleic acid extraction and testing prior to the molecular lab’s receipt of the sample.

Once in the lab, the sample information is accessioned into a secure database and the appropriate test ordered. We extract nucleic acid (usually DNA) from the sample and assess it to ensure sufficient quality and quantity. Then, we perform the assay, the technologist completes a preliminary analysis, and the results go into the secure database. Finally, the raw data go to the laboratory director or designate for final analysis and reporting. What does that entail? The lab director examines the data, compares the finding to that of the technologist, and provides a clinical interpretation of any variants identified. He or she then issues a report that explains the assay, its limitations and parameters, the result, and the clinical impact of that result.

This routine, in-and-out process – wherein patient samples come into the lab and results flow back out to the referring clinician – doesn’t address the larger questions that arise when we use panels that include genes and variants not specifically requested. In an era where lab testing is so expansive but the clinical utility is restricted to a handful of genes or variants, how do we deal with the “extra” data? This leads us to an important ethical question: should complete information derived from patient samples be provided back to the patient and their care provider? And if so, how?

The good, the bad and the complicated

The ethics of using panel testing can be considered in multiple ways. On the one hand, we are generating information that has not been requested, may not have been discussed with the patient or clinician, and may be of limited or no immediate clinical value. We might identify clinically meaningful variants that are not germane to the condition at hand, or variants with dubious clinical relevance. Disclosing such results to the referring clinician and ultimately to the patient could result in increased anxiety for no particular gain. Therefore, one might argue against panel testing because of the potential harm to patients and the increased burden on clinicians to handle these results when they do occur.

The other view suggests that it would be unethical to consume small and precious samples for only a very limited number of investigations when a much broader panel could provide the same clinically relevant results and a host of additional, near clinically relevant information as well. In the case of hereditary disease, it could also be considered ineffective patient care to test only for variants in a limited number of genes, when a negative result would trigger additional testing in a broader set of gene targets anyway. And then there’s the human factor to consider. It’s highly unlikely that labs and scientists would agree to put the genie back in the bottle, as it were – there’s no way to “unsee” the power of NGS approaches in the patient setting. Instead, we must determine thoughtful approaches to dealing with the consequences of using this technology.

What can we do? We have several options:

  • Not analyzing data beyond that specifically requested.
  • Analyzing all of the genes in a test, but disclosing only the specific result that was requested.
  • Obtaining informed consent from patients prior to testing, ensuring they understand the scope of the test and possible outcomes.

The first option seems wasteful – why generate all those data without ever analyzing or using them? The second seems ethically questionable and leaves lab directors in the untenable position of having to make the call on what to share and what to keep back. The third option is likely the best for the time being, but it’s cumbersome and requires significant buy-in from clinicians and counselors. It also requires us to thoroughly explain the complexities of panel testing, the potential outcomes, and the changing landscape to patients, who are naturally more worried about their own health and may not be focused on taking in a high volume of complex information. Nonetheless, a thoughtful approach to informing both patients and clinicians of the power and pitfalls of panel testing could at least ensure that everyone is forewarned of the possibility of unexpected results – making those conversations, if necessary, easier. This option also lets patients consent to the use of their data in research, one more way to maximize the value of very small samples.

This solution, however, brings up several issues for the clinical testing laboratory. First, who is responsible for the interpretation of the “extra” data? And how deeply must that person explore the scientific literature to be sure he or she can provide up-to-date information about variants whose clinical relevance may still be a long way off? Dealing with variants of unknown significance (VUS) is not a new problem, nor is it specific to panel testing. Any interrogation of DNA has the potential to identify a VUS, and clinical laboratories need a stated policy for handling them. Disclosing a VUS result to a clinician or patient can result in anxiety, frustration and misunderstandings – but non-disclosure can result in omitting information from the patient’s record that could one day, through lookback testing (reinterpreting results in the context of new scientific findings), be clinically important. Lab directors are not – and should not be – in the business of deciding which results to pass on and which to hold back. That means the onus is on the lab director to ensure that VUS are appropriately researched and explained, and that the data used to classify a VUS are kept accessible. Just what that means in the context of “additional” findings from NGS panels is an important issue, and one we as medical laboratory professionals need to discuss.

A second and more controversial issue is that of follow-up. As scientific evidence accumulates, variants move from being investigational tools to being validated targets that guide clinical management decisions. Who is responsible for linking historical information about a specific patient sample with newly emerging data? No part of the health care team is currently equipped to deal with the issue of “lookback” testing. There is some guidance around this issue as it relates to hereditary variants (5), but institutions are still left to themselves to identify best practices and implement a process that works well for patients and professionals. Even efficiently tracking which samples carry which variants is a huge logistical problem, because for many molecular labs, data issues like storage, handling, annotation and linkage are still unfamiliar territory.

Finally, there is the question of ownership. Who “owns” the data generated in laboratory investigations? If patients insist on being provided with a full record of results from a genetic assay, how does the lab handle the fact that some targets may not be completely validated, and that some results don’t have a solid clinical interpretation? If patients don’t fully understand the information they receive, might they try to make use of it in ways that are incorrect and potentially dangerous? How do clinicians deal with patients bringing these results to them without appropriate interpretation or oversight? How do we prevent the public confusion that might come from patients armed with raw genetic results trying to make sense of findings that don’t yet have a place in clinical practice? The questions, and the implications, are endless…

The Power of Pedigree

By Leigh Stott

One case that really highlighted the fallibility of panel testing for me occurred in a rural Australian mining and farming community with a higher-than-average (1 in 25) population cystic fibrosis (CF) carrier risk. A family physician referred a Caucasian couple to a genetic clinic for prenatal CF counseling. A detailed family pedigree showed CF-affected individuals in the paternal lineage – but according to the couple, the father himself had been “tested for everything, but nothing was found.” On the strength of this negative result, the family incorrectly believed that their pregnancy carried zero risk of CF, and that there was no need for concern or maternal testing.

Why was this such a problem? Recent research suggests that there are more than 100 disease-causing CF mutations, but common panels search for only about one-fifth of these.

A review of the father’s panel testing documentation revealed that a variant of unknown significance had in fact been reported. Additional counseling encouraged the mother to proceed with carrier testing – which returned a positive result for ΔF508, the most common CF mutation. Based on the pedigree and testing, the family was advised of a potential 25 percent risk of a CF-affected pregnancy.
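The 25 percent figure follows from simple Mendelian arithmetic for an autosomal recessive condition, assuming both parents are carriers – a quick sketch of the numbers, using only the figures quoted in this case:

```python
# Autosomal recessive risk arithmetic; case figures as described above.
from fractions import Fraction

# Each carrier parent transmits the CF allele with probability 1/2, and an
# affected child must inherit it from both parents: 1/2 * 1/2 = 1/4.
p_affected_both_carriers = Fraction(1, 2) * Fraction(1, 2)
print(p_affected_both_carriers)  # 1/4 -> the 25 percent risk quoted above

# Before maternal testing, with the father presumed a carrier from the
# pedigree and the mother at the local population carrier frequency of 1 in 25:
p_affected_prior = Fraction(1, 25) * p_affected_both_carriers
print(p_affected_prior)  # 1/100
```

The jump from a 1-in-100 prior to a 1-in-4 risk once the mother’s carrier status was confirmed is exactly why the maternal testing that the family initially dismissed mattered so much.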

This case illustrates the real risk associated with patients’ perception of infallibility in panel testing. Most genetic disorders don’t have reliable testing available – and even in those that do, the test is not an absolute guarantee. Variants of unknown significance, unknown disease-causing mutations, and the preconceptions of non-genetic medical professionals leave significant and potentially harmful gaps. That’s why I’d like all doctors to remember that the family pedigree remains the ultimate and most essential tool in genetic counseling.

Leigh Stott is a Certified Associate Genetic Counsellor, Australasian Society of Genetic Counsellors (ASGC). After eight years of specializing in neurological diseases, he currently serves as a clinical trial manager for ultra-rare genetic diseases in Denver, USA.

The train has left the station – now what?

How do we, as laboratory professionals, kick-start a conversation about these issues? First, we have to acknowledge that the train has already left the station. There’s no way to recall the power provided by NGS platforms, nor should we want to do so. The benefit of generating deep and accurate data about patient samples surely outweighs our concerns about the unintended consequences that may flow from such testing. But with that said, each institution should initiate its testing with broad discussions that include the whole team – clinical geneticists, genetic counselors, oncologists, surgeons, pathologists, radiologists, and patient advocates.

There’s no way to recall the power provided by NGS platforms, nor should we want to do so.

What should this kind of discussion entail?

  • In cases where incidental findings are possible, be clear about that and agree on how patients and clinicians wish to be informed. Have a written policy to outline clearly what will be done in these cases.
  • Variants of unknown significance will remain a challenge for years to come. Many national and international efforts have already produced excellent approaches to systematic handling of VUS interpretations (5)(6)(7)(8), so follow one or more of these guidelines rigorously to ensure that all VUS from your testing facility undergo the same careful scrutiny, and that the evidence for interpretation is well-documented and stored for future reference.
  • Consider sharing your VUS interpretation and data through any number of initiatives that support de-identified data sharing to improve our understanding of VUS in particular genes.
  • Consider a policy for periodic VUS review – although that brings with it a host of additional questions, including the wisdom of “lookback” interpretation and how best to provide a patient with a new interpretation after initial testing.
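To make the documentation point concrete: a systematic VUS policy implies keeping a structured, reviewable record for each interpretation. The sketch below is purely illustrative – the field names and the variant are hypothetical, though the evidence codes shown (PM2, PP3) come from the ACMG/AMP framework cited above (5):

```python
# Illustrative structured record for a VUS interpretation, so the evidence
# behind a call can be revisited during periodic review ("lookback").
import json
from datetime import date

vus_record = {
    "variant": "GENE:c.123A>G",                  # hypothetical variant
    "classification": "uncertain significance",  # one of the five ACMG/AMP tiers
    "evidence_codes": ["PM2", "PP3"],            # ACMG/AMP criteria applied (5)
    "interpreted_on": date(2016, 6, 1).isoformat(),
    "review_due": date(2017, 6, 1).isoformat(),  # trigger for periodic re-review
    "rationale": "absent from population databases; in silico tools predict damage",
}
print(json.dumps(vus_record, indent=2))
```

Storing the rationale and the date alongside the call is what makes later reinterpretation – and de-identified data sharing – practical rather than a forensic exercise.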

I have some advice for patients and their families, too: have patience. Understand that those of us working in clinical labs, uncovering genetic secrets in your cells or your tumors, are doing so because of a genuine wish to help you receive more effective and rapid treatment. We are still working out how to handle these complex situations and how best to balance the never-ending struggle between the amount of information we can generate and the amount of money we can spend. We are also trying to find the best ways to decide when a piece of genetic information is really useful to a patient and when it falls into the category of “research” (interesting, but perhaps not yet ready for clinical prime-time). Appreciate that the landscape is changing rapidly, and that it will continue to do so – and that, just like you, we’re doing our best to learn all we can and turn it into the best treatment and care for you.

Harriet Feilotter is an Associate Professor in the Department of Pathology and Molecular Medicine at Queen’s University, as well as Laboratory Director of Molecular Genetics and Service Chief of Laboratory Genetics at Kingston General Hospital, Canada.



  1. N Aziz et al., “College of American Pathologists’ laboratory standards for next-generation sequencing clinical tests”, Arch Pathol Lab Med, 139, 481–493 (2015). PMID: 25152313.
  2. K Fisher et al., “Clinical validation and implementation of a targeted next-generation sequencing assay to detect somatic variants in non-small cell lung, melanoma, and gastrointestinal malignancies”, J Mol Diagn, 18, 299–315 (2016). PMID: 26801070.
  3. R Kanagal-Shamanna et al., “Principles of analytical validation of next-generation sequencing based mutational analysis for hematologic neoplasms in a CLIA-certified laboratory”, Expert Rev Mol Diagn, 16, 461–472 (2016). PMID: 26765348.
  4. G Matthijs et al., “Guidelines for diagnostic next-generation sequencing”, Eur J Hum Genet, 24, 1515 (2016). PMID: 27628564.
  5. S Richards et al., “Standards and guidelines for the interpretation of sequence variants: a joint consensus recommendation of the American College of Medical Genetics and Genomics and the Association for Molecular Pathology”, Genet Med, 17, 405–424 (2015). PMID: 25741868.
  6. MM Li et al., “Standards and guidelines for the interpretation and reporting of sequencing variants in cancer: A joint consensus recommendation of the Association for Molecular Pathology, American Society of Clinical Oncology and College of American Pathologists”, J Mol Diagn, 19, 4–23 (2016). PMID: 27993330.
  7. MA Sukhai et al., “A classification system for the clinical relevance of somatic variants identified in molecular profiling of cancer”, Genet Med, 18, 128–136 (2016). PMID: 25880439.
  8. LM Amendola et al., “Performance of ACMG-AMP variant-interpretation guidelines among nine laboratories in the clinical sequencing exploratory research consortium”, Am J Hum Genet, 98, 1067–1076 (2016). PMID: 27181684.
