A Tradeoff Decision
How do we decide which NGS technology is the best for detecting fusion variants?
| 7 min read
sponsored by Thermo Fisher Scientific
Fusions – whether targeted by approved or investigational therapies – are among the rarer but most important variant classes in precision oncology. Although next-generation sequencing (NGS) has long been established as a technology of choice, especially when multiple biomarkers must be analyzed, debate continues over which NGS approach is “the best”. Experts Annarita Destro and Eric Vail share their thoughts on the issue below.
Can you please introduce yourself and your laboratories?
Destro: I lead the diagnostic molecular pathology lab. We analyze a variety of cancer samples every year: the majority are lung cancer (around 500), followed by colorectal cancer (around 250), melanoma (50), CNS tumors (50), and roughly 30 each of cholangiocarcinoma, sarcoma, and GIST.
Vail: I’m the director of a clinical molecular pathology laboratory, and we perform both solid and hematological tumor molecular analysis on approximately 2200 solid tumor samples and about 1000–1200 heme samples a year. The largest share of the solid tumor samples is lung cancer, while the rest is a mixture of colorectal, breast, pancreaticobiliary, CNS tumors, and others.
Which fusions are clinically actionable today?
Destro: As of today, the actionable fusions are ALK, ROS1, RET, MET, NTRKs for lung cancer, FGFR2 for cholangiocarcinoma, and NTRKs for gliomas and sarcoma.
How do you perform fusion detection in routine samples?
Destro: Since 2022 we have been using a 50-gene amplicon-based NGS panel in routine diagnostics.
Vail: We use a 500-gene amplicon-based NGS panel as a first-line testing method, and in some cases we confirm with fluorescence in situ hybridization (FISH) or anchored multiplex PCR-based NGS.
Why have you chosen amplicon-based NGS for first-line testing?
Destro: We chose it because it has a highly automated workflow that cannot be found on other platforms. We have a very high sample throughput that could not be sustained without this level of automation. The technology works with almost every type of sample, which gives us an extremely low failure rate – even when sample quantity is really small. For example, between June 2022 and March 2023 we profiled 471 lung cancer samples, 98.72 percent of which yielded valid RNA-NGS results. Furthermore, 99.78 percent (470/471) had successful DNA analysis, meaning that we are able to provide clinically meaningful results to virtually all our patients. In the past, with other technologies, we had failure rates of up to 50 percent, including samples with insufficient material for analysis. Moreover, most clinically actionable fusions are covered by our panel.
Vail: This technology is very robust. It requires far less material than other NGS methods, like large hybrid capture-based NGS panels or anchored multiplex PCR-based NGS. We routinely run samples with 10 ng of nucleic acid (NA) input and have even had success with less than 5 ng. This is very important because many real-life tumor samples – roughly 20–30 percent – are of very low quality or quantity. If we were unable to test those, we would exclude large numbers of samples, and ultimately patients, from targeted therapy options.
Destro: There is the occasional technical failure, which can be resolved, and the analysis software may make more than one call for deletions and insertions (due to amino acid loss following mutation), but these can be checked manually against the original file.
Vail: The disadvantage of amplicon-based NGS is that it cannot detect all fusions, including novel fusions and partners. Some other technologies can, but as I said, they require large amounts of material for testing, which is often not available. In an ideal world where every sample was fresh frozen, of perfect quality, and large in size, that would not matter. But in real life we have mostly FFPE tissue samples, many of which those methods would not be able to test at all. So, it’s a tradeoff decision in the end.
Could you explain further?
Vail: You get extreme sensitivity with amplicon-based chemistry because you’re targeting both partners and can amplify them from very little original material. And the specificity is very high as well – I do not remember a single false positive intergenic fusion in the years we’ve used this technology. But, given the design of amplicon-based assays, they cannot detect what is not targeted. Some of the sarcoma genes, which are highly promiscuous in their fusion partners, are not well covered, nor are some fusions important in pediatric cancers. That’s because it’s hard to design targeted panels for these very low-frequency tumors and rearrangements.
Also, amplicon-based assays cannot detect novel fusions. There is a rare but meaningful number of these, and as we expand our knowledge by sequencing more, there will be more of them – and they are not covered by a targeted design.
How many novel fusions are typically missed?
Vail: It’s difficult to establish what the true miss rate is, but we can estimate it. We have all of our validation data, where we used other technologies, plus cases that were sent out to other laboratories. We have also been running FISH for ALK, ROS1, and RET on more than 1000 lung cancer samples in parallel, and we have discovered just one novel ROS1 fusion partner. Adding all these data together, we believe the miss rate is less than 1 percent of all solid tumor cases – and because the targeted panels are very well designed for lung cancer, it’s even lower there, about 0.1 percent. Is it important for that patient? Yes. But if we used a different technology that requires high NA input to detect these rare novel fusions, we might lose 20 percent of patients, or more. So that’s the tradeoff. All technologies have some tradeoffs.
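This tradeoff can be made concrete with a back-of-the-envelope calculation using the approximate rates quoted above (a sketch with assumed round numbers, not measured data):

```python
# Back-of-the-envelope comparison of two testing strategies, using the
# approximate rates quoted in the interview (illustrative assumptions only).

# Strategy A: low-input amplicon-based NGS.
# ~99% of real-world samples are testable; ~0.1% of actionable fusions
# (novel partners outside the panel design) are missed.
testable_a, missed_a = 0.99, 0.001
found_a = testable_a * (1 - missed_a)  # fraction of actionable fusions found

# Strategy B: a high-input method that can detect novel fusions,
# but cannot be run on ~20% of low-quality/low-quantity samples.
testable_b, missed_b = 0.80, 0.0
found_b = testable_b * (1 - missed_b)

print(f"Strategy A finds ~{found_a:.1%} of actionable fusions")
print(f"Strategy B finds ~{found_b:.1%} of actionable fusions")
```

Under these assumed rates, the low-input panel identifies more actionable fusions overall, even though it misses the occasional novel partner – which is the tradeoff Vail describes.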
How do you detect novel fusions?
Destro: We do not proactively look for them. As a diagnostic unit, we focus on identifying what is proven to be clinically relevant right now. However, I believe novel fusion identification is important in the clinical and translational research space. In the future we could run alternative approaches like anchored multiplex PCR-based NGS, IHC, or FISH to see whether we missed something, especially for negative cases where the patient characteristics (young, female, never-smoker) suggest the presence of a genetic alteration. However, these other NGS technologies work only on “perfect” FFPE samples, so they would not be the optimal way to look for novel fusions in more difficult samples, as lung samples tend to be.
Vail: Our panels include a 5’/3’ imbalance assay for ALK, RET, NTRK1/2/3, and FGFR2. It looks for over-expression of the retained 3’ portion of the gene and under-expression of the 5’ portion that is lost in the fusion. It works in the sense that the sensitivity is pretty high; however, the specificity is too poor for a clinical assay to use as a standalone. So, in my opinion, it is a screening tool. If it’s positive without a concurrent targeted fusion call, we run a confirmatory assay afterwards. We don’t report it on its own, as it might well be a false positive. We also perform, and recommend, reflex testing with another method for truly driver-negative lung cancer samples.
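The 5’/3’ imbalance idea can be sketched numerically. Below is a minimal illustration with hypothetical read counts and a hypothetical cutoff – clinical assays use validated statistics and thresholds, not this simple ratio:

```python
# Sketch of 5'/3' expression imbalance screening (hypothetical numbers).
# When a fusion occurs, the retained 3' kinase portion of the gene is driven
# by the partner's promoter, so its expression rises relative to the lost 5' end.

def imbalance_ratio(reads_3prime: int, reads_5prime: int, pseudocount: int = 1) -> float:
    """Ratio of 3' to 5' read counts; values much greater than 1 suggest a possible fusion."""
    return (reads_3prime + pseudocount) / (reads_5prime + pseudocount)

THRESHOLD = 5.0  # hypothetical screening cutoff; positives need orthogonal confirmation

# Hypothetical ALK expression counts from an RNA panel:
wild_type = imbalance_ratio(reads_3prime=40, reads_5prime=38)   # balanced, ratio near 1
suspected = imbalance_ratio(reads_3prime=800, reads_5prime=25)  # strongly imbalanced

print(f"wild-type ratio {wild_type:.2f} -> flagged: {wild_type > THRESHOLD}")
print(f"suspected ratio {suspected:.2f} -> flagged: {suspected > THRESHOLD}")
```

This also shows why the approach is a screening tool rather than a diagnostic one: anything that skews 3’ expression (degraded RNA, alternative promoters) can inflate the ratio, which is why a flagged sample goes to a confirmatory assay.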
What is important when choosing NGS technology and which is the best?
Vail: Every lab director should look at NA input requirements, sensitivity, and ability to detect novel fusions – all in context of the available workflow automation, the local technical expertise, and labor costs. For our patient population, we have chosen a technology which we believe enables us to test more patients and detect more actionable variants, including fusions. The advantages we discussed outweigh the disadvantages – which we are aware of and have secondary strategies in place to offset.
Destro: It’s important to look at: 1) the need to reliably detect gene rearrangement events in all clinical samples, even with little tissue; 2) the level of automation and ability to detect DNA and RNA alterations simultaneously; and 3) whether the workflow is really complete and does not require new special skills. In my opinion, we have chosen the best solution for our routine diagnostic samples: highly automated, 7-day measured turnaround time, and simultaneous DNA/RNA analysis.
Annarita Destro is Head of Molecular Pathology Department, Humanitas Research Hospital in Milan, Italy.
Eric Vail is Director of Molecular Pathology, Cedars-Sinai Medical Center, Los Angeles, California.
You can find the full interview with Dr. Eric Vail – including his advice to different types of laboratories and why he is passionate about the democratization of genomic profiling – at www.oncomine.com/blog