Is next-generation sequencing (NGS) all it's cracked up to be? Following their packed panel session at ESCMID Global 2025, we sat down with Etienne Ruppe and Stefan Green to discuss NGS platform selection, challenges in application, and what the future holds.
Could you please give us a brief overview of your experience in NGS thus far?
Etienne Ruppe: I’m a professor of bacteriology at Université Paris Cité; I also work as a clinical bacteriologist at Bichat–Claude Bernard Hospital and conduct research at the IAME Research Center in Paris. My experience with NGS started with studying antibiotic resistance. From there, I expanded into microbiota research and then clinical metagenomics. My focus is on analyzing genomes to track antibiotic resistance genes in complex environments, such as clinical and microbiota samples.
Stefan Green: I’m director of Core Laboratory Services at Rush University Medical Center in Chicago. I manage the DNA sequencing core and also hold a faculty position in the Division of Infectious Diseases. My background is in microbiology and microbial ecology, and I’ve been running core facilities for 14 years. My postdoctoral work involved bacterial whole genome sequencing. I started in environmental microbiology and now work on both environmental and host-associated microbiomes, including human studies – covering topics from Parkinson’s disease to how spaceflight affects the gut microbiome.
What’s your NGS “elevator pitch”?
ER: I think NGS brought a new level of scale and depth. Before, with techniques like PCR, we had to know exactly what genes we were looking for, and if we didn’t, cloning was the only option – which was slow and difficult. Now, with just one experiment, NGS gives you broad, detailed data.
It’s really about scale and accessibility. This has led to many new applications, like high-resolution strain tracking, identifying mobile genetic elements that used to be hard to detect, and sequencing clinical samples where unexpected findings can appear. NGS has also revolutionized microbiota research – it used to be limited and low-resolution, but now it's far more advanced. I think the microbiota field has benefited the most from NGS.
SG: Agreed – I always joke that during my PhD, I spent four years sequencing just 150 genes – and that was enough for a couple of papers and my degree. I was proud of it! Now, if a sequencing run doesn't yield 15 billion sequences, we think something went wrong. So yes, as Etienne said, the scale has massively increased.
One important point is that NGS can be an agnostic discovery tool – you don’t need to know what you’re looking for ahead of time. It can detect bacteria, viruses, and even eukaryotic pathogens all at once. And because of the increased scale and reduced cost, you can now run larger, more complex experiments that would have been impossible or too expensive before.
Also, sequencing costs have dropped faster than almost anything else in science or society over the last 20 years. That’s opened the door to new uses. For example, in proteomics, scientists now attach DNA tags to antibodies and use sequencing to count proteins. The same idea is being explored for detecting metabolites. So we’re likely to see sequencing used as a readout method for all kinds of biological questions – just because it’s fast and affordable.
What are some of the primary factors guiding NGS platform decision making?
SG: There are a few key factors to consider, but a big one is cost. Another is how stable the company behind a sequencing platform is – will it still be around in a few years? Many researchers stick with the reliable choice that’s been stable for over a decade. But newer platforms offer different things, like cheaper instruments and reagents, longer read lengths, or different quality levels. It's a competitive and somewhat risky space right now, with many platforms to choose from.
So, yes, cost really matters. But so does the type of sequencing you need. Short reads work well for tasks like re-sequencing or counting things you already know are there. But if you're assembling a genome or trying to link genes – like for antibiotic resistance – then long reads are better.
Turnaround time is another factor: do you need results in an hour, a day, or a week? And lastly, availability plays a role. Some instruments are more accessible, have more reagent options, or are compatible with automation, which makes them easier to use. That tends to give established platforms an edge.
ER: Yes, as Stefan mentioned in our presentation at ESCMID Global 2025, it really depends on your specific needs. For example, in our lab, we mainly work with E. coli, so we don’t need a large, high-throughput platform – just something suited to genome-scale work. But when we need to do something more complex, like metagenomic sequencing, we find the most appropriate platform for that and often outsource it instead of buying expensive equipment.
I think a key takeaway from our discussions is that today, with so many options available, you can match the platform to your needs. Ten years ago, it was the other way around – you had to adapt your work to the limitations of the available platforms.
SG: Just to add one more point – no single sequencing platform can do everything. Different applications need different types of sequencers. Shared resource facilities are a great way to access the technology you need without buying the equipment yourself. For example, your lab might already have a device, but if you need to sequence 10 human genomes, it wouldn’t make sense to buy a million-dollar machine – you’d outsource it.
This kind of shared setup helps the whole research ecosystem. Why invest in a machine that costs a lot to maintain if you’re not using it all the time? Plus, the technology changes fast – what you buy today could be outdated in four years. It’s smarter and more flexible to use shared resources when needed.
How much weight should labs give to read length and throughput vs ease of integration into existing workflows?
SG: As we’ve been emphasizing, it really depends on the application. For example, if you’re working with a new isolate and need to fully close its genome, you must use long-read sequencing – short reads just won’t work for that. We also do a lot of 16S amplicon sequencing, where long reads can help improve species-level identification. Short reads can still work, but the results aren’t as accurate.
In general, short-read sequencing is more cost-effective, easier to use, and better supported with automation. So unless there’s a clear need for long reads, we prefer short-read platforms. Some researchers want to do everything themselves, and they can get a long-read sequencing instrument for about $1,000. That said, it also requires ancillary equipment and bioinformatics support, so not everyone wants to run this in their own lab. But when someone comes to us, we always assess whether long reads are truly necessary – otherwise, we recommend short reads for simplicity, speed, and cost.
What are the key considerations for optimizing turnaround times?
SG: I sat in on the clinical metagenomics session before ours at ESCMID Global, and nearly everyone there was using the same type of sequencing platform. The main reason was speed – they needed same-day results. In clinical settings where you have only a few samples and need rapid answers, certain technologies that offer fast library preparation and real-time data output are a strong choice. You can get usable data within a few hours, often with minimal automation.
Other platforms are improving their turnaround time but still aren’t as fast. Some generate data in real time as DNA passes through the sequencing system, while others build up data more slowly, base by base, and don’t yield results until the entire run is complete.
So in clinical metagenomics, the priority is often speed rather than read length. That rapid turnaround makes certain sequencing platforms particularly appealing. Of course, the broader question remains: is faster always better, or is it sometimes worth waiting longer for more detailed and comprehensive results?
ER: In cases like sepsis or severe infections in intensive care – such as pneumonia – we need results as quickly as possible. Multiplex PCR is currently a strong option; it's not perfect, but it's fast and reliable. We're hoping that metagenomic sequencing will be the next big step, but right now the turnaround time is still about six to seven hours. That means if you take a sample in the morning, you get results by the afternoon – still a bit too slow to compete with current methods.
Another challenge with real-time sequencing is timing. You may get early results – say, you detect Klebsiella – but only later pick up important resistance genes. That delay means we can’t act too quickly on early findings without risking missing critical information. We might need to define time points in future studies, such as confirming that results after one hour are reliable enough to identify pathogens, while resistance genes might require more time.
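To make the timing question concrete, here is a minimal, hypothetical sketch in Python of the kind of checkpoint logic Ruppe alludes to: a pathogen call is only reported once an early time point is reached, while resistance-gene calls are held until a later one. The checkpoint times, read thresholds, and names are illustrative assumptions, not values from the interview or from any validated assay.

```python
# Hypothetical time-point logic for interim reporting from a real-time sequencing run.
# All thresholds and checkpoint times below are illustrative assumptions.

from dataclasses import dataclass

PATHOGEN_CHECKPOINT_MIN = 60      # assume pathogen ID is considered reliable after 1 h
RESISTANCE_CHECKPOINT_MIN = 180   # assume resistance genes need more sequencing time
MIN_READS_FOR_CALL = 50           # arbitrary evidence threshold for this sketch

@dataclass
class RunState:
    elapsed_min: int
    species_reads: dict[str, int]     # reads assigned to each species so far
    resistance_reads: dict[str, int]  # reads assigned to each resistance gene so far

def interim_report(state: RunState) -> list[str]:
    """Report only what the pre-defined time points allow us to trust."""
    report = []
    if state.elapsed_min >= PATHOGEN_CHECKPOINT_MIN:
        for species, n in state.species_reads.items():
            if n >= MIN_READS_FOR_CALL:
                report.append(f"Pathogen call: {species} ({n} reads)")
    else:
        report.append("Too early for a reliable pathogen call")
    if state.elapsed_min >= RESISTANCE_CHECKPOINT_MIN:
        for gene, n in state.resistance_reads.items():
            if n >= MIN_READS_FOR_CALL:
                report.append(f"Resistance gene detected: {gene} ({n} reads)")
    else:
        report.append("Resistance profile pending: keep sequencing")
    return report

# Example: at 90 minutes, a species may be callable while resistance genes are still pending.
print(interim_report(RunState(90, {"Klebsiella pneumoniae": 320}, {"blaKPC": 12})))
```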
So, while the technology shows a lot of promise in urgent cases, it's not yet fast or sensitive enough to fully replace existing methods.
SG: It’s exciting to think about how rapidly sequencing technology is evolving. For example, there’s an upcoming high-throughput nanopore sequencer, which – if it delivers on expectations – could combine the speed of nanopore sequencing with the data volume and quality of short-read platforms. The idea is that you could have a two-hour sequencing run and get something like 10 gigabytes of data per sample – enough to be confident in your results without needing to extend the run for 24 hours to see what else might appear. That could eliminate the current uncertainty that comes with early, incomplete results.
This kind of hybrid system – offering fast turnaround and high-quality, high-volume data – could be a perfect fit for clinical applications. Of course, this instrument hasn’t been released yet, so everything is still speculative: we don’t know its cost, performance, or even availability. But it’s clear that we’re approaching a point where these capabilities will be possible, and that’s a game-changer for the field.
It’s a striking contrast with just a few years ago. Now, with sequencing costs dropping so drastically, it’s often cheaper and easier to simply generate more data than to fall back on the data reduction strategies we once relied on. The questions we ask today are completely different because we’re no longer limited by cost or scale in the same way. Technology has changed not just the answers we can get – but the way we think about the questions themselves.
What are some of the limitations preventing routine implementation of NGS?
ER: In our ESCMID Global session, we ran a poll to discuss this very issue. One of the biggest challenges participants mentioned is the lack of bioinformatics expertise. Many users aren’t sure what to do with the large amounts of data they get from sequencing. Bioinformatics can feel overwhelming because it often requires coding, not just clicking through simple software. While user-friendly tools do exist – especially for genome analysis, where data volume is relatively small – not everyone is aware of them.
Things become more complex with metagenomic sequencing, like microbiota or clinical metagenomics, because the data volume is much larger and requires more computing power. Although private companies offer bioinformatics solutions, these often come with high costs. Keeping up with constantly evolving tools, classifiers, and databases is also a major hurdle – new software becomes outdated almost as soon as it's released. In many research labs, pipelines are still built in-house, which means hiring skilled bioinformaticians. Over time, this might get easier as more accessible tools emerge and AI becomes more helpful in analysis.
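As an illustration of what a single step of an in-house pipeline can look like, the sketch below wraps a run of an open-source taxonomic classifier. Kraken2 is used only as a familiar example; the database path, file names, and thread count are assumptions, and a production pipeline would pin tool and database versions, as Ruppe notes.

```python
# Hypothetical single pipeline step: taxonomic classification of metagenomic reads.
# Tool choice, database path, and file names are illustrative assumptions.

import subprocess
from pathlib import Path

DB = Path("/data/kraken2/standard_db")            # pre-built reference database (assumed)
R1, R2 = Path("sample_R1.fastq.gz"), Path("sample_R2.fastq.gz")

cmd = [
    "kraken2",
    "--db", str(DB),
    "--threads", "8",
    "--gzip-compressed",
    "--paired", str(R1), str(R2),
    "--report", "sample.kreport",                 # per-taxon summary for downstream review
    "--output", "sample.kraken",                  # per-read assignments
]
subprocess.run(cmd, check=True)                   # fail loudly if the classifier errors out
```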
Another concern is cost – not just the machines and reagents, but also the staffing and storage. Fast turnaround requires dedicated personnel, which isn’t always realistic given current staffing shortages and financial pressures. While sequencing costs have dropped significantly, data storage has become more expensive. Researchers are now debating whether it’s better to store data or simply re-sequence samples when needed, which is a big shift from earlier days when generating even a single sequence was a major achievement.
Still, most participants felt that NGS clearly offers value over conventional methods, especially for diagnosing difficult infections where turnaround time isn’t urgent. However, for critical cases where rapid decisions are needed, NGS may not yet be fast enough to fully replace existing tools.
SG: Those are great points. I also believe that soon the cost of bioinformatics will surpass the cost of sequencing itself – especially as the focus shifts toward getting accurate clinical answers. Some companies have already invested millions in building curated databases to support this. While you can run raw sequence data through public databases like NCBI, those aren’t always curated, which can lead to incorrect results or misannotations. For clinical use, having reliable, up-to-date databases is essential – and that comes at a price.
The challenge is that people are generally willing to pay for sequencing, but not for bioinformatics. That mindset needs to change. Sequencing gives you a tangible output – the raw data – which feels like a product you’re buying. But interpreting that data correctly is crucial. It’s like buying a bicycle and expecting the instructions on how to ride it to come free – but in reality, expert guidance costs extra.
Ultimately, people will need to understand that the value of accurate data interpretation is worth paying for – especially in critical clinical settings. Changing that perception will be a major challenge.
Why is it so difficult to standardize this type of sequencing?
SG: I don’t think standardization is impossible, but it’s definitely more complicated for metagenomic sequencing than for targeted tests. In a typical lab test for one organism, like with real-time PCR, you use known standards to quantify results. But metagenomic sequencing is agnostic – it can detect anything, even organisms we didn’t know existed. For example, there was no specific test for SARS-CoV-2 in December 2019, but shotgun metagenomics could still have detected it. That discovery aspect makes standardization harder.
That said, it is possible to build a standardized workflow: use a specific robot for DNA extraction, another for library prep, feed the sample into a sequencer, and process the data through a defined bioinformatics pipeline. You’d also need proper controls – positives and negatives – at every step, from extraction to sequencing. But this requires careful planning: for instance, deciding whether to spike in control organisms, and if so, which ones. It’s tricky because different organisms – bacteria, viruses, fungi – have different properties and require different extraction methods.
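As a rough illustration of that idea, here is a minimal sketch of how such a standardized workflow could be written down, with a positive and a negative control attached to every step from extraction through to bioinformatics. The step names, instruments, and control materials are placeholders chosen for illustration, not a validated design.

```python
# Hypothetical run sheet for a standardized metagenomics workflow with per-step controls.
# Step names, instruments, and control materials are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Step:
    name: str               # e.g. "DNA extraction" on a specific robot
    positive_control: str   # spike-in or reference material carried through this step
    negative_control: str   # blank carried through this step

@dataclass
class Batch:
    samples: list[str]
    steps: list[Step] = field(default_factory=lambda: [
        Step("DNA extraction (robot A)", "mock community spike-in", "extraction blank"),
        Step("Library preparation (robot B)", "reference DNA library", "no-template control"),
        Step("Sequencing", "run control library", "unused barcode"),
        Step("Bioinformatics pipeline (pinned versions)", "expected taxa recovered", "blank yields ~0 reads"),
    ])

    def run_sheet(self) -> str:
        """Emit a human-readable checklist: the samples plus the controls at every step."""
        lines = [f"Samples: {', '.join(self.samples)}"]
        for step in self.steps:
            lines.append(f"- {step.name}: +ctrl = {step.positive_control}; -ctrl = {step.negative_control}")
        return "\n".join(lines)

if __name__ == "__main__":
    print(Batch(samples=["patient_001", "patient_002"]).run_sheet())
```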
In fact, I think nucleic acid extraction is one of the most overlooked steps in the whole workflow. If you don’t consistently extract genetic material from all types of microbes, then it doesn’t matter how good your sequencing or analysis is. Without standardized extraction protocols, the whole process can break down.
ER: Yes, I agree that for genome sequencing, standardization is definitely achievable. The real challenge lies in sequencing clinical or ecological ("eco") samples. These samples vary widely in terms of the organisms present, the amount of human DNA, and how the sample was collected and stored – for example, whether it was frozen or not.
That’s why DNA extraction is the hardest part. You’re dealing with many different situations, and that makes it difficult to standardize. For microbiota specifically, it’s a complex type of sample, but there has been progress. The International Human Microbiome Standards project has helped develop standard methods for these samples, so it’s definitely feasible – at least for that sample type.
So, overall, I’d say microbiota sequencing is fairly well standardized. The bigger challenge is adapting to the variability in clinical samples – but even that is possible with effort.
What common mistakes or oversights do you see labs make when choosing or implementing an NGS platform?
ER: I would start by asking: What will I do with the data? What’s the goal? One of the biggest mistakes in this field is failing to communicate with the rest of the team involved in NGS. It's essential to bring everyone to the table – microbiologists, clinicians, clinical microbiologists, and bioinformaticians – so they all understand each other’s needs and perspectives.
If there’s no clear communication, especially between clinicians and bioinformaticians, misunderstandings can happen, leading to mistakes in how data is interpreted or used. Just like in a hospital, where different specialties must collaborate to give the best care, the same applies here. Bioinformaticians are now part of that team and should be fully integrated into the discussion.
SG: I completely agree with Etienne – you really need to involve your bioinformaticians from the start. You don’t want to generate all your data and then realize it can’t be properly analyzed, or that you didn’t include the right controls or enough replicates. Planning ahead with the full team helps avoid these problems.
Also, you don’t need to reinvent everything. There are well-established protocols and plenty of good literature available, especially for combined DNA and RNA workflows. Use those resources instead of starting from scratch.
Another important point: if you don’t absolutely need to buy your own sequencer, don’t. The technology is evolving fast, and shared resource facilities can meet most needs. Even a single sequencer in a hospital can be used for multiple purposes. Some instruments can handle more than one run at a time, and smaller platforms can be scaled with multiple devices. DNA and nucleic acids can be shipped easily, even internationally, so you can send samples to labs that have the tools you need. Unless you require rapid turnaround, you don’t need to invest in every new machine yourself.
A common mistake is focusing too much on the sequencer itself and not enough on the upstream steps, like nucleic acid extraction and library preparation. These parts benefit the most from automation – even though they cost more, they’re worth the investment. In fact, the sequencer is probably the least complex part now. What really matters is having consistent, automated prep that doesn’t depend on which technician ran the sample.
In your talk at ESCMID Global, Stefan, you stated that by the end of the presentation, everything discussed would already be out of date. With this in mind: are we fighting a losing battle?
SG: I don’t think it’s a bad thing that sequencing technology keeps changing – I actually see it as a sign that we’re making real progress. Costs are dropping and capabilities are improving, which means we’re winning, not falling behind. Think about it: you wouldn’t celebrate using the same tech from 10 years ago. Change means advancement.
Yes, the constant updates can be frustrating. Sequencers become outdated, pricing models shift, and newer versions can make previous methods incompatible. But those changes also reflect big improvements in sequencing quality. So yes, the market is a bit chaotic right now, with many competing platforms and rapid change. But that competition is good – it drives innovation, improves performance, and lowers prices. It may feel overwhelming at times, but overall, the field is moving in the right direction.
ER: Agreed, at the end of the day, we win – as researchers, as clinicians – because we have more options to provide our patients with improved care.
Finally, if you had to summarize your advice for clinical professionals in a sentence or two, what would it be?
ER: In my view, NGS is a powerful tool – but it's just that, a tool, not a magic solution. Like any other test, it needs to be used in the right context and applied where it fits best. We once hoped metagenomic sequencing could be a catch-all method since it can detect almost anything, but it has its limitations too. Our job now is to understand those strengths and weaknesses, and use NGS where it truly adds the most value.
SG: To follow Etienne’s comment, I’d emphasize that nucleic acid extraction is crucial – it’s the starting point of the entire workflow. Even if everything else is done perfectly, poor extraction will affect your results.
More broadly, the key message is: you’re not alone. You don’t have to build everything from scratch. Talk to shared resource facilities, bioinformaticians, and others who’ve done this before. There’s no one-size-fits-all solution, but there are reliable workflows you can follow and adapt. It might seem overwhelming at first, but with the right support and advice, it becomes much more manageable.