At a Glance
- High-profile public health emergencies have raised awareness of the importance of infectious disease diagnostics, but huge unmet needs still exist, especially in low- and middle-income countries
- Validation of diagnostics is another major concern; many laboratory tests have not been adequately validated
- It is difficult for laboratory professionals to determine which tests perform reliably; medical education is needed to support effective decision-making
- There are many exciting innovations in infectious disease testing, but a change in the approach to hospital financing is necessary for the field to progress and for patient care to be improved
If you scour decades’ worth of health stories in the consumer press, you’re likely to see a great number of articles on “groundbreaking” developments in the treatment of a given condition. Fast forward to the present day and, though these (often sensationalist) stories still abound, you are more likely than ever before to read about a new diagnostic test or screening program in your daily newspaper. Much of this attention has been driven by the molecular diagnostics revolution – particularly in oncology – but recent public health emergencies, such as Ebola and Zika, have also had an impact. And though increased awareness of the critical importance of good diagnostics is great for the field, there are downsides. We spoke with Mark Miller, Chief Medical Officer of bioMérieux, to find out more...
Would you agree that high-profile infectious disease outbreaks have raised public awareness of diagnostics?
Absolutely. Though diagnostics have always been a major preoccupation of bioMérieux and many other companies, the general infectious disease focus for many decades has been on therapeutics and vaccines, which is great, but diagnostics have had to take a backseat. Vaccines for dengue, Zika, and meningococcal disease, for example, need to be accompanied by the proper diagnostics; otherwise, we would never know whether the vaccines are successful. Moreover, without diagnostics we would remain ignorant of the true prevalence of these conditions. Finally, I believe the lay public, clinicians, and labs are all now better sensitized to the dangers of ignoring diagnostics.
What does increased awareness mean for diagnostic test development?
There’s a huge buzz around infectious diseases right now, with many researchers and manufacturers wanting to ride that wave. However, operating in the diagnostics space requires thorough pre-market testing and validation to ensure high-quality performance – but not all companies or labs are equipped for this task. The outcome? There is now a plethora of diagnostic tests on the market – some CE marked, others awaiting decisions from regulatory bodies, some with very limited regulatory oversight – and it has become hard for labs to figure out which tests are reliable and which are not.
As a manufacturer operating in the infectious disease diagnostics business for more than five decades, we place a lot of emphasis on validation and on ensuring that we achieve the performance that labs and patients expect. And though such stringent processes favor companies with the right level of resources, such as ours, smaller companies can struggle with the complicated regulatory landscape – and that’s a tragedy, because good tests can sometimes be kept from reaching the market.
Lab-developed tests (LDTs) further muddy the waters. Some labs have excellent internally validated LDTs, but others do not, which raises the issue of heterogeneity in both performance and the definition of validation. Should validation be local only? And how much validation is required before a test can be used in a hospital, lab, or clinic? I know that the FDA is working on harmonizing validation standards, but it’s a huge quagmire...
What are the consequences of inconsistent validation practices?
I can offer an extreme example. During the Ebola crisis, a huge number of tests were being developed to diagnose the virus, but few were being validated in the field, which meant that their performance – and, by extension, the accuracy of their results – was questionable. That’s dangerous, and it’s just one example; the same applies to many other targets – Zika, antibiotic resistance... Many people are out there making claims, but very few of those claims have actually been validated.
It’s a big problem and, unfortunately, companies with good tests tend to be viewed with the same level of distrust as those with poor ones. And that’s not surprising; sometimes the labs, clinicians, and even the regulatory bodies (until they go through the data) are not knowledgeable enough to tell them apart.
How can validated tests be distinguished from unvalidated ones?
In general, it’s back to the old adage of ‘buyer beware.’ When considering a test, laboratories must ask about performance criteria and validation – and they need to ask if the test has been validated in the population and for the disease of interest. It can be a minefield.
How can the complexity of the regulatory process be improved?
I think regulators need to apply the same approach to diagnostics as they do to therapeutics. For example, there are shortcuts and priority programs for orphan drugs and for treatments with high medical value, such as novel antibiotics for multidrug-resistant organisms. Let’s create criteria that would designate a diagnostic test as important and prioritized, and incentivize its development and validation; incentives could be tax-related, regulatory, or research-related. Reimbursement is also key. A diagnostic should be reimbursed based not only on the technology used but also on its medical value – whether that be a reduction in length of hospital stay, reduced antibiotic use, improved patient outcomes, and so on. This approach does not currently exist for diagnostics, but those of us working in industry are all pleading for it.
But haven’t discussions around diagnostic reimbursement already begun?
They have, but they are in their infancy. People acknowledge that the system is not functioning properly, and we’re starting to see a link being made between health technology assessments (HTAs), which evaluate diagnostic tests, and their actual impact on a patient, hospital, or society. That’s not something that has happened before in the US. A good example of where the approach is already being applied is the UK’s National Institute for Health and Care Excellence (NICE). There’s also a new program in France that links HTAs and health economic outcomes to reimbursement, and the same is happening in other parts of Europe. But many countries still use the old model, where, for instance, every molecular test is reimbursed irrespective of patient value. And that really concerns us, because diagnostics are not all the same and they don’t all bring the same medical value.
Collecting that data and making the cost argument is going to take a long time…
Yes, it will. And it’s also expensive and complicated. For instance, most HTAs conducted by NICE take between one and two years and require a tremendous amount of resources. And that means that we have to wait for a government, an agency or somebody like the WHO to take an interest in evaluating diagnostic assays. We, the diagnostic companies, can only do so much; we certainly can’t conduct health outcome assessments with the breadth and scope that governments desire.
What are the areas of current unmet need in infectious disease diagnostics?
Without doubt, the biggest infectious disease killers in the world today: diarrheal and respiratory diseases. Both are killing children in Africa, South America, and Southeast Asia. We can make a huge difference by diagnosing the responsible pathogen among the 15 to 25 possible candidates, and within a short enough timeframe to administer treatment so that children can eat properly, grow, go back to school, and survive.
Even basic infectious disease tests are still not performed in many countries in Africa; for example, blood cultures for people with a high fever who are at risk of sepsis. Why not? Well, the reasons are complex and relate to price, regulation, the healthcare system, payers, and reimbursement. The bottom line is: low- and middle-income countries don’t have these basic tests, and that should be bothering everybody. I would love to see good quality, basic infectious disease diagnostics in all countries.
What are the most interesting areas of innovation in the field?
There are three key areas that I feel are really making a difference right now. The first is the so-called ‘syndromic approach’ to testing, which is one of the most interesting developments this field has witnessed in the past five years or so. In infectious diseases, the classic approach involves guessing what the pathogen may be and then selecting the specific test(s) for it. But the introduction of new technology with multiplex nucleic acid detection capabilities – multiplex PCR – is allowing us to test for up to 30 or so different pathogens simultaneously, including markers of antibiotic resistance. Not only is this getting us to the root of what’s causing the patient’s illness, it’s also reducing the time it takes to get there. So we finally have diagnostic tests that mimic the way clinicians approach patients – by syndrome rather than by pathogen guesswork – and I think this is very exciting.
The second relates to speed and simplicity. We now have the technology to conduct a test at the point of care. These improvements allow clinicians to reach a treatment decision while the patient is still present, instead of having to send them home and wait for the result to be returned. The value of this advance cannot be overstated: it will lead to better patient management, better diagnosis, and, probably, better use of antibiotics – whether that means not using them at all or using more targeted ones instead.
Finally, mass spectrometry. It’s an accepted fact that the technology – and its ability not only to differentiate between pathogens but to do so in 15 minutes rather than days – has revolutionized the labs where it has been introduced. And yet, huge numbers of labs, in the US and elsewhere in the world, have not adopted it. I think that’s largely owing to a lack of understanding in the lab medicine community of its capabilities – and that’s not their fault. After all, they are flooded with new developments almost every day.
So, awareness of new developments among the lab community is an issue?
I believe it is. Many hospitals and labs actually still use the same microbiology techniques they’ve been using for 100 years! Perhaps the main reason is that microbiology and infectious disease diagnostics are only seen as a small branch of lab medicine – relegated to what I call “third cousin” status below cancer diagnostics and tissue pathology.
For instance, I read the results of a survey conducted in France showing that general physicians were still not using simple rapid tests for group A strep (1). The tests were vastly underutilized even though they are cheap, perform well, and have been available for years. Why? Many physicians didn’t know they existed, some weren’t aware of the improvements in performance, others were worried about reimbursement, and so on.
At the other extreme, we have highly complex tests like multiplex PCR syndromic panels and novel assays, such as innovative biomarkers for sepsis prognosis and kidney injury evaluation. Despite the availability of these valuable tests, not enough is known among the lab community to ensure their widespread use for the right patient under the right conditions. I think laboratory professionals need help, and we, as manufacturers, certainly have a role to play. How? By developing evidence-based medical education. Just look at procalcitonin, which is probably the single most useful biomarker in infectious diseases today: it took over 10 years to achieve significant market adoption. There was a lot of skepticism among clinicians and labs before they realized that there was enough data out there to trust it and use it. Hopefully, we’re a little faster today at getting the message out, but I still think more education is needed in the infectious disease space.
What impact do different financing models have on the adoption of new techniques and tests?
Siloed financing is the single biggest barrier to the adoption of novel diagnostics – at least in hospitals. It’s a huge problem. The truly progressive hospitals are the ones that break down the silos between departments; where administrators look at budgets transversely across the hospital and allow one department to benefit from the savings made in another, or, conversely, where an investment in a particular department directly benefits another. For example, you could demonstrate that a $50,000 machine in the pathology lab reduces the length of an emergency department visit by 12 hours, making it a worthwhile investment (a quick back-of-envelope calculation like the one below makes the point). Far too many hospitals today still have the antiquated silo financial structure, where each department is responsible for its own budget and is under continuous pressure to save money and justify spending. This is a Neanderthal approach; it’s damaging, inhibitive, and it has to go.
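To make that arithmetic concrete, here is a minimal back-of-envelope sketch in Python. Only the $50,000 instrument cost and the 12-hour reduction in visit length come from Miller’s example; the emergency department bed-hour cost and the annual patient volume are purely illustrative assumptions, not figures from the interview.

```python
# Back-of-envelope payback calculation for a cross-department ("transverse")
# view of a diagnostic investment. All figures below marked "assumed" are
# hypothetical placeholders chosen only to illustrate the reasoning.

instrument_cost = 50_000        # one-time cost of the pathology lab machine (USD, from the example)
ed_bed_hour_cost = 100          # assumed cost of one emergency department bed-hour (USD)
hours_saved_per_patient = 12    # reduction in ED visit length (from the example)
patients_per_year = 500         # assumed number of patients affected annually

# Savings accrue to the emergency department, not the lab that pays for
# the machine - which is exactly why siloed budgets hide the value.
annual_savings = ed_bed_hour_cost * hours_saved_per_patient * patients_per_year
payback_years = instrument_cost / annual_savings

print(f"Annual ED savings: ${annual_savings:,}")      # $600,000 under these assumptions
print(f"Payback period: {payback_years:.2f} years")   # ~0.08 years, i.e. about a month
```

Under these (deliberately rough) assumptions the machine pays for itself in about a month – but only if the hospital accounts for the savings across departments rather than within the lab’s own budget.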
I think there are enough good models out there of how it can work successfully, and hopefully that ship will turn around eventually.
What can industry do to help?
I can’t stress enough the importance of investment in R&D. Companies can really help address the issues I’ve been discussing by continuing to innovate in infectious disease diagnostics, driving down the time to results, and increasing their medical value. We cannot content ourselves with just selling products; we have to sell solutions, and investment in infectious disease diagnostics must be an ongoing focus.
1. J Carlet, P Le Coz, “Let’s save antibiotics together”: Proposals of the Ad Hoc Working Group for the Preservation of Antibiotics, Ministry of Social Affairs, Health and Women’s Rights (2015). bit.ly/2k5Zvr4
Mark Miller is the Chief Medical Officer of bioMérieux.