In short, lab tests are as critical to patient care as drugs or medical devices. While the federal government regulates the latter two categories for safety and efficacy, however, it does not provide similar oversight for lab-developed tests — those created and used within single laboratories, including hospital labs and large clinical lab operators such as LabCorp and Quest Diagnostics. These tests have grown into a multibillion-dollar industry, which Food and Drug Administration Commissioner Robert Califf calls “one of the most significant gaps” in U.S. health-care regulation. He’s right, and the FDA is right to propose changing that.
Under the proposed plan, which is open for public comment until Dec. 4 and would not take effect until sometime next year, the FDA would require sound evidence that lab-developed tests work as intended — as it does for all medical devices. The rule would be phased in over about four years to avoid unduly interrupting the use of existing laboratory tests or tests intended for small patient populations. It is hardly burdensome: lab-test developers already have a professional responsibility to make sure their own tests work, so it should be easy enough to provide that evidence to the FDA.
The status quo originated in 1976, when Congress gave the FDA authority to regulate in vitro diagnostic tests as medical devices. The agency began to regulate manufactured test kits, but it chose to exempt labs that created their own tests for their small patient populations. In those pre-software days, tests were simpler and lower-risk. Since then, though, they have proliferated and become more complex. And many laboratories have taken advantage of their latitude to use unregulated tests, bringing in specimens from doctors far and wide for evaluation.
As tests developed by individual laboratories became ever more complicated and were used to diagnose ever wider populations, however, the risk of error also grew. When test results come out wrong, it is often because the testing processes are flawed or inadequate, or because the tests are designed to identify proteins, cholesterol, genes or other markers that have no proven connection to the patient’s suspected illness. Then patients end up with a mistaken understanding of their condition and receive therapy that doesn’t work for them, or they are given the appropriate therapy in an inappropriate quantity.
Notably, the phony finger-prick blood test created by Theranos — that founder Elizabeth Holmes claimed could detect all sorts of conditions — was a lab-developed test and thus unregulated. This is perhaps an extreme example of what can go wrong, but in recent years the FDA has turned up more anecdotal reports of inaccuracies even among tests whose manufacturers make much more modest claims. These have involved possibly inaccurate tests for issues such as lead poisoning, autism, heart disease and cancer, as well as flawed prenatal genetic screens. The agency can’t measure their actual frequency, though — because it heretofore has not monitored laboratory-developed tests.
Independent academic research has documented flaws, though. In one study of tests for genetic markers of colorectal cancer, investigators sent the same engineered cell samples to 19 laboratories for evaluation and found that nine of the labs’ tests made five or more errors. Most were false negatives, especially dangerous where successful treatment depends on early detection. During the pandemic, an FDA analysis of 125 laboratory-developed coronavirus tests revealed that 82 (66 percent) had major validation problems or design flaws.
Opponents of FDA regulation, including academic medical centers (which develop tests of their own) and large commercial laboratories, say it could slow the creation of new tests and make development more expensive. But new tests benefit medicine only if they work; misdiagnosis raises the cost of health care, too. And misdiagnosis happens a lot: so often that most Americans can expect to experience it at least once. If regulation improves diagnostic accuracy, it could create net savings for the system.
Human error by medical personnel will always occur; bad tests aren’t the only cause of misdiagnosis. But they are a cause the FDA can do something about. Once it enacts regulation, the FDA will be able to build a database of lab-developed tests used in the United States, providing the knowledge necessary to diagnose, and treat, patients with maximum accuracy.