The ‘Fatal Flaw’ in a Government Report on ER Misdiagnoses
The New York Times reported last week that a newly released federal government study estimates that up to 250,000 people die in the U.S. annually due to misdiagnoses made in emergency rooms.
However, in a large document obtained by Inside Medicine that is not yet public, one expert contributing to an internal review of the report prior to its publication found a “fatal flaw” in the methodology behind some of the most crucial and eye-catching findings. Other reviewers and technical experts raised additional major concerns, which the study authors did not fully address before the report’s release. The technical expert concerned about the “fatal flaw” wrote that the results were “Headline grabbing, yes, but this is at best gravely misleading, given the concerns….”
Emergency medicine organizations have already pointed out major problems in the report. One thing not yet pointed out is that the magnitude of the findings fails every whiff test imaginable. If the report’s findings were somehow true, that would mean that 8.6% of all deaths in the U.S. (250,000 out of 2.9 million deaths in 2019, the last pre-pandemic year) are caused by mistakes and misses in ERs. That’s preposterous on its face.
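The arithmetic behind that percentage is simple enough to check, using the report’s 250,000 figure and the roughly 2.9 million U.S. deaths recorded in 2019:

$$\frac{250{,}000}{2{,}900{,}000} \approx 0.086 = 8.6\%$$

Put another way, the report implies that roughly one in every twelve deaths in the country traces back to an ER diagnostic error.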
Below, you’ll find the internal review in PDF form. It contains reviewer comments (left column) and the author replies (right column). Overall, I would characterize the reviewer comments as a mix: supportive yet confused, concerned, and (in some instances) openly admitting to being out of their depth. On one hand, a reviewer praises the report. But a paragraph later, the same reviewer finds vast problems that render the findings impossible to interpret. Such concerns did not keep those same findings off the front page of major newspapers.
For example, “Peer Reviewer #1” was largely supportive of the report but worried that the conclusions and recommendations might be wrong. That’s kind of a major problem. The reviewer wrote that “[Emergency Departments] have often been criticized for the overuse of diagnostic tests and an emphasis on diagnostic error has the potential to increase testing among low-risk patients, leading to increased costs, identification of incidentalomas [random findings of unclear meaning, which are usually benign] which adversely impact patient wellbeing, radiation exposure, etc.” This is exactly the right concern to raise. It is the crux of the problem, and while it gets lip service in the paper, it is never really dealt with.
Indeed, later, this same reviewer writes about the perils of over-diagnosis. For example, if a patient with a virus is admitted for IV antibiotics that they do not need (rather than sent home to get better), dangerous secondary infections, such as C. difficile, can occur.
The theme that emerges is that this report focuses on diagnostic misses without acknowledging that the opposite problem (over-diagnosis, and over-admitting patients to the hospital for further tests and treatments in needle-in-the-haystack searches) comes with harms of its own. Remember: if over-diagnosis causes more harm than the benefit of catching a rare missed case provides, the system has failed overall.
This paper seems to have forgotten that. It seems to treat carefully reasoned risk-benefit decisions as if they were missed diagnoses. As the first reviewer notes, “[Over-diagnosis] is an important component of misdiagnosis…I feel this section that introduces overcalls should be more balanced throughout the manuscript.” For their part, the authors basically ignored the reviewer comments whenever those comments got too close to rendering any of the conclusions null and void. Instead, they responded with window dressing rather than further analysis.
That same reviewer also noted that leaping from European data to a U.S. estimate of mortality and disability “caused” by ER errors is a massive problem. And yet, that’s exactly how the 250,000 deaths per year figure was generated. “U.S. estimates are made on limited…data primarily from Europe…” the reviewer noted. Indeed, in Europe, emergency medicine is not even a recognized specialty in every country. While many European nations have excellent ERs, some are still on the learning curve. Comparing European ERs to U.S. ones (ours are staffed by seasoned board-certified emergency physicians and other experts in emergency care) is like drawing conclusions about the U.S. soccer team from data on the French team’s performance. “These results should be more tempered with acknowledgment of the limitations of these numbers,” the reviewer wrote. And yet, as more than one reviewer predicted, the headline findings, whether true or not, were too delicious for the media to pass up.
Another reviewer pointed out that “hindsight bias” was not considered in the report. For example, consider what happens when an ER discharges a patient who has been ruled out for a heart attack. What happens if that patient goes on to have a fatal heart attack in the next 30 days? Is that a missed diagnosis? No. It is the failure to have a crystal ball, even after the patient has passed every test designed to identify such risks. The only way to catch such rare events (the theory would seem to be) would be to hospitalize hundreds, thousands, or even tens of thousands of such patients. But doing that would cause more harm than good, because a small number of those patients would die of complications from overly aggressive invasive procedures, hospital-acquired infections, falls, and the like. In fact, the American Heart Association (and emergency medicine organizations) have protocols designed to make sure that ERs do not admit too many patients, specifically so that we do not do more harm than good.
Jeremy Faust, MD, is editor-in-chief of MedPage Today, and an emergency medicine physician at Brigham and Women’s Hospital. This post originally appeared in Inside Medicine.