You would expect that an expensive, high-tech tool would be particularly valuable in the clinical setting, wouldn’t you? It seems intuitive – the lofty price tag should guarantee advanced capabilities that a lower-cost technology lacks. All that money has to be pooled into something innovative.
In the case of a $400 million computer-aided detection (CAD) software now used for 90 percent of mammograms, apparently not. According to a recent study published in JAMA Internal Medicine, the mammography tool has a negligible effect on radiologists’ ability to detect breast cancer. It’s supposed to double-check screening results, but it doesn’t improve cancer detection and may even contribute to missed diagnoses.
In the study, which covered nearly half a million mammograms interpreted with or without the CAD software, radiologists detected cancer in four of every 1,000 women screened (invasive cancer in three of those four). Lead author of the study Constance D. Lehman, MD, PhD, notes, “Even more troubling, when we studied the 107 radiologists who interpreted both with and without CAD, we found that a given radiologist tended to miss more cancers when using CAD than when he or she didn’t use the software. It may be that radiologists reading with CAD are overly dependent on the technology and ignore suspicious lesions if they are not marked by CAD.”
So why use the technology at all, if it’s proving to be an unnecessary drain on healthcare funding? This software seems to have taken root in the clinical setting – especially so, considering clinics are willing to pay its steep price – without sufficient risk-benefit analysis. If CAD doesn’t demonstrate any improvement in detecting breast cancer, there’s no reason patients should pay for it. Apparently patients can receive a perfectly adequate diagnosis without it!
Since a reimbursement ruling in 1998, the CAD mammography tool has practically come standard on most mammogram machines. Many radiologists don’t like using CAD – it can miss important areas while flagging problems that don’t exist. Dr. Debra Monticciolo, chair of the American College of Radiology’s breast-imaging commission, shared her view in a Seattle Times article: “They were trying to give us the latest technology. If you’re asking me in my own personal practice, most of us would not feel tremendously affected by not using CAD. We do not feel dependent on it.”
The fact that many radiologists might let a potentially worthless technology supplant their usual procedures, simply to embrace the latest thing, is a little troubling. It raises the question of what happens to clinical practice when new technology is introduced in the clinic. Will healthcare professionals develop a potentially dangerous reliance on the technology, causing human error to skyrocket? New tech seeks to eliminate human error all the time, but diagnostic work especially requires some intuition to reach a conclusion.
Of course I advocate for healthcare’s technological innovation in general – I wouldn’t have a job without it – but things like this really need to be worked out in the research and development process. Shouldn’t there have been some sort of preliminary study of CAD mammography’s actual benefit over traditional mammography methods? It’s the equivalent of building a mousetrap with all kinds of fancy bells and whistles that performs no better (and perhaps worse) than a glue trap.
I don’t want to see what could be a huge (albeit expensive) technological improvement in breast cancer screening fall by the wayside because of a lack of research on effectiveness. I’m ever the optimist – maybe CAD mammography has some hitherto undiscovered way to detect cancer. Let’s get on that before our pockets are emptied even further.