Health policy analysts from the University of Pennsylvania and the University of California have published a Policy Forum in Science calling for more rigorous standards for evaluating and deploying AI medical applications. In their paper, Ravi Parikh, Ziad Obermeyer, and Amol Navathe propose five standards they believe should be met before AI applications are approved for clinical use.
Using AI applications to diagnose medical conditions or to predict outcomes under different treatment options is still a new and rapidly evolving practice. The authors note that AI-based algorithms and tools have only recently been integrated into medical prediction. Their five standards are meant to safeguard patients whose care involves AI applications or devices.
The first standard is to establish meaningful endpoints: the claimed benefits of an application must be clearly defined and subject to FDA validation, just as drugs and devices are. The second is to establish benchmarks appropriate to the clinical area in which an application is used, so its usefulness and quality can be evaluated. The third is to specify variable inputs clearly, so that other institutions can reproduce the conditions when testing a new application or device. The fourth is to define the clinical interventions that would follow from an AI system's findings, and to determine whether those interventions are effective and appropriate. The last is to implement regular audits, which must account for changes in the underlying data and in an algorithm's performance as both drift over time.
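The auditing standard can be illustrated with a minimal sketch. The snippet below is a hypothetical example, not taken from the paper: it assumes an application's accuracy was recorded at approval time and flags the application for review when accuracy on a recent batch of cases falls more than a chosen tolerance below that baseline. All function names, thresholds, and data are illustrative assumptions.

```python
# Hypothetical audit check: compare a deployed model's recent accuracy
# against the accuracy recorded at approval, and flag drift.
# Names, the 0.05 tolerance, and the sample data are all assumptions.

def audit_accuracy(predictions, outcomes):
    """Fraction of cases where the model's prediction matched the observed outcome."""
    correct = sum(p == o for p, o in zip(predictions, outcomes))
    return correct / len(predictions)

def needs_review(baseline_accuracy, recent_accuracy, tolerance=0.05):
    """Flag the application for review if accuracy fell by more than `tolerance`."""
    return (baseline_accuracy - recent_accuracy) > tolerance

# Illustrative audit: accuracy at approval was 0.90; a recent batch scores lower.
recent_preds   = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
recent_actuals = [1, 0, 0, 1, 0, 0, 0, 0, 1, 1]
recent_acc = audit_accuracy(recent_preds, recent_actuals)  # 8/10 = 0.8
print(needs_review(0.90, recent_acc))  # True: accuracy dropped by 0.10 > 0.05
```

In practice such an audit would use clinically meaningful metrics and statistically adequate sample sizes; the point of the sketch is only that auditing means periodically re-measuring performance against the evidence that justified approval.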
The authors also caution that, because medical AI applications are so new, current regulations may not apply well or be enforced consistently. They therefore suggest that medical companies adopt a "promise and protection" approach to AI medical devices, ensuring the technology stays focused on delivering better healthcare for patients.