8. AI for diagnostic imaging
With artificial intelligence, the quality of the output depends on the quality of the data used to train the AI application. Unreliable AI function can lead to misdiagnoses or prompt inappropriate care decisions. In imaging systems that use ionizing radiation and incorporate AI, for example, an inaccurate interpretation of an image could affect the patient’s treatment plan or the radiation dose delivered, according to ECRI.
A key challenge in developing and refining an AI algorithm is overcoming bias in the data. AI applications are inherently biased toward patient populations that “look like” the population used to develop the algorithm. If the training data does not accurately represent a particular patient population, the resulting output may not be appropriate for those patients.
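The effect described above can be shown with a deliberately simple sketch. The cohorts, biomarker values, and threshold rule below are all invented for illustration; the point is only that a decision rule fitted to one population can degrade sharply on a population whose measurements are distributed differently.

```python
import random

random.seed(42)

def make_cohort(baseline, n=1000):
    """Toy patients as (reading, has_condition) pairs. The condition raises
    the reading about 5 units above the cohort's baseline; readings are noisy.
    All values are invented for illustration."""
    patients = []
    for _ in range(n):
        sick = random.random() < 0.5
        reading = random.gauss(baseline + (5 if sick else 0), 1.5)
        patients.append((reading, sick))
    return patients

def accuracy(patients, threshold):
    """Fraction of patients correctly flagged by a simple cutoff rule."""
    return sum((r > threshold) == sick for r, sick in patients) / len(patients)

train = make_cohort(baseline=10)    # population the algorithm was developed on
shifted = make_cohort(baseline=14)  # underrepresented population, higher baseline

# "Training": pick the cutoff that best separates the development cohort.
threshold = max((t / 10 for t in range(50, 250)),
                key=lambda t: accuracy(train, t))

print(f"accuracy on training-like cohort: {accuracy(train, threshold):.2f}")
print(f"accuracy on shifted cohort:       {accuracy(shifted, threshold):.2f}")
```

Because the shifted cohort's healthy patients read above the learned cutoff, many are flagged as diseased: the rule is not wrong in general, it is wrong for patients who do not resemble its training data.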
Healthcare providers who use AI applications have little visibility into how an AI implementation reaches its decisions. ECRI recommends that healthcare institutions conduct a risk-benefit assessment of AI functionality to evaluate the safety and effectiveness of medical technologies that incorporate AI. A key part of this process is verifying that the data used to train the algorithm represents the organization’s patient population.
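One step in such a verification could be sketched as follows: compare the demographic mix of the vendor's training cohort against the institution's own patient population. The demographic categories, counts, and flagging threshold below are all hypothetical; a real assessment would use the vendor's published training-cohort statistics and clinically meaningful criteria.

```python
def distribution(counts):
    """Normalize raw category counts into a probability distribution."""
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def total_variation(p, q):
    """Total variation distance between two categorical distributions (0..1)."""
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

# Invented age-band counts for illustration only.
training_cohort = {"under_40": 1200, "40_to_65": 2600, "over_65": 1200}
local_population = {"under_40": 300, "40_to_65": 900, "over_65": 1800}

gap = total_variation(distribution(training_cohort),
                      distribution(local_population))
print(f"demographic gap (total variation): {gap:.2f}")
if gap > 0.15:  # illustrative cutoff, not a clinical standard
    print("training data may not represent this patient population")
```

Here the local population skews much older than the training cohort, so the check flags a gap that would warrant closer review before deploying the algorithm.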