Healthcare can’t fully embrace artificial intelligence until it’s better understood — and this device developer says explainable AI is critical.
By Sreeni Narayanan, EarliTec Diagnostics
Artificial intelligence (AI) now performs feats that would have seemed far-fetched just a few years ago. Users simply enter prompts and quickly receive proposed actions or substantive answers, with a potential impact in healthcare that seems virtually boundless.

However, the calculations that occur between point A and point B, the input and the output, are a mystery to most people. How GPTs and other AI programs do what they do remains poorly understood. While the "how" may be of negligible importance in many industries, that is not the case in the highly regulated and complex field of healthcare.
Decoding explainable AI
Until we achieve a better understanding of and regulation around AI, its rollout into the life-and-death business of medicine will continue to falter and suffer setbacks. But make no mistake: AI is here to stay, and it is up to each of us to ensure it reaches its full potential to provide the tools healthcare providers need and the improvements in care that patients deserve. With the World Health Organization expecting the global shortage of medical professionals to reach 10 million over the next six years, every participant in the healthcare ecosystem will be pressured to do more with less. AI can make that possible.
Governments and industry groups are pushing to create ethical AI, efforts likely to have a positive impact. But we as developers must also do our part. We need to ensure we are creating products and services that healthcare, and each of the participants in the ecosystem, will embrace. It is within our reach and our responsibility to create explainable AI (XAI) and overcome AI skepticism.
The Defense Advanced Research Projects Agency (DARPA) defines explainable AI as the creation of machine learning techniques that “produce more explainable models, while maintaining a high level of prediction accuracy; and enable human users to understand, appropriately trust, and effectively manage the emerging generation of AI partners.”
Before AI can handle some of the most sensitive information on the planet — personal health data — in a bid to diagnose disease and chart courses of treatment, the technology must be explainable. By providing explanations alongside results, XAI models aim to enhance trust, accountability and understanding of AI systems, avoiding many of the common concerns of generative AI or black box algorithms.
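To make the idea of "explanations alongside results" concrete, here is a minimal, purely illustrative sketch: a linear model whose output can be decomposed into per-feature contributions, so every prediction can be traced back to the inputs that drove it. The feature names, weights, and scoring rule are hypothetical assumptions for illustration, not EarliTec's actual model.

```python
# Hypothetical sketch: an interpretable linear model that returns a
# prediction alongside a per-feature explanation. Feature names and
# weights are illustrative assumptions, not a real clinical model.
import math

WEIGHTS = {"social_fixation": -2.0, "object_fixation": 1.5}
BIAS = 0.2

def predict_with_explanation(features):
    # Each feature's contribution is weight * value, so the output
    # can be decomposed term by term and inspected by a human.
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    score = BIAS + sum(contributions.values())
    probability = 1.0 / (1.0 + math.exp(-score))  # logistic link
    return probability, contributions

prob, why = predict_with_explanation(
    {"social_fixation": 0.8, "object_fixation": 0.3})
```

Because the model is additive, the returned `why` dictionary is itself the explanation; a black-box model with many stacked layers offers no such direct decomposition.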
Explainable AI at EarliTec Diagnostics
I built explainable AI for EarliTec Diagnostics, a company that develops a novel diagnostic and assessment device for children with autism. Our EarliPoint Evaluation is the first objective measurement tool designed to aid clinicians in the diagnosis and assessment of autism in children from 16 to 30 months of age.

EarliPoint displays curated scenes of social interactions on a portable tablet, and its embedded eye-tracking technology measures more than 120 focal preferences per second, something only AI can accomplish. The technology uses digital biomarkers supported by XAI to track how a child watches the short videos, indicating whether the child has autism. This breakthrough in the diagnosis and assessment of autism began with the creation of XAI models.
Only 20% of children with autism are diagnosed by age 3, and many cases go undiagnosed and untreated until school age. We hope that earlier diagnosis with EarliPoint will allow people with autism to do amazing things sooner.
The inner workings of our models are interpretable at every stage, and the decisions the models make are explainable. Our technology is not meant to replace clinicians, but to assist them in quickly and objectively measuring key characteristics of a child to inform clinical decision-making. These tools can help clinicians become more accurate and efficient in a diagnostic process that today is based entirely on behavior and generally does not happen until a child is at least 30 months old.
We leveraged our model transparency to determine whether certain biases we detected originated from clinical settings. Others can do the same.
5 steps for building explainable AI
For every health technology company, making AI explainable will bring distinctly different challenges. Diseases, treatments and patient populations vary widely, so a one-size-fits-all solution will not work. The common denominator remains that companies developing new technologies to solve our biggest healthcare challenges should be using XAI models wherever possible to help drive adoption.
We took five steps to overcome AI skepticism in the autism space, which can serve as a roadmap for other developers:
- Create discovery and replication cohorts: In building AI models with data, create two independent cohorts. The discovery cohort is where initial patterns are detected, and the replication cohort is where the generalization of that pattern is tested. This ensures results are replicable.
- Develop preregistration plans: Outlining how data will be collected and then allocated to these cohorts ensures they stay independent of one another. Protocols can be the same for both cohorts, or you can intentionally use cleaner data during discovery and then test whether observed patterns persist under real-world variability in the replication cohort.
- Train the model on diverse datasets: Expose your model to diverse datasets and multiple instances of data. We used roughly 1,100 patients for the discovery cohort and the replication cohort. This helped create transparency and minimize bias.
- Reduce hidden layers: In programming, simplicity can be difficult. But this is what we aimed for, with explainable AI as our goal. With AI, the black box is created by overlaying complex modeling inputs and using multiple layers of machine learning. For simplicity’s sake, we used as few layers as possible.
- Don’t seek to replace clinicians: Don’t build with the idea of cutting the clinician out of the equation. Combine the strengths of clinicians and AI models to improve the overall experience and outcome for patients.
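The first two steps above can be sketched as a discovery/replication loop on synthetic data. Everything here is an illustrative assumption, not EarliTec's method: the data is randomly generated, the "model" is a single frozen threshold, and the cohort sizes are arbitrary. The point is the workflow: a pattern is fit only on the discovery cohort, then tested unchanged on an independent replication cohort.

```python
# Hypothetical sketch of the discovery/replication workflow: find a
# decision threshold on a discovery cohort, then test whether it
# generalizes on an independent replication cohort. All data here is
# synthetic and the threshold "model" is purely illustrative.
import random

random.seed(0)

def make_cohort(n):
    # Synthetic (score, label) pairs: cases (label True) tend to
    # score lower than non-cases.
    cohort = []
    for _ in range(n):
        label = random.random() < 0.5
        mean = 0.35 if label else 0.65
        cohort.append((random.gauss(mean, 0.1), label))
    return cohort

def accuracy(cohort, threshold):
    hits = sum((score < threshold) == label for score, label in cohort)
    return hits / len(cohort)

discovery = make_cohort(550)    # initial pattern detected here
replication = make_cohort(550)  # generalization tested here

# Discovery: pick the threshold that best separates the cohort.
disc_acc, threshold = max(
    (accuracy(discovery, t / 100), t / 100) for t in range(1, 100))

# Replication: apply the frozen threshold to unseen data.
rep_acc = accuracy(replication, threshold)
```

If `rep_acc` falls far below `disc_acc`, the "pattern" was likely overfit to the discovery cohort, which is exactly the failure mode the two-cohort design is meant to catch before a model reaches patients.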
These deliberate steps bring transparency and openness to AI models. When they are put into practice, biases become easier to eliminate and results easier to replicate.
Building thoughtful and reliable XAI models has been key to EarliTec’s success. By following this roadmap, other developers can also be a driving force behind XAI models to win over customers who might be skeptical of the technology.
AI continues to make inroads into our healthcare system, but this will not be considered a positive development until more AI models can be understood and explained. XAI can make this a reality.
Sreeni Narayanan is the chief technology officer of EarliTec Diagnostics, where he leads data scientists, computer vision experts and software engineers developing models for the early diagnosis of autism and for assessment during treatment. He has more than 30 years of experience and repeated success identifying white-space opportunities in the healthcare and medical device space, and has led the development of transformative solutions based on machine learning and related technologies in leadership roles at early-stage startups and a global market leader.
The opinions expressed in this blog post are the author’s only and do not necessarily reflect those of Medical Design & Outsourcing or its employees.