The World Health Organization (WHO) yesterday warned of the risks of using artificial intelligence (AI)-generated large language model (LLM) tools in healthcare.
LLMs like OpenAI’s ChatGPT and Google’s Bard have the potential to make healthcare more efficient and effective, whether by connecting patients with information or helping providers with diagnosis and treatment.
“While WHO is enthusiastic about the appropriate use of technologies, including LLMs, to support healthcare professionals, patients, researchers and scientists, there is concern that caution that would normally be exercised for any new technology is not being exercised consistently with LLMs,” the organization said. “This includes widespread adherence to key values of transparency, inclusion, public engagement, expert supervision, and rigorous evaluation.”
“Precipitous adoption of untested systems could lead to errors by healthcare workers, cause harm to patients, erode trust in AI and thereby undermine (or delay) the potential long-term benefits and uses of such technologies around the world,” the statement continued.
Key considerations for risks of LLMs in healthcare
WHO said several critical concerns must be addressed if LLMs are to be used safely, effectively and ethically in healthcare.
While LLMs are good at generating responses that sound accurate and relevant, they are often partially or completely wrong in ways that could cause harm if relied on in healthcare. Because their output can sound convincingly confident and authoritative, LLMs are also open to abuse: bad actors could use them to generate and propagate dangerous disinformation about important health issues such as vaccines.
The data used to train LLMs can also introduce risks. Biased training data may produce misleading or inaccurate information that harms health, equity and inclusiveness. And training data may include information for which consent was never granted for use in AI systems, meaning these models may not adequately protect health data or other sensitive information.
“WHO proposes that these concerns be addressed, and clear evidence of benefit be measured before their widespread use in routine health care and medicine — whether by individuals, care providers or health system administrators and policy-makers,” the organization said.
WHO previously issued guidance on the ethics and governance of AI for health in 2021.