WHO Encourages Caution As AI Becomes More Prevalent In Health Care


The World Health Organization (WHO) has expressed enthusiasm for the “appropriate” use of new technologies while urging safety and caution as automation becomes normalized in the health care sphere.

Artificial intelligence (AI)–generated large language model (LLM) tools like ChatGPT, BERT and Bard have gained much public attention for their use in health-related purposes. The WHO expressed enthusiasm for the “appropriate” use of these technologies, though it is calling for caution to protect and promote human well-being, safety and autonomy, and to preserve public health.

These LLM platforms have been rapidly expanding as users take advantage of features that imitate the understanding, processing and production of human communication. Their growing experimental use for health-related purposes is generating excitement around their potential to support users’ health needs, the WHO reported in a release in May.

If used appropriately, LLMs can support health-care professionals, patients, researchers and scientists. But there are risks, and the WHO stressed that these risks must be examined carefully when LLMs are used to improve access to health information or enhance diagnostic capacity, in order to protect users’ health and reduce inequity. There is concern that the caution normally exercised for any new technology is not being exercised consistently with LLMs, including widespread adherence to the key values of transparency, inclusion, public engagement, expert supervision and rigorous evaluation, according to the release.

Abrupt adoption of untested systems could lead to errors by healthcare workers, cause harm to patients, erode trust in AI and delay any potential long-term benefits or uses of these tools globally.

The concerns behind the WHO’s call for these technologies to be used in safe, effective and ethical ways include:

  • Data used to train AI may be biased, generating misleading or inaccurate information that could pose risks to health, equity and inclusiveness.
  • LLM platforms generate responses that can appear authoritative and plausible to an end user. They can also be incorrect or contain errors, especially in health-related responses.
  • The tools could be trained on data for which consent may not have been previously provided for such use, and they may not protect sensitive health data a user provides.
  • LLMs can be misused to generate convincing disinformation in the form of text, audio or video content that is difficult for the public to differentiate from reliable health content.

The WHO encouraged that these concerns be addressed, and that clear evidence of benefit be demonstrated, before such tools see widespread use in routine healthcare and medicine – whether by individuals, care providers or health system administrators and policy-makers.

Though further evidence is needed to support these concerns, a study published in JAMA Internal Medicine in April found that healthcare professionals preferred ChatGPT’s responses to patient questions over physicians’ responses.

In the cross-sectional study of 195 patient questions randomly drawn from a social media forum, a team of licensed healthcare professionals compared physicians’ and chatbot responses to patients’ publicly asked questions. The chatbot responses were not only preferred but were also rated significantly higher for both quality and empathy.

The study’s researchers said the results suggest AI assistants may be able to aid in drafting responses to patient questions.

This article originally appeared on Managed Healthcare Executive.
