WHO concerns about AI tools in healthcare



Geneva: The World Health Organization (WHO) has called for caution in the use of artificial intelligence (AI)-generated large language model tools (LLMs) for health-related purposes, in order to protect human well-being, safety, autonomy, and public health.

LLMs such as ChatGPT, Bard, and BERT mimic human communication and are among the most rapidly expanding platforms. Their increasing use in medical settings has generated enthusiasm for their potential to support health needs.

However, before LLMs are used to improve access to medical information, aid decision-making, or enhance diagnosis in resource-poor settings, it is essential that they undergo a thorough risk assessment.

While WHO welcomes the appropriate use of technology, including LLMs, to assist healthcare professionals, patients, researchers, and scientists, the caution normally applied to new technologies is not being exercised consistently with LLMs. There is concern that the rapid adoption of untested systems could lead to errors by healthcare professionals, harm to patients, and a loss of trust in AI, ultimately hindering or delaying the realization of the long-term benefits of these technologies around the world.

WHO recommends caution in using LLMs to improve access to health information, serve as decision-support tools, or enhance diagnostic capacity, especially in resource-poor settings. The risk that biased training data will produce misleading or inaccurate information, and that LLMs will generate incorrect or erroneous health-related responses, should be carefully considered.

Additionally, LLMs may be trained on data obtained without prior consent, potentially compromising the protection of sensitive user-provided information such as health data. There are also concerns that LLMs could be misused to spread compelling disinformation, making it difficult for the public to distinguish trustworthy health content from false information.

WHO supports the use of technology, including LLMs, to assist healthcare workers, patients, researchers, and scientists, but with a consistent focus on transparency, inclusiveness, public engagement, expert supervision, and rigorous evaluation.

WHO proposes that these concerns be addressed, and clear evidence of benefit be established, before LLMs are broadly integrated into routine health care and medicine, whether LLMs are used by individuals, caregivers, or health-system administrators and policy-makers.

WHO also emphasizes the importance of applying ethical principles and good governance, as set out in its guidance on the Ethics and Governance of Artificial Intelligence for Health. The principles included there are: protecting autonomy; promoting human well-being, safety, and the public interest; ensuring transparency and intelligibility; fostering responsibility and accountability; ensuring inclusiveness and equity; and promoting AI that is responsive and sustainable.




