ChatGPT raises concerns for AI-driven infodemic in public health

AI News


OpenAI is an artificial intelligence (AI) research and development company that recently developed ChatGPT, a large language model (LLM). While previously developed LLMs can perform various natural language processing (NLP) tasks, ChatGPT is different: it is an AI chatbot that can hold human-like conversations.

Notably, just five days after ChatGPT’s release, its user count exceeded 1 million. Most users have tried ChatGPT to answer complex questions or generate short texts. Detecting plagiarism in text generated by ChatGPT is harder than in manually written text.

A recent study published in the journal Frontiers in Public Health traced the evolution of LLMs and evaluated how ChatGPT might affect future research and public health. The study aimed to stimulate discussion of ChatGPT’s role in medical research in light of the concept of an “AI-driven infodemic.”

Perspective: The Rise of ChatGPT and Large Language Models: A New AI-Driven Infodemic Threat in Public Health. Image credit: Mila Spinskaya Glashchenko / Shutterstock

Evolution of LLMs

Over the past five years, LLMs have grown exponentially in capability, enabling them to perform a wide variety of tasks. Before 2017, however, most NLP models were trained for a single specific task. This shortcoming was overcome by the development of self-attention network architectures, also known as transformers. In 2018, this concept was used to develop two innovative models: the Generative Pretrained Transformer (GPT) and Bidirectional Encoder Representations from Transformers (BERT).
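The self-attention mechanism behind transformers can be illustrated in a few lines. The sketch below (an illustrative single-head version using NumPy, not code from the study) shows the core idea: every token scores its relevance to every other token, and the output for each token is a weighted mix of all tokens. The matrix shapes and random inputs are assumptions for demonstration only.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention for a single head.

    X: (seq_len, d_model) token embeddings
    Wq, Wk, Wv: (d_model, d_k) learned projection matrices
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # token-to-token relevance
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax: rows sum to 1
    return weights @ V, weights                      # each output mixes all tokens

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                          # 4 tokens, embedding dim 8
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out, weights = self_attention(X, Wq, Wk, Wv)
print(out.shape)                                     # (4, 8)
```

Because every token attends to every other token in one step, a single model can be pre-trained once and reused across many tasks, which is what freed NLP from the one-model-per-task limitation described above.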

BERT and GPT achieved their generalization capabilities through a combination of unsupervised pre-training and supervised fine-tuning. This approach allows pre-trained linguistic representations to be applied to downstream tasks.

The GPT model evolved rapidly, and many versions were released, each trained on more text data with more parameters. For example, the third version of GPT (GPT-3) is 100 times larger than GPT-2 and contains 175 billion parameters. GPT-3 can generate text covering a wide range of areas but often produces biased text containing false facts. This is because many LLMs, including GPT-3, are designed to predict the next text element based on data available on the internet, and thus reproduce the biases in that data. A key remaining challenge was aligning LLMs with human values and ethical principles.
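The next-element prediction described above is the same principle as a classic statistical language model, just at vastly larger scale. The toy sketch below (an illustrative example, not from the study) predicts the next word as the one that most often followed the current word in its training text, which also shows concretely how a model inherits whatever patterns, and biases, its training data contains.

```python
from collections import Counter, defaultdict

# Toy "next-token" model: count which word follows each word in the
# training text, then predict the most frequent successor. LLMs such as
# GPT-3 do the same prediction task with transformers over web-scale
# data -- so regularities (and biases) in that data are reproduced.
corpus = "the cat sat on the mat and the cat slept".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most frequent successor of `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' ("the cat" occurs twice, "the mat" once)
```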

To address these issues, OpenAI developed ChatGPT, which incorporates 1.3 billion parameters trained using Reinforcement Learning from Human Feedback (RLHF). Because its training data extended only through 2021 and it lacked fact-checking, ChatGPT could generate inaccurate text; this was improved by integrating GPT-4 into ChatGPT. Although the current ChatGPT produces more reliable output, all of the tool’s limitations should be considered, especially when it is applied to medical research.

Assessing the Threat of ChatGPT in Public Health: The AI-Driven Infodemic

Researchers can use ChatGPT to help produce scientific papers. For example, the tool can suggest relevant titles for research papers, write drafts, or express complex scientific concepts in simple, grammatically correct English. The scientific community’s interest in ChatGPT is evidenced by the rapidly growing number of research papers about the tool.

Many authors are already using ChatGPT to write parts of their scientific papers. This highlights the fact that the tool became embedded in the research process even before ethical concerns were addressed and standard rules for its use were established.

LLMs can be tricked into producing text on controversial topics or misinformation. Because LLMs generate text that resembles human writing, this capability can be abused to create fake news articles and misleading content without readers being aware that the content was generated by AI.

Recently, some authors have emphasized the need for LLM detectors that can identify fake news. The current GPT-2 output detector is unreliable at identifying AI-written text when that text is generated by ChatGPT. As LLMs advance rapidly, detectors must be continually improved to curb malicious use.

Given the lack of accurate detectors, some precautions must be taken. For example, the 2023 International Conference on Machine Learning (ICML) banned the use of LLMs in submitted drafts. However, no tools exist to check compliance with this rule.

Many scientific journals have updated their author guidelines. For example, Springer Nature journals now specify that an LLM cannot be listed as an author and that any use of an LLM must be mentioned in the methods or acknowledgments section. Elsevier has implemented similar guidelines.

ChatGPT can be abused to generate fake scientific abstracts, papers, and bibliographies. Here, the Digital Object Identifier (DOI) system can be used to detect fabricated references. Scientists have noted that medical discoveries require years of research to validate before they can be used clinically, so fake information generated by AI tools can endanger people’s safety.
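One practical way to screen a bibliography for fabricated references is to ask the doi.org resolver whether each DOI is actually registered. The sketch below (an illustrative approach, not a method from the study) uses only the Python standard library; note that a resolvable DOI proves only that the identifier exists, not that the citing text represents the work honestly.

```python
from urllib.parse import quote
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

def doi_url(doi: str) -> str:
    """Build the doi.org resolver URL for a DOI string."""
    return "https://doi.org/" + quote(doi)

def doi_resolves(doi: str, timeout: float = 10.0) -> bool:
    """Return True if the DOI is registered (resolver redirects),
    False if the resolver reports it unknown (HTTP 404)."""
    req = Request(doi_url(doi), method="HEAD",
                  headers={"User-Agent": "doi-check-sketch"})
    try:
        with urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except HTTPError as err:
        return err.code < 400          # 404 -> DOI not registered
    except URLError as err:
        raise RuntimeError("network unavailable; cannot verify DOI") from err

# "10.1000/xyz123" is a placeholder DOI for illustration only.
print(doi_url("10.1000/xyz123"))
```

Fabricated references generated by an LLM typically carry DOIs that no registrar knows, so a batch of resolution failures is a strong signal that a bibliography deserves manual scrutiny.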

The coronavirus disease 2019 (COVID-19) pandemic had a major impact on health research, largely due to the rapid spread of information from preprint servers through social media, which influenced individual health choices. Information about COVID-19 spread primarily through social media, creating a phenomenon known as an infodemic. Infodemics can significantly affect medical decision-making in prevention and treatment strategies. The authors predict that AI-driven infodemic outbreaks will pose a significant public health threat in the future.

Written by

Dr. Priyom Bose

Priyom holds a Ph.D. in Plant Biology and Biotechnology from the University of Madras, India. She is a practicing researcher and an experienced science writer, and has co-authored several original research papers published in reputable peer-reviewed journals. She is an avid reader and amateur photographer.

Citations

To cite this article in your essay, paper or report, please use one of the following formats:

  • APA

    Bose, Priyom. (May 17, 2023). ChatGPT raises concerns for AI-driven infodemic in public health. News-Medical. Retrieved May 18, 2023 from https://www.news-medical.net/news/20230517/ChatGPT-raises-concerns-of-AI-driven-infodemic-in-public-health.aspx.

  • MLA

    Bose, Priyom. “ChatGPT Raises Concerns for AI-Driven Infodemic in Public Health.” News-Medical, 18 May 2023, https://www.news-medical.net/news/20230517/ChatGPT-raises-concerns-of-AI-driven-infodemic-in-public-health.aspx.

  • Chicago

    Bose, Priyom. “ChatGPT Raises Concerns for AI-Driven Infodemic in Public Health.” News-Medical. https://www.news-medical.net/news/20230517/ChatGPT-raises-concerns-of-AI-driven-infodemic-in-public-health.aspx. (Accessed May 18, 2023).

  • Harvard

    Bose, Priyom. 2023. ChatGPT raises concerns for AI-driven infodemic in public health. News-Medical, accessed May 18, 2023, https://www.news-medical.net/news/20230517/ChatGPT-raises-concerns-of-AI-driven-infodemic-in-public-health.aspx.


