Is generative AI safe for medicine? Yes, under professional supervision

AI News


Generative AI has been a hot topic lately, with Elon Musk and others sounding the alarm about its risks to society. Regardless of your position, the question arises: Is it safe for use in medical applications today? In an industry where accuracy can be life-or-death, can you trust it to interpret data and create new content in the correct format, from doctor's instructions to patient instructions and more?

The short answer is "yes, under professional supervision." Let's break down how generative AI works, its current strengths and limitations, and how it can be safely integrated into medical applications now and in the future.

How generative AI works

At its core, generative AI aims to mimic human thinking, albeit in a binary way, to create new and original content. At the macro level, we try to understand how humans think and act. For example, how do writers write, painters paint, and innovators innovate? Machine learning algorithms imitate that creative process by using large amounts of data to "train" a neural network in a specific domain. When the user provides a prompt, the network produces what is, in theory, an intelligent response.

This process is complicated, but in practice there are three parts to the generative AI equation: data, training, and the neural network.

Suppose you want to create population-health content for clinical use. You could hire people to conduct intensive research and manually create content to support precision medicine and personalized treatment plans. Alternatively, a generative AI can be prompted to retrieve information from a relevant database, such as a patient's electronic medical record, and compile the information. When the AI is trained on relevant subjects, it can draw connections to psychology, mental health, and social determinants of health, activating neural networks to reach a deeper understanding of medical content.

What could go wrong?

Just as our minds can't always connect the dots precisely, the same is true for generative AI. While the output may appear logical and correct, it may not be scientifically accurate. If you don't have access to experts who can verify the content, you may be spreading false information, which can be very dangerous in precision medicine and other use cases. For non-experts, this is especially problematic because they may not know what is true and what is not. Taking AI-generated content at face value can do more harm than good.

A better approach is to use generative AI in the medical field as an aid to professionals. For example, if you're a clinician looking for the latest information on Alzheimer's disease in order to create a care plan, you can prompt the model, review what it returns, discard anything you disagree with, and verify the accuracy of the rest.
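The verify-and-discard loop described above can be sketched in a few lines of Python. Everything here is illustrative: `generate_summary` is a hypothetical stand-in for any LLM call, its canned output is invented, and the lambda verifier is a toy rule standing in for a human clinician's judgment.

```python
def generate_summary(prompt: str) -> list[str]:
    """Hypothetical stand-in for a generative AI call; returns candidate statements."""
    return [
        "Donepezil is commonly prescribed for Alzheimer's symptoms.",
        "A miracle herb cures Alzheimer's in two weeks.",  # plausible-sounding but false
    ]

def clinician_review(statements: list[str], is_verified) -> tuple[list[str], list[str]]:
    """Split candidate statements into approved and discarded, per the verifier."""
    approved, discarded = [], []
    for s in statements:
        (approved if is_verified(s) else discarded).append(s)
    return approved, discarded

# The verifier here is a toy keyword check; in practice it is a qualified professional.
candidates = generate_summary("Latest care options for Alzheimer's disease")
approved, discarded = clinician_review(candidates, lambda s: "miracle" not in s)
```

The point of the sketch is the structure, not the code: nothing the model generates reaches the care plan without passing through an expert review step.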

As part of that process, you may find unexpected and surprisingly useful insights. We all have knowledge gaps that generative AI can fill by generating permutations and combinations of ideas that challenge our thinking. From a creative point of view, this is perfectly fine. As with the population health example, many insights can be gleaned from racial, ethnic, and societal information. However, be aware that these insights may not be scientifically validated or clinically proven when it comes to patient care.

Beware of biases, too. Generative AI inherits the biases of the underlying data and knowledge it learns from, such as social media interactions on platforms like Twitter and informational websites. ChatGPT, for example, is trained in part on a vast amount of Reddit content. As a rule of thumb, know that any bias that exists on the internet exists within these neural networks.

Making healthcare content production more efficient

Ultimately, generative AI helps adopters realize efficiency gains in healthcare content creation. For health insurance, for example, generative AI can be trained on an understanding of medical plans to help optimize reimbursement. It can also be used to assess a member's risk profile based on factors such as re-enrollment data.

Based on this information, users can create policy plans for different types of patients. What would the strategy be if the patients were healthy teens? How would it change if they were baby boomers with multiple comorbidities? What if they had a mental health condition? For population health, generative AI could reduce the time and cost of developing and validating care and action plans for specific patients. But again, clinicians need to know what is true and what is not. Failure to do so may result in harm.

It's important to note that at this point, the AI doesn't ask the questions; the user does. This means that at initial introduction, a subject matter expert or consultant who understands the nuances should ask the kinds of questions that frame meaningful responses. This push to draw better responses out of AI has led to the emerging field of "prompt engineering": the practice of interacting effectively with large language models, such as carefully constructing prompt sequences that elicit insightful content and produce the desired high-quality output.
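A minimal sketch of what prompt engineering means in practice: structuring a prompt so the model is constrained to cited, clinician-reviewable output. The field names, wording, and example sources below are illustrative assumptions, not a validated clinical template.

```python
def build_care_plan_prompt(condition: str, patient_profile: str, sources: list[str]) -> str:
    """Assemble a structured prompt that grounds the model in supplied sources."""
    context = "\n".join(f"- {s}" for s in sources)
    return (
        "You are assisting a licensed clinician. Using only the sources below,\n"
        f"draft a care plan outline for {condition}.\n"
        f"Patient profile: {patient_profile}\n"
        "Sources:\n"
        f"{context}\n"
        "Respond with numbered recommendations and cite a source for each.\n"
        "If the sources do not support a recommendation, say so explicitly."
    )

# Example usage with invented inputs; the resulting string would be sent to an LLM.
prompt = build_care_plan_prompt(
    "Alzheimer's disease",
    "78-year-old with hypertension",
    ["2023 neurology society guideline", "patient's electronic medical record"],
)
```

The design choice is the constraint at the end: instructing the model to admit when its sources don't support a claim gives the reviewing clinician a natural place to catch fabrication.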

Could this become a new AI career specialty in the future? Yes, but that's a topic for another day. For now, if you use generative AI, you have to know what you want to achieve; the tool should serve what you are trying to optimize, not the other way around.

The future of generative AI

Generative AI is a promising technology that has the potential to further increase engagement with chatbots in a variety of use cases. It can also be used to build chatbots faster, using large language models to create, inform, and reshape more human-like conversations.

We also believe that generative AI can be used to delve deeper into chatbot use cases like post-hospital discharge and explore the dynamic nature of engagement to improve patient follow-up. Generative AI can help you sketch what the next level of engagement might look like, feeding the right information into your chatbot design sessions. It can also reduce cycle time and provide samples that take designers to the next level.

As time goes on, one thing is certain: generative AI is getting smarter, transplanting intelligence from one model to another. It is not a replacement for humans. Unlike us, it cannot touch, taste, or feel; it has only two senses, sight and hearing. But it will assist white-collar workers in much the same way that robots have historically affected blue-collar workers.

This is both a challenge for society and a source of creativity and new growth. Everyone can be a creator, and some will eventually become generative AI natives who innovate in ways never thought possible. How it will play out is unpredictable, but much like smartphones and TikTok, it is limited only by our imagination and ingenuity.

By the way, I didn’t use ChatGPT or any other AI platform to generate this content.

Photo: Ole_CNX, Getty Images



