Deceitful AI Videos Mislead Seniors on Important Health Issues



I had never received an email in Vietnamese before. My request for an interview had been written in English, and the channel I had reached out to made English-language videos, but the reply I received was a simple question written in Vietnamese: “What can we help each other develop?”

The man in the videos was white, but of course, he was never real in the first place.

There is a rapidly spreading plague of videos on YouTube aimed at older adults in search of medical advice. As their bodies age, they want to know the secrets to eating right, exercising effectively, and maybe regaining the energy they had between the sheets decades earlier, and a new industry has set up shop on Google’s video platform to supply solutions, even if they are made up. Unlike with the quacks of old, nothing is being sold here. Your attention is what puts money into their pockets.

Hallucinated medicine

It was after delivering a lecture to the McGill Community for Lifelong Learning, a group of senior citizens with a thirst for knowledge, that I was asked about Senior Secrets.

Every few days, the YouTube channel with 321,000 subscribers posts a new video with “science-backed health tips, surprising remedies, and powerful longevity secrets that most people over 60 have never been told.” The channel’s icon is a cartoonish grandpa with a finger on his smiling lips, swearing you to secrecy, and the thumbnails advertising each video are emblazoned with red and yellow banners, like crime scene tape. “Goodbye old age!” screams the latest. “Never take this after 65!”; “Just 1 cup before bed repairs your eyes overnight”; “99% of seniors don’t know: huge mistake.” One of the channel’s trademarks is opening a video title with “Over 60?”.

I began watching their most popular video, in which a top heart surgeon allegedly tells you to skip walking and do five other exercises instead—a video posted two months ago that has accumulated a whopping 3.3 million views. Tech-savvy consumers will recognize the telltale anonymity of these videos, but for older adults who may be less familiar with how the sausage is made, it’s worth pointing out a few red flags. There is an overreliance on stock footage, meaning video clips professionally shot to be used by just about anyone. These shots lack character: they display a blandness, a sort of vanilla flavour that is immediately noticeable to the trained eye. The stock footage here is supplemented by simplistic cartoon drawings that jiggle left and right, and the voiceover narration sounds like a youngish man from North America. Except that I don’t believe this is a recording.

The voiceover was likely generated by artificial intelligence (AI). It’s very good, but it’s a little monotonous and, once in a while, the emphasis is slightly off. In fact, this entire video appears to be constructed from AI-generated parts. In and of itself, this doesn’t mean the information is bad. If the text was written by a human but the video was assembled using AI, the advice could theoretically be sound. So, I decided to look up the “groundbreaking” 2024 study out of Copenhagen that this entire video hangs on, and I found that it does not exist. The Scandinavian Journal of Medicine & Science in Sports does have a volume 34, issue 3, as listed in the video description, but the article does not appear in it. I checked the journal’s website and I did a web search for the title of the article. It’s fake.

Generative AI has been caught on multiple occasions hallucinating documents that do not exist. The MAHA Report released by the White House in May was riddled with hallucinated citations; meanwhile, librarians are now dealing with requests for books and articles that do not exist, and judges are finding hallucinated cases in legal documents as more people uncritically trust AI to answer their questions and do work on their behalf. Now, fake papers are being cited in YouTube videos aimed at seniors.

Senior Secrets is not the only channel delivering AI hallucinations to a hungry audience: I found dozens of similar channels, with names like Senior Book, Senior Wellness, Dr. Reeves, Ageless Vitality, DR. NERITA, and WISE ADVICE. With its 17+ million total views, Senior Secrets is merely the tip of the AI iceberg, but unlike our world’s actual icebergs, this one is quickly growing. It’s actually more of a fatberg—those masses of wet wipes and grease that clog up our sewers—than something that benefits the world.

I picked four such channels and checked every scientific reference listed in their most popular videos to see if they existed. Out of 65 references, five were real. I was unable to find the other 60. As with the Copenhagen non-study, the journals, volumes, and issues were usually dead-on: the AI simply inserts fake papers into real pages. Occasionally, a journal was made up. Often, the only authors listed were departments or institutes (like “Mayo Clinic Center for Aging” or “British Columbia University Exercise Science Department”), which is highly unusual and should serve as a red flag. People write papers, not departments.
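Readers who want to vet a suspicious citation themselves can automate the first pass. Below is a minimal sketch, in Python, that queries the free, public Crossref API for a cited title; the function name and the example title are my own illustrative inventions, and a missing Crossref match should always be confirmed against the journal’s own website, as I did.

```python
import requests

def find_citation(title: str, rows: int = 5) -> list[dict]:
    """Search Crossref for a cited title; returns the closest candidate matches.

    An empty list, or only loosely related titles, suggests the citation
    may be fabricated and deserves a manual check on the journal's site.
    """
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return [
        {
            "title": (item.get("title") or ["<untitled>"])[0],
            "journal": (item.get("container-title") or ["<unknown>"])[0],
            "doi": item.get("DOI"),
        }
        for item in items
    ]

# A hypothetical title of the kind these videos cite
for hit in find_citation("Resistance training and longevity in adults over 60"):
    print(f'{hit["title"]} | {hit["journal"]} | {hit["doi"]}')
```

Crossref indexes most of the biomedical literature, so a paper that returns nothing there, nothing in the journal’s archive, and nothing in a general web search almost certainly does not exist.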

A minority of these channels list an email address. I reached out to five of them, first with an interview request, then with a list of fake citations they had published in their videos. Only one wrote back. The channel Senior Power Daily, which has released an astounding 304 videos since it began posting content on August 8 of this year, lists a Gmail address as a contact, and a “Nguyên Anh Phan” replied with the above-mentioned, business-friendly Vietnamese invitation to collaborate. I replied, but they stopped responding.

While the videos’ aging audience may believe they are watching content made by English-speaking North American healthcare professionals, what they are likely seeing is manufactured halfway around the world by content farms.

Made in Asia

Detective work often hinges on a single careless mistake, because even someone who diligently covers their tracks is likely to slip up at some point. I ran a dozen of these YouTube channels through MW Metadata, an online tool that extracts all kinds of information from a channel’s videos, including their geolocation. Most of the videos had no geotags—the uploader did not indicate where the video had been made. But here and there, I saw slip-ups. Most of the videos on the Senior Wellness channel listed “Hoa Kỳ” as their location, which is Vietnamese for “United States,” and one video from July was marked as “Ấn Độ,” Vietnamese for “India.” Multiple French-language channels named after fake doctors were geotagged “Pháp,” Vietnamese for “France.”
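The geotag itself is data YouTube exposes publicly. For the curious, here is a minimal sketch of how one might pull it directly from the YouTube Data API v3, which tools like MW Metadata appear to build on; it assumes you have your own API key, and the function name and placeholder video ID are mine. Most videos will return nothing, which is exactly why the occasional slip-up is so revealing.

```python
import requests

API_KEY = "YOUR_YOUTUBE_DATA_API_KEY"  # assumption: a key with YouTube Data API v3 enabled

def video_geotag(video_id: str) -> dict | None:
    """Return the uploader-supplied recording location for a video, if any."""
    resp = requests.get(
        "https://www.googleapis.com/youtube/v3/videos",
        params={"part": "recordingDetails", "id": video_id, "key": API_KEY},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json().get("items", [])
    if not items:
        return None  # video not found
    # 'location' holds latitude/longitude only if the uploader set a geotag
    return items[0].get("recordingDetails", {}).get("location")

print(video_geotag("VIDEO_ID_HERE"))  # e.g. {'latitude': ..., 'longitude': ...} or None
```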

We can use this metadata to see where on a map a video was uploaded from, but the resulting GPS coordinates can be misleading. A large number of AI videos masquerading as American have GPS coordinates that took me near the now-extinct mining camp of Abbeyville, Colorado, roughly in the middle of that state. The reason is that these coordinates are derived, via imperfect shortcuts, from the Internet protocol (IP) address the uploader is using. On top of this, the uploader is likely using a VPN—a virtual private network, which lets them appear to be browsing from a different country, like the United States or France—so the GPS coordinates inferred from the IP address can be wrong. Indeed, a farm in Potwin, Kansas has received angry visitors because bad actors on the Internet were erroneously traced to that exact spot by people following their digital footprint.
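To see why, consider how IP-based geolocation works under the hood. The sketch below uses MaxMind’s free GeoLite2 database through the geoip2 Python package (my choice of provider is illustrative; other tools differ): the lookup returns coordinates plus an accuracy radius, and when the database only knows the country, those coordinates default to a generic point near its middle, which is how an extinct Colorado mining camp or a Kansas farm ends up taking the blame.

```python
# pip install geoip2 -- the free GeoLite2-City.mmdb database is downloaded separately from MaxMind
import geoip2.database

def locate_ip(ip: str, db_path: str = "GeoLite2-City.mmdb") -> dict:
    """Best-effort IP geolocation; the result is an area, never a street address."""
    with geoip2.database.Reader(db_path) as reader:
        rec = reader.city(ip)
        return {
            "country": rec.country.iso_code,
            "lat": rec.location.latitude,    # may be a generic country/region centroid
            "lon": rec.location.longitude,
            "radius_km": rec.location.accuracy_radius,  # how fuzzy the guess is
        }

# Google's public DNS resolver, standing in for an uploader's address
print(locate_ip("8.8.8.8"))
```

A large accuracy radius means the database is effectively saying “somewhere in this country,” and a VPN endpoint makes even the country unreliable.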

What we are witnessing with channels like Senior Secrets is the work of content farms, likely based in Vietnam. Sitting in front of dozens of computers are people with no formal training in science or medicine who write prompts for generative AI platforms like ChatGPT and Gemini. The AI creates scripts, animations, thumbnails, voiceover narration, fake scientific papers; and these made-up elements are mashed together in a video that gets uploaded to YouTube.

Parallel to this, we see AI-generated videos mimicking real-life influencers—like Quebec’s own Dr. Alain Vadeboncoeur, who writes for the magazine L’actualité—and selling a supplement, like CBD. If you’re not paying close attention, it looks like him and sounds like him, and he is telling you exactly what to buy to improve your health.

Speaking of Dr. Vadeboncoeur, it only took me a few minutes to find French-language channels just like Senior Secrets, hosted by fictitious physicians—some old and wise, others young and fit—named Dr. Thomas Durand, Dr. Camille Dubois, and Santé avec le Dr Joseph, the latter having accumulated a stunning 5 million views. On this channel, one video’s geotag took me to a dense urban area in Lahore, Pakistan, possibly another slip-up. A video on the channel of “Dr. Marc Belland” has the AI-generated host speaking English with French subtitles. One commenter remarked in French that this urologist is supposed to be in France, so why is his voice dubbed into English? The answer, of course, is that he is a computer creation.

YouTube has invited AI into its ecosystem. Last summer, the company made a minor update to its policy, communicating that it would go after “mass-produced and repetitious content.” But AI content, as long as it doesn’t meet this standard of “mass production” and “repetitiveness,” is allowed. The more people watch it, the more money is made in ad revenue for whoever owns the channel, because ads play before, during, and after many videos. Welcome to monetized AI slop.

Targeting older adults is particularly insidious. Not only are they less likely than teenagers to understand how good AI has become at mimicking us, but the visual and hearing impairments common at that age make detecting the subtle signs of AI even harder. Imagine watching one of these videos on an old smartphone while dealing with hearing loss and macular degeneration, trying to spot whether the doctor onscreen has hair that is slightly too smooth or a voice that is a bit out of sync with his mouth. It’s practically impossible.

And that is without considering newer AI models like Sora and Nano Banana Pro that generate videos often indistinguishable from reality. Even young, tech-savvy people struggle to tell whether an apparently leaked photo from the set of the next Avengers movie or an upcoming Doctor Who special is genuine or AI-generated. They are reduced to scrutinizing the slightest pixel that looks wrong and imagining they have found the telltale sign of AI. The photos are likely to be fake, but the online sleuths themselves might also be hallucinating in their quest to catch a glitch in the matrix. In the case of movie set leaks, the damage is minimal; but when it comes to health advice, the harm can be significant.

One video from Senior Book claims that adding flaxseed to your daily oatmeal has the same effect on your blood pressure as medication, without the nasty side effects, according to a University of Toronto study that does not appear to exist. (A meta-analysis of multiple trials on this question, including some from Canada, does not list a study matching the criteria mentioned in the video, and its actual conclusion is that consuming whole flaxseeds may reduce blood pressure; none of the trials pitted the seeds against actual medication.) Another video from the same channel scares the elderly away from eating healthy vegetables because the AI claims they increase the risk of a stroke, citing a Dr. Mei Tanaka and a paper that I could not find. And in a Senior Secrets video on juices claimed to heal your vision while you sleep, the calm AI voice casually mentions that “most doctors won’t tell you this because, let’s be honest, there’s no money in natural remedies.” Not only does this ignore the financial behemoth that is the wellness industry, but it teaches older adults to distrust their doctor.

We are facing a crisis in which reality and hallucinated fantasy have become indistinguishable. The kinds of videos made for Senior Secrets will only improve as their content-farm manufacturers switch to ever more realistic video generators. Do not trust random videos for health information. Make sure the host is human and credentialed. Look up their medical license on the website of their medical college to see if they exist. Seek out their appearances on legitimate shows that prove they are real. Put more trust in in-person interactions than in what you see online. Bring your health questions to your doctor, if you have one. Rely on professional orders and associations to find specialists who know the academic literature in their field and can give you evidence-based advice. Develop the healthy reflex, when watching a video from a source unknown to you, of asking yourself, “Could this be AI? Is this voice real?”

This technology will need to be better regulated before our collective grasp on reality slips. For now, we must all remain vigilant.

Take-home message:
– More and more videos offering medical advice to older adults on YouTube are entirely made using generative AI, from their script to the voiceover narration, and they cite scientific papers that do not exist but that superficially look real
– Many of these videos appear to be made by content farms in Vietnam
– Because they are not yet made with cutting-edge AI tools, these videos can still be recognized as fake: they feature incorrect anatomy, unnatural-looking people, somewhat monotonous voices, and gibberish writing on screen
– Do not trust random videos online for health information


@jonathanjarry.bsky.social


