Joe Rogan and Justin Trudeau in a fake video. (YouTube)
Images and videos generated by artificial intelligence (AI) have gone viral on social media, sparking debate about how much liability the companies that run these platforms should bear when users are deceived.
A recent AI-generated image of the Pope in a puffy white coat and an AI-generated video of an interview between popular podcaster Joe Rogan and Prime Minister Justin Trudeau are just two examples of content that users have mistaken for the real thing.
AI expert Ritesh Kotak said these platforms have a role to play in ensuring that certain types of information and content are not used for malicious purposes.
“Obviously there are parody accounts and things that are used for fun. But on the flip side, this opens up the whole discussion of ‘Did that person actually agree to that?’ And if they’re in the public eye, even parody accounts can be used to lure or trick someone into thinking ‘this is legitimate.’”
He points out that even when something is labelled as AI-generated content, that label can be difficult to spot.
“Take the interview with Joe Rogan and Prime Minister Justin Trudeau posted on YouTube. If you look at the title, it just says ‘Interview with Joe Rogan and Prime Minister Justin Trudeau.’ You really have to click into the description,” Kotak explained.
YouTube relies on a combination of people and technology to enforce its policies. Its misinformation policy prohibits content that has been technically manipulated or doctored in a way that misleads users.
In the most recently reported quarter, Q3 2022, 94.5% of the videos that violated this policy were detected by YouTube’s automated flagging systems, and the platform removed more than 121,000 videos for violating its misinformation policy.
YouTube allows content that provides sufficient EDSA (educational, documentary, scientific, or artistic) context, such as basic facts about what is happening in the content.
Regarding the fake Joe Rogan and Trudeau video, YouTube said it did not violate its policy because the video’s description stated that the voices in it were fake.
A TikTok spokesperson told CityNews that the platform uses a combination of technology and moderation teams, including 40,000 safety professionals, to review and remove content that violates its Community Guidelines.
TikTok will release updated Community Guidelines on April 21. They include rules on how synthetic media is handled and define it as content created or modified by AI technology.
Its synthetic media policy prohibits manipulated media that “distorts the truth of events in a way that could cause significant harm to the community or society.”
Content that impersonates an individual will also be removed, whether or not it is synthetic.
“Like many technologies, advances in synthetic media open up both exciting creative opportunities and unique safety considerations, and we are committed to responsible innovation,” the TikTok spokesperson said.
Meta, which owns Facebook and Instagram, has a specific manipulated media policy under which it removes content that meets the following criteria:
- It has been edited or synthesized, beyond adjustments for clarity or quality, in ways that are not apparent to an average person and that would likely mislead someone into thinking the subject of the video said words they did not actually say.
- It is the product of artificial intelligence or machine learning that merges, replaces, or superimposes content onto a video, making it appear authentic.
This policy does not apply to parody or satirical content.
Meta adds that any audio, photo, or video will be removed from its apps if it violates any of its other Community Standards, including those covering nudity, graphic violence, voter suppression, and hate speech.
Meta also said it believes AI can itself be a tool for detecting harmful content. The company launched the Deepfake Detection Challenge to accelerate the development of new methods for detecting deepfake videos.
Kotak said deepfakes can be devastating.
“I have dealt with individuals who have had deepfakes created of them, and it is very devastating.”
The technology can also be used to create pornographic material.
“It has a very devastating human impact on the individuals who are harmed, and who are then harmed again by the process [of getting the content removed].”
To avoid being fooled by AI-generated content, Kotak said, look to trusted sources.
“To see if what you’re looking at is actually accurate, if something is claimed to be true, check multiple sources. Don’t take it at face value just because it appears on a particular platform,” Kotak said.
“Unfortunately, the world we live in now is inundated with this kind of messaging, and this kind of fake content certainly exists. It is important to be aware that the information you see may not be accurate. Do your homework, visit reliable sources, and make an informed decision.”
As for new AI-powered chatbots such as ChatGPT, the Office of the Privacy Commissioner of Canada announced this week that it will begin investigating OpenAI, the company behind the chatbot.
“AI technology and its impact on privacy is a priority for my office,” said Privacy Commissioner Philippe Dufresne. “We need to keep up with, and stay ahead of, fast-moving technological advances, and that is one of my key areas of focus as Commissioner.”
The investigation was launched in response to a complaint alleging that personal information was collected, used, and disclosed without consent.