“No Safety Rules”: Concerns grow as hateful AI-generated videos spread online



At first glance, they look like quirky video clips produced by artificial intelligence.

Among them is a hairy Bigfoot wearing a cowboy hat and a vest decorated with an American flag, sitting behind the wheel of a pickup truck.

“We're going to the LGBT parade today,” says the ape-like creature with a laugh. “You're going to love it.”

Things then take a violent and disturbing turn as the Bigfoot drives through a crowd of screaming people.

The clip, posted to the Americanbigfoot TikTok page in June, has drawn more than 360,000 views and hundreds of comments, most of them praising the video.

In recent months, similar AI-generated content has flooded social media platforms, openly promoting violence against LGBTQ+ people, Jews, Muslims and members of other minority groups, and spreading hatred.

While the origins of most of these videos are unknown, their spread across social media has sparked anger and concern among experts and advocates, who say Canadian regulations cannot keep pace with hateful AI-generated content and cannot properly address the risks it poses to public safety.

Egale Canada, an LGBTQ+ advocacy organization, says the community is worried about the rise of transphobic and homophobic misinformation on social media.

“These AI tools are being weaponized to dehumanize and discredit trans and gender-diverse people, and existing digital safety laws cannot address the scale and speed of this new threat,” Executive Director Helen Kennedy said in a statement.

The rapidly evolving technology gives bad actors a powerful tool to spread misinformation and hatred, and transgender individuals are disproportionately targeted, Kennedy said.

“From deepfake videos to the algorithm-driven amplification of hatred, the harms are not artificial. They're real.”

The LGBTQ+ community is not the only target, said Evan Balgord, executive director of the Canadian Anti-Hate Network. Islamophobic, antisemitic and anti-South Asian content created with generative AI tools is also circulating widely on social media, he said.

“When they create an environment where there's a lot of glorification of violence against these groups, it makes violence against those groups in person or on the street more likely,” Balgord warned in a telephone interview.

Canada's digital safety laws are already behind, and advances in AI have made things even more complicated, he said.

“There are absolutely no safety rules when it comes to social media companies. There's absolutely no way to take responsibility.”

The bill, which aimed to address harmful online content and establish a regulatory framework for AI, died when Parliament was prorogued in January, said Andrea Slane, a law professor at Ontario Tech University who has researched online safety extensively.

Slane said the government should revisit online harms legislation and reintroduce the bill “urgently.”

“I think Canada is in a situation where they really need to move,” she said.

Justice Minister Sean Fraser told The Canadian Press in June that the federal government is taking a “fresh” look at the online harms legislation but has not decided whether to rewrite it or simply reintroduce it. Among other things, the bill aimed to hold social media platforms accountable for reducing users' exposure to harmful content.

A spokesperson for the new federal Minister of Artificial Intelligence and Digital Innovation said the government takes the issue of AI-generated hateful content seriously.

Sofia Ouslis said existing laws provide “critical protections,” but acknowledged they were not designed to address the threats posed by generative AI.

“We need to understand how AI tools are used and misused and how to enhance guardrails,” she said in a statement. “The work is ongoing.”

That work includes reviewing existing frameworks, monitoring court decisions and “listening closely to both legal and technical experts,” Ouslis said. She added that Prime Minister Mark Carney's government is also committed to making the distribution of non-consensual sexual deepfakes a criminal offence.

In this rapidly moving space, she said, Ottawa wants to learn from the European Union and the United Kingdom.

Slane said the European Union is ahead of others on AI regulation and digital safety, but despite being on the “front line,” she feels it needs to do more.

Experts say it's particularly difficult to regulate content distributed by the social media giants because those companies are not Canadian. Another complicating factor is the current political climate south of the border, where U.S. tech companies are shedding regulations and restrictions and “feel more powerful and less accountable,” Slane said.

Generative AI has been around for several years, but recent months have brought “breakthroughs” that make it easy to create high-quality videos using tools that are mostly free or available at very low cost, says Peter Lewis, Canada Research Chair in Trustworthy Artificial Intelligence.

“I have to say that it's really accessible to most people who have technical knowledge and access to the right tools right now,” he said.

Lewis, who is also an assistant professor at Ontario Tech University, said large language models such as ChatGPT have implemented safeguards to rule out harmful or illegal content.

But more needs to be done to build such guardrails in the video space, he said.

“You and I could probably watch a video and tell that it's scary,” he said, “but it's not always clear that an AI system has the ability to reflect on what it has created.”

Lewis said he is not a legal expert, but believes existing laws could be used to combat the kind of online glorification of hatred and violence seen in the Americanbigfoot videos. However, he said the rapid development of generative AI and the wide availability of new tools “call for new technology solutions,” as well as collaboration between governments, consumers, advocates, social platforms and AI app developers.

“If these things are uploaded, you need a really robust, responsive flagging mechanism to be able to remove them from the internet as quickly as possible,” he said.

Lewis said using AI tools to detect and flag such videos would help, but the problem remains unsolved.

“Due to the nature of how these AI systems work, they are probabilistic and therefore we don't catch everything.”

This report by The Canadian Press was first published Aug. 10, 2025.

Sharif Hassan, The Canadian Press




