Civil liability may be the answer to AI-generated disinformation on social media


  • AI makes it easier than ever to spread misinformation and disinformation on social media.
  • Legal experts argue that real change can only come through new laws or through the social media companies themselves.
  • One expert called for reforming internet law to expose social media companies to civil liability if they fail to take certain precautions.

In the age of generative artificial intelligence, ordinary people are arguably more likely than ever to fall victim to disinformation and misinformation on social media.

Disinformation and misinformation thrive on social media platforms. Any time a major event enters the public consciousness, false information spreads rapidly online. Think of the COVID-19 pandemic and the 2016 and 2020 US presidential election cycles.

AI-generated deepfakes are exacerbating the problem, making it easier than ever to spread false information on social media.

Legal experts told Business Insider that the only real way to combat misinformation and disinformation on social media is through new federal laws or for the tech companies behind the platforms to step up their self-regulation efforts.

“AI will mean that it's not just words that spread misinformation on social media, but videos, photos and audio recordings,” said Barbara McQuade, a former U.S. attorney and author of “Attack from Within: How Disinformation Is Sabotaging America.”

McQuade, a University of Michigan law professor, believes new laws will be needed to address the issue because “this is a new technology that hasn't existed before.”

“We may be reaching a point of awareness where people are starting to understand the risks and the dangers,” McQuade said.

A recent federal assessment compiled by the Department of Homeland Security warned of the threat that AI poses to the 2024 US presidential election.

“As the 2024 election cycle progresses, generative AI tools could provide domestic and foreign threat actors with increased opportunities to interfere by exacerbating emergent events, disrupting electoral processes, or attacking election infrastructure,” the analysis obtained by ABC News stated.

Social media companies are protected from civil liability under US law

Social media has been largely unregulated since its inception nearly three decades ago. In the United States, tech giants like Meta, X, and TikTok are protected from civil liability related to user-posted content and the companies' content moderation practices under Section 230 of the Communications Decency Act of 1996.

“The law says that companies are immune from civil liability,” McQuade explained, “and that probably made sense in 1996 when the goal was to encourage innovation and investment. And yet, almost 30 years later, we're seeing some of the collateral effects of this unregulated sector.”

So why has the government had such a hard time tackling the problem of disinformation and misinformation on social media head-on? Legal experts say it has to do with First Amendment concerns, pushback from big tech companies, and a lack of political will.

“It's hard to write laws, it's hard to define the terms misinformation and disinformation, it's hard to agree on what the appropriate interventions are, and I think it's hard to craft something that doesn't violate the First Amendment,” said Gautam Hans, a law professor at Cornell University and vice director of the First Amendment Clinic.

“Any kind of regulation that targets speech faces very difficult constitutional obstacles, and the challenge is that you have to define disinformation and misinformation in a way that doesn't sweep in speech protected by the First Amendment,” Hans said.

Hans said he believes there is a “general unease about lawmakers and government officials proposing things that could be considered Orwellian in their attempts to create regulations for protected speech.”

“So I think most politicians are aware that being seen as a suppressor of speech is damaging to their reputation,” he said.

Hans also pointed out that “misinformation benefits certain political actors.”

Hans said he believes any remedy for misinformation and disinformation on social media will likely be found not in the realm of law, but through the private practices of tech companies themselves.

“Given the constitutional issues surrounding legislative and regulatory intervention, I think that's likely and would be more effective in the long run,” he said.

Section 230 has been hotly debated for many years.

McQuade argued that social media companies need incentives to do more to self-regulate and combat misinformation and disinformation.

“I think we need to apply public pressure through consumers to change behavior or through federal legislation,” McQuade said.

McQuade has proposed amending Section 230 to hold social media companies liable under certain circumstances.

“A better way to regulate social media and online content may be to focus on process rather than content, because content is a very tricky thing to deal with in terms of First Amendment protections,” the former federal prosecutor said, adding that “regulating some processes could include things like algorithms.”

“I'm proposing that Section 230 could be amended to provide for civil liability, i.e. monetary damages, if social media companies fail to take certain precautions,” she said.

McQuade said these precautions could involve disclosing how algorithms and personal data are used, requiring users to label AI-generated material, and removing bots that “amplify disinformation.”

“So I think it would be a way to kind of put pressure on social media companies to ensure compliance by exposing them to legal liability if they don't comply with certain conditions,” McQuade said.

This would inevitably lead to legal challenges, McQuade said, and “whether these laws apply will be challenged in court, which is probably going to be necessary.”

“Information is a critical resource, especially in a democracy,” McQuade said, “and I think we can all agree that when misinformation spreads, it gets in the way of good government.”

Section 230 has come under intense scrutiny for years from both Republican and Democratic politicians.

Former President Donald Trump and other Republicans say the law gives big tech companies too much power to censor conservative voices, while President Joe Biden and other Democrats say it doesn't do enough to combat hate speech.

Biden doubled down on his calls for reform of Section 230 in a Wall Street Journal op-ed last year.

“We need bipartisan action from Congress to hold Big Tech accountable,” Biden said in the op-ed. “There's been much talk about creating a commission. Now is the time to act and get something done.”

But big tech companies scored a major victory last year when the US Supreme Court ruled in favor of Twitter and Google in a lawsuit alleging that the companies “aided and abetted” terrorist attacks.

The decision, written by one of the court's conservative justices, did not wade into the fight over Section 230.

Major social media companies have their own misinformation policies

Many major social media companies, including Meta, TikTok, and X, have their own policies when it comes to dealing with misinformation and disinformation.

For example, Meta, which owns Facebook, Instagram, and Threads, said it would remove misinformation on its website that “may lead to a direct risk of imminent physical harm.”

“We will also remove content that may directly interfere with the functioning of the political process, as well as certain media that is highly deceptive and manipulative,” Meta said.

Meta is focused on “slowing the spread of hoaxes and viral misinformation” and requires users to disclose any content they post that contains “digitally created or altered photorealistic video or realistic audio” using its “AI disclosure tool,” threatening penalties for failure to do so.

“We may also label certain digitally created or altered content that poses a particular risk of misleading people on matters of public importance,” Meta said.

“To date, we have not observed any new GenAI-driven tactics that would impede our ability to disrupt the hostile networks behind the scenes,” Meta said in its May 2024 Adversarial Threat Report.

TikTok says it doesn't allow “harmful misinformation” on its platform and has “strict policies around specific types of misinformation, including health care, climate change, election misinformation, misleading AI-generated content, conspiracy theories, and public safety issues like natural disasters.”

Social media site X, formerly known as Twitter, says on its website that users must not “share synthetic, manipulated, or out-of-context media that may deceive or confuse people and cause harm ('misleading media').”

“Additionally, we may label posts that contain misleading media to help people understand the veracity of the post and provide additional context,” X said.

But these policies are not always enough.

Earlier this year, a graphic AI-generated image of Taylor Swift went viral on X, with the post receiving 45 million views before being removed after 17 hours.

Elon Musk-owned X blocked searches for the pop star following the incident.
