(The Hill) — Political momentum is building to regulate the spread of explicit, non-consensual deepfakes as the problem of digitally altered images goes from a potential threat to a reality.
Several bipartisan bills introduced in Congress aim to mitigate the spread of non-consensual explicit images created using artificial intelligence (AI) – a problem that plagues not only public figures and celebrities, but also ordinary people and even children.
“Last year was really something new and created huge problems,” said Ann Olivarius, founding partner at transatlantic law firm McAllister Olivarius, which specializes in race and sex discrimination cases.
The issue drew public attention in January when an explicit AI-generated image of Taylor Swift was circulated online, sparking outrage and leading lawmakers and the White House to pressure platforms to enforce rules to prevent the spread of such images.
While the spread of Swift's deepfake has brought attention to the rise of non-consensual AI pornography, the problem is much more widespread, forcing schools to grapple with new forms of cyberbullying and harassment as students create and spread deepfakes of their peers in largely unregulated spaces.
“This is affecting a lot of people in the public,” Olivarius said.
Even lawmakers are becoming victims. Rep. Alexandria Ocasio-Cortez (D-N.Y.), one of the lawmakers leading a bill to combat explicit deepfakes, said in an April interview with Rolling Stone that she too has been the target of explicit deepfakes made without her consent.
The issue has garnered support from lawmakers across the political spectrum: One bill, the DEFIANCE Act, is sponsored by Ocasio-Cortez and Senate Judiciary Committee Chairman Dick Durbin (D-Ill.), and the other, the Take It Down Act, is sponsored by Sens. Ted Cruz (R-Texas) and Amy Klobuchar (D-Minn.).
Olivarius said the support from both sides has been remarkable.
“Maybe there's finally something here that lawmakers can agree on and actually pass,” she said.
The two bills aim to tackle the problem from different angles. The DEFIANCE Act, introduced in March, would create a federal civil cause of action that would allow victims to sue individuals who create, distribute or solicit such deepfakes.
The Take It Down Act, introduced last month, would make it a federal crime to publish or threaten to publish digitally altered images without consent. It would also create a process through which victims can compel tech platforms to remove non-consensual, explicit deepfakes that depict them.
Emily Hampsten, a spokeswoman for Durbin, said the two bills are complementary and that his staff is in discussions with the offices of the other bill's sponsors.
While the bills have bipartisan support, they still face an uphill battle to pass, especially in the months leading up to a hotly contested election with control of the White House and both chambers of Congress at stake.
Senate Majority Whip Durbin brought the DEFIANCE Act up for a unanimous consent vote in June, but it was blocked by Take It Down Act co-sponsor Sen. Cynthia Lummis (R-Wyo.).
Stacey Daniels, a spokeswoman for Lummis, said the senator “supports the intent of the DEFIANCE Act” but is “concerned that the bill contains overly broad language that could inadvertently threaten privacy technologies, fail to protect victims, and stifle innovation.”
Daniels said Lummis' team is working with Durbin's office to resolve those concerns.
“Senator Lummis supports the Take It Down Act, a more tailored approach to hold accountable those who create or knowingly distribute deepfake pornography,” Daniels said in an email.
Olivarius said the civil remedies built into the DEFIANCE Act are “very powerful” because they give affected individuals the power to take action themselves, while the Take It Down Act is “much more limited.”
Victims' rights attorney Carrie Goldberg said the Take It Down Act is an “interesting new approach,” but noted potential hurdles in how it would be implemented as a criminal statute.
“I'm pretty skeptical of legislation that just gives power back to the government,” Goldberg said.
“Then it becomes a question of whether law enforcement will take it seriously,” she said.
At the same time, Goldberg said one of the goals of a bill like this is to show that such behavior is illegal, and that alone could deter would-be perpetrators.
She also said tech companies could argue that Section 230 of the Communications Decency Act, which protects platforms from liability for content posted by third parties, overrides the bill's notice-and-takedown provisions.
“But this is federal law that clashes a little bit with other federal laws, so it'll be interesting to see how it plays out,” Goldberg said.
Another bill combating non-consensual explicit deepfakes was introduced in May by Sen. Maggie Hassan (D-N.H.). It would make it a criminal offense to share deepfake pornographic images or videos without consent, and it would also create a civil right of action for victims to sue parties who share such images.
Olivarius particularly highlighted the dire, sometimes fatal, impact on women, citing examples of victims who died by suicide after doctored images of them were spread, and called on Congress to act on the issue.
“Society hasn't done much to show that many people care about women,” she said. “This [support for the bills] is unusual. I think it's a wonderful thing, and I hope it becomes law as soon as possible.”
But given the potential obstacles that Section 230 poses, Goldberg said Congress should prioritize repealing the controversial provision to help victims.
“The best way to address a lot of the harm that happens on platforms is for the platforms themselves to share the costs and responsibility,” Goldberg said.
“Power needs to be transferred to the people, and they need to be able to sue and demand content be removed from platforms,” she added.