Microsoft calls for new legislation on deepfake scams and AI-generated sexual abuse images



Happy Tuesday! I'm Gerrit De Vynck, a reporter covering Google and artificial intelligence, filling in for Cristiano today. Send your news tips to gerrit.devynck@washpost.com.

Tech giant Microsoft is calling on Congress to pass legislation that would make it illegal to use AI-generated voices or images to commit fraud and require AI companies to develop technology to identify fake AI images created by their own products.

The recommendations are part of a 50-page document released by Microsoft on Tuesday that lays out a broader vision for how governments should approach AI.

As lawmakers and regulators across the country discuss how to regulate AI, companies developing the new technology have issued a number of proposals for how politicians should handle the industry.

Microsoft has long lobbied governments on issues that affect its business, and by aggressively pushing for regulation it has sought to position itself as an active and helpful player shaping the debate and its ultimate legislative outcomes.

Smaller technology companies and venture capitalists are skeptical of the approach, accusing big AI companies like Microsoft, Google and OpenAI of trying to pass legislation that would make it harder for startups to compete with them. Supporters of the legislation, including California politicians who are leading the nation in passing broad AI bills, say the government's early failure to regulate social media allowed problems like cyberbullying and misinformation to flourish unchecked.

“Ultimately, the danger is not in moving too fast, but in moving too slow, or not moving at all,” Microsoft President Brad Smith wrote in the policy document.

In the document, Microsoft called for the enactment of a “deepfake fraud law” that would specifically make it illegal to use AI to deceive people.

As AI gets better at generating voices and images, scammers are already using it to trick people into sending money to their loved ones. Other tech lobbyists argue that existing anti-fraud laws are enough to crack down on AI scams and that the government doesn't need to enact additional legislation.

Microsoft split with other tech companies on a separate issue last year when it suggested the government should create an independent agency to regulate AI, while others argued the Federal Trade Commission and the Justice Department already have the authority to do so.

Microsoft also called on Congress to require AI companies to build “provenance” tools into their AI products.

AI images and audio are already being used around the world for propaganda and to mislead voters, and AI companies are working on developing technology to embed hidden signatures into AI images and videos that can identify whether the content is AI-generated or not. However, deepfake detection is notoriously unreliable. And some experts question whether it will ever be possible to reliably separate AI content from real images and sounds.

Microsoft said states and Congress should also update laws addressing the creation and sharing of child sexual abuse material and nonconsensual intimate images. AI tools are already being used to create sexual images of people without their consent, as well as sexual images of children.

Government Scanner

Federal court rules US Border Patrol must get warrant before searching cell phones (TechCrunch)

Google's Anthropic AI deal draws scrutiny from U.K. regulators (Bloomberg)

New US Commerce Department report encourages 'open' AI models (TechCrunch)

Hill Happenings

Senators turn to online content creators to push bill (Taylor Lorenz)

Low-income families lose internet service as Congress eliminates discount program (Ars Technica)

Inside the Industry

Trump v. Harris is splitting Silicon Valley into opposing political camps (Trisha Thadani, Elizabeth Dwoskin, Nitasha Tiku, Gerrit De Vynck)

TikTok has a Nazi problem (Wired)

Amazon paid about $1 billion for Twitch in 2014, and it's still losing money. (Wall Street Journal)

Fraudsters Use Meta's Proprietary Tools to Target Middle Eastern Influencers (Bloomberg)

Competition Watch

Adobe and Canva are losing users to ByteDance's CapCut, especially on TikTok (Bloomberg)

Websites are blocking the wrong AI scrapers because AI companies keep making new ones (404 Media)

Trending

How Elon Musk came to support Donald Trump (Josh Dawsey, Eva Dou, Faiz Siddiqui)

A Field Guide to Spotting Fake Photos (Chris Velazco and Monique Woo)

AI Gives Weather Forecasters a New Advantage (New York Times)

Daybook

  • The Information Technology and Innovation Foundation will host an event called “Can China Innovate with Electric Vehicles?” on Tuesday at 1 p.m. in 2045 Rayburn House Office Building.
  • The Consumer Technology Association will host a conversation with White House National Cyber Director Harry Coker Jr. on Tuesday at 4 p.m. at the CTA Innovation House.
  • The Senate Budget Committee will hold a hearing on the future of electric vehicles at 10 a.m. Wednesday in the 608 Dirksen Senate Office Building.
  • The Center for Democracy and Technology will host a virtual event at noon Wednesday, “What You Need to Know About Artificial Intelligence.”
  • Sens. Ben Ray Luján (D-N.M.) and Alex Padilla (D-Calif.) will host a public panel, “Countering Digital Election Disinformation in Languages Other Than English,” on Wednesday at 4 p.m. in Room G50 of the Dirksen Senate Office Building.
  • The U.S. General Services Administration will host a Federal AI Hackathon starting at 9 a.m. Thursday.

Before you log off

That's all for today. Thank you for joining us. Please tell others to subscribe to Tech Brief. Contact Cristiano (via email or social media) and Will (via email or social media) with tips, feedback, or greetings!




