During Google’s big I/O showcase for developers, an event at which the company’s new AI features shone brightly and at length, one of the keynote themes was the “bad guy”: speakers talked in some detail about the risks posed by villains.
The phrasing struck a balance between real and abstract threat in the context of an otherwise self-congratulatory and optimistic event. The word “villain” was menacing enough to reassure the audience that Google’s human brains are duly weighing the dangers of an AI that is rapidly expanding beyond the limits of practical control. But it was not specific enough about the threat to dampen the party spirit.
The mainstreaming of generative AI could indeed put an ever-more powerful weapon of mischief in the hands of scammers, disinformation merchants and other downright bad actors. We are right to be concerned about this, and Google was right to acknowledge the tension that now exists, within a company of such importance, between what can and what should be released to market.
But Google’s tone suggested that, at least for now, the company will operate on the assumption that the public can be trusted with a fair amount of generative AI. That may underestimate run-of-the-mill villainy: the people who do not actively seek out the technology’s dark potential, but who will readily exploit it once it is put in their hands.
The problem is that with each new Google AI product that arrives on our screens, the risk feels less abstract and more real. Google, Microsoft and other big tech companies have turned AI into a consumer and business battleground, which means commercial competition is now free to do what it does best: getting as much as legally possible into our hands as quickly as possible. The tools needed to become a casual (but highly efficient) villain are therefore more available than ever.
Two moments stood out. In one, Google executives demonstrated the AI-powered translation software the company is currently testing: in effect, a user-friendly and very powerful deepfake video generator. The head of the relevant Google division acknowledged as much, explaining the need for guardrails, watermarks and other security measures that may prove difficult to enforce in practice.
A video of a speaker talking in one language is played. The AI transcribes their words, translates them and renders the audio in another language. The software adjusts the tone and timbre of the translated audio to mimic the speaker’s voice more closely, then re-dubs it onto the original video. Eerily, though not quite perfectly yet, the AI manipulates the footage so that the new words sync with the speaker’s lips. Impressive as it is, it is not hard to imagine how the ability to make people appear to say things they never actually said would be useful to villains of both the committed and the casual variety.
In a separate demo, Google executives showed off the company’s AI-powered Magic Editor. It is essentially a very quick and easy Photoshop-style tool that allows non-technical people to modify photos and, with a few jabs of a finger, rewrite the history of an event or encounter.
The company’s scenario, inevitably benign, started with a photo of a tourist in front of a waterfall. A fun memory, but, oops, there is a conspicuous handbag strap she wants rid of. Jab! It is quickly gone. She wishes the weather on her trip had been better. Jab! The granite clouds become a glorious blue sky. She wishes she had stood closer to the waterfall, with her arm at a different angle. Jab! She has moved.
No one would begrudge this notional traveller the right to rewrite reality slightly. But the prospect of villainous use casts it all in a more questionable light. Not everyone will immediately see how they could benefit from this instant ability to retroactively manipulate the visual record, but simply having that ability in their pocket will pique a great many people’s interest in airbrushing.
Since ChatGPT’s launch, Google and others have had little choice but to engage in this nascent, experimental three-way tussle between humanity, AI and trillion-dollar corporations. Google chief executive Sundar Pichai said last week that the company’s guiding principle here is to be “bold and responsible”. That is fine as far as it goes, but until we get a clearer sense of just how many bad actors are out there, it feels like a placeholder.
leo.lewis@ft.com