When you search for “guacamole” on Google, what is most likely to come up at the top? A recipe from Lisa Bryan. Titled “The Best Guacamole Ever (Fresh, Easy, Authentic),” it calls for classic ingredients: avocado, Roma tomatoes, cilantro, garlic, onion, lime, jalapeño and sea salt.
Landing at the top of results for popular search queries is Bryan's bread and butter, but she fears artificial intelligence could soon take that away.
Bryan, a former health-care executive from Southern California, said she got burned out at work 10 years ago and started posting recipes online for family and friends. She now runs a cooking and lifestyle blog called Downshiftology, where she advocates “slowing down” and embracing simple pleasures. She has a full-time social media manager, 2.5 million followers on YouTube and a website that reaches 130 million people a year.
Her success story was made possible by Google search, which drove millions of people to her blog, with traffic spiking ahead of the Super Bowl and Cinco de Mayo, when searches for guacamole peak. But as Google shifts away from traditional search results toward using AI to answer users' questions directly, independent web publishers like Bryan are at risk.
Now the bloggers are taking their case to Congress.
On Wednesday, they'll be holding an “Independence Day” lobbying event on Capitol Hill. The effort is being organized by Raptive, a company that does advertising and marketing for online publishers and helps them rank higher in search results, and which has a vested interest in fighting back against AI.
Bryan is one of thousands of people who signed an open letter sent to Congress by Raptive's CEO, Michael Sanchez. They are urging scrutiny of Google's “AI Overviews,” and some of the signers plan to meet with staff and lawmakers from their home states.
“This new product strips publishers and creators of revenue and copyrighted content without consent or compensation, and competes directly with creators while giving nothing in return,” the letter argues. The letter asks Congress to urge tech companies to compensate content creators when their work is used to train AI tools, and to pressure Google to promise that its AI answers will not reduce traffic to third-party websites.
Raptive shared the open letter and accompanying report (which includes client survey results on the importance of Google Search to their business) with Tech Brief.
The group isn't pushing for any specific legislation, but it will urge lawmakers to hold hearings and pressure the tech industry to do right by web creators.
Sanchez said that while big media companies may be able to force tech giants to the negotiating table, Raptive's clients don't have that leverage. Among the creators meeting with lawmakers are a crochet craft designer in Michigan, an independent blogger in Minnesota and two Tennessee brothers who run the country music site Country Rebel.
Bryan, who was not in Washington, D.C., said she urges viewers to trust and follow individual creators rather than relying on AI answers. “All my recipes are taste-tested, which is something a chatbot can't do,” she said.
Google maintains that fears that AI answers will disrupt the web economy are unfounded.
Google spokesperson Brianna Duff said the feature complements search results rather than replacing them, and that AI Overviews appear only for certain searches where the company's systems predict they will be especially useful, such as when a user is looking for an overview of information drawn from a variety of sources.
The company also said that sources cited in its AI Overviews get more traffic than those that appear in traditional search results. Speaking to The Verge in May, Google CEO Sundar Pichai acknowledged the value of the web ecosystem that the company's AI-powered answers depend on.
Still, the feature got off to a rocky start when users posted screenshots of Google's AI suggesting that people eat rocks or put glue on pizza. The company responded by temporarily reducing the number of searches that trigger an AI answer while it worked to fix the problems. But some experts say the issue of low-quality answers may not be fully solved anytime soon.
Web creators are the latest group to join a growing backlash against tech companies that are using their work without permission to train chatbots, image generators and other generative AI tools.
Artists, writers and media organizations have sued ChatGPT developer OpenAI and other AI companies for copyright infringement, while others have signed licensing deals that let the companies use their work in exchange for a fee. This week, a group of record labels sued two AI music companies whose software generates songs on demand from users' prompts.
How the litigation will play out is an open question. The tech companies argue that training their AI systems on other people's publicly available work without permission amounts to “fair use” under copyright law because the software transforms that work into something new and original.
But the industry appears to be losing the battle for public opinion, and comments from some tech executives have not helped.
Mira Murati, OpenAI's chief technology officer, drew criticism this week for a recent talk in which she said that “some creative jobs may disappear” as AI tools replace them, even as she argued the technology will make other jobs more creative. Of the jobs that are lost, she added, “maybe those jobs shouldn't have existed in the first place.”
US investigating China Telecom and China Mobile over internet, cloud risks, sources say (Reuters)
Bret Taylor, co-founder and CEO of Sierra and chairman of OpenAI, spoke with The Washington Post's Elizabeth Dwoskin about the present and future of AI at the Post Live event “The Futurist: The New Age of AI.”
“[AI] We might get a bit of a bubble. … We're probably investing too much, too fast, in various sectors of the economy. [But] if we were to fast-forward 30 years from today, my strong intuition is that the impact of this next generation of AI will probably be in line with expectations over time.”
— Bret Taylor, Chairman of OpenAI
OpenAI delays release of voice assistant due to safety testing (Gerrit de Vynck)
Beleaguered self-driving car maker Cruise appoints new CEO (Trisha Thadani)
Waymo ends waitlists, opens up robotaxis to everyone in San Francisco (The Verge)
DeepMind says political deepfakes top list of malicious AI uses (Financial Times)
AI tools make it easy to replicate someone's voice without their consent (Proof News)
EU claims Microsoft violated antitrust laws by bundling Teams with Office (Cat Zakrzewski and Aaron Gregg)
Meta is connecting Threads more deeply with the Fediverse (The Verge)
Reddit updates web standards to block automated website scraping (Reuters)
From Clinton's emails to the Iraq War, how WikiLeaks changed the internet (Joseph Menn)
What is micro-cheating? TikTok users discuss online infidelity. (Tatum Hunter)
It's time to be realistic about what AI can and can't do (Shira Ovide)
How AI can mimic restaurant reviews (The New York Times)
- The House Homeland Security Committee will hold a hearing at 10 a.m. Wednesday on ways to address the U.S. cyber talent shortage.
- The House Small Business Committee will hold a hearing on how the “censorship-industrial complex” affects small businesses at 10 a.m. Wednesday.
- The House Energy and Commerce Committee will consider bills including the American Privacy Rights Act and the Kids Online Safety Act at 10 a.m. Thursday.
- The American Bar Association will host a fireside chat with FTC Chair Lina Khan at 11 a.m. Thursday.
That's all for today. Thank you for joining us, and please tell others to subscribe to Tech Brief. Get in touch with Cristiano (via email or social media) and Will (via email or social media) for tips, feedback or greetings.
