When the Microsoft-backed lab OpenAI launched ChatGPT in November 2022, millions of people realized almost overnight what tech experts had long understood: today's AI tools are advanced enough to transform not just our daily lives, but an incredibly wide range of industries. Microsoft's Bing jumped from its distant second place in search to a much higher profile. Concepts such as large language models (LLMs) and natural language processing are now part of mainstream discussion.
With the spotlight, however, comes scrutiny. Regulators around the world are eyeing the risks AI poses to user privacy. The Elon Musk-backed Future of Life Institute has gathered more than 1,000 signatures from tech leaders calling for a six-month moratorium on training AI tools more advanced than GPT-4, the model that powers the latest version of ChatGPT.
The legal and engineering issues are complex, but the basic ethical questions are easy to understand. If developers do pause work on AI advances, will they use that time to ensure their AI adheres to ethical guidelines and respects user privacy? Can anyone control the potentially disruptive impact on media and media monetization?
Google, IBM, Amazon, Baidu, Tencent, and many smaller companies are working on similar AI tools, and Google has already launched one. In an emerging market, it is impossible to predict which products will become mainstream or what the outcome will be. That uncertainty underscores the importance of building privacy protection into AI tools today, planning ahead for the unknown.
As the digital advertising industry looks keenly at AI applications for targeting, measurement, creative personalization and optimization, and more, industry leaders will need to take a close look at how this technology is implemented. Specifically, they should consider the use of personally identifiable information (PII), the potential for accidental or intentional bias or discrimination against underrepresented groups, and how data is shared and handled through third-party integrations in compliance with global regulations.
Search vs. AI: Is Spend Redistribution Optimal?
As far as advertising budgets go, it's easy to imagine what a "search vs. AI" showdown would look like. Instead of rephrasing a search query or clicking through links to narrow down what you're really looking for, AI can gather everything you need in one place. Non-AI search engines risk becoming irrelevant if we see a generational shift in how users discover information, with younger users embracing AI as a central part of their digital experience. That shift could have a significant impact on the value of search inventory and on publishers' ability to monetize traffic from search.
Despite publishers' continued push to drive audience loyalty through subscriptions, search still drives a significant share of traffic to publisher sites. And now that advertising is making its way into AI chat (Microsoft is testing ad placements in Bing Chat, for example), publishers are asking how AI providers will source the content that feeds their tools and how revenue will be distributed. It's safe to say that the last thing publishers need is yet another walled-garden black box to depend on as a revenue source. To thrive in this uncertain future, publishers need to lead the conversation and get stakeholders across the industry to understand what they are pushing for.
Develop processes with privacy in mind
Industry leaders need to keep a close eye on how they and their technology partners collect, analyze, store and share data for AI applications across all processes. Obtaining explicit user consent for data collection, with a clear opt-out, must happen at the start of any interaction with AI chat or search. Publishers should consider implementing consent and opt-in prompts for AI tools that personalize content and advertising. However convenient and sophisticated these AI tools are, no cost savings are worth the risk to user privacy. As industry history has shown, users will only become more aware of these privacy risks over time. Companies should not rush consumer-facing AI tools to market and compromise privacy in the process.
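To make the consent-first flow concrete, here is a minimal sketch of a consent gate that a publisher might place in front of an AI personalization feature. This is an illustrative assumption, not any specific vendor's API: the `ConsentGate` class, its method names, and the two consent categories are all hypothetical, and a real deployment would need durable storage, audit logging, and legal review.

```python
from dataclasses import dataclass


@dataclass
class ConsentRecord:
    """A user's explicit choices; absence of a record means no consent."""
    user_id: str
    data_collection: bool = False
    personalization: bool = False


class ConsentGate:
    """Hypothetical gate: AI personalization runs only after explicit opt-in."""

    def __init__(self):
        self._records: dict[str, ConsentRecord] = {}

    def record_choice(self, user_id: str, *, data_collection: bool,
                      personalization: bool) -> None:
        # Capture the user's explicit choice at the start of the interaction.
        self._records[user_id] = ConsentRecord(
            user_id, data_collection, personalization)

    def opt_out(self, user_id: str) -> None:
        # A clear opt-out deletes the record; the default is "no consent".
        self._records.pop(user_id, None)

    def may_personalize(self, user_id: str) -> bool:
        # Personalization requires BOTH data-collection and personalization
        # consent; anything less falls back to a non-personalized experience.
        rec = self._records.get(user_id)
        return bool(rec and rec.data_collection and rec.personalization)
```

The key design choice is that the default is always "no": a missing or revoked record denies personalization, so a bug that skips the consent prompt fails closed rather than leaking PII into an AI pipeline.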
Right now, Big Tech's AI tools are getting the most attention, so don't be lulled into thinking the impact of this evolution will be Big Tech's problem alone. Recent headcount cuts at big tech companies have redistributed talent widely, which in turn means AI advances will also come from the small and midsized companies that hire it. And for publishers unwilling to depend on yet another walled garden to survive, privacy adds a further layer of risk on top of the critical business interests already at stake. Industry leaders should treat the rise of AI chat as a pivotal moment.
Take this opportunity to prepare for a privacy-safe, transparent, and profitable future.
Fred Marthoz is Lotame's Vice President of Global Partnerships and Revenue.
DataDecisionMakers
Welcome to the VentureBeat Community!
DataDecisionMakers is a place where experts, including technologists who work with data, can share data-related insights and innovations.
Join DataDecisionMakers for cutting-edge ideas, updates, best practices, and the future of data and data technology.
You can also consider contributing your own articles.
Read more about DataDecisionMakers
