Google, one of AI’s biggest backers, warns its own staff about chatbots



SAN FRANCISCO, June 15 (Reuters) – Alphabet (GOOGL.O) is cautioning employees about how they use chatbots, including its own Bard, even as it markets the program around the world, four people familiar with the matter told Reuters.

The Google parent has advised employees not to enter confidential company material into AI chatbots, the people said, a practice the company confirmed, citing its long-standing policy on safeguarding information.

Chatbots such as Bard and ChatGPT are human-sounding programs that use so-called generative artificial intelligence to hold conversations with users and answer a myriad of prompts. Human reviewers may read the chats, and researchers have found that similar AI can reproduce data it absorbed during training, creating a risk of leaks.

Alphabet also cautioned its engineers to avoid direct use of computer code that chatbots can generate, some of the people said.

Asked for comment, the company said Bard can make undesired code suggestions but that it helps programmers nonetheless. Google also said it aims to be transparent about the limitations of its technology.

The concerns show how Google wishes to avoid business harm from software it launched to compete with ChatGPT. At stake in Google's race against ChatGPT's backers OpenAI and Microsoft (MSFT.O) are billions of dollars of investment and still-untold advertising and cloud revenue from new AI programs.

Google's caution also reflects what is becoming a corporate security standard: warning personnel about using publicly available chat programs.

A growing number of businesses around the world have set up guardrails on AI chatbots, among them Samsung (005930.KS), Amazon.com (AMZN.O) and Deutsche Bank (DBKGn.DE), the companies told Reuters. Apple (AAPL.O), which did not return a request for comment, reportedly has as well.

Some 43% of professionals were using AI tools such as ChatGPT as of January, often without telling their bosses, according to a survey of nearly 12,000 respondents, including from top U.S.-based companies, by the networking site Fishbowl.

By February, Google had instructed staff testing Bard before its launch not to give it internal information, according to insiders. Google has now rolled Bard out to more than 180 countries and in 40 languages as a springboard for creativity, and its caution extends to the chatbot's code suggestions.

Google told Reuters it has had detailed conversations with Ireland's Data Protection Commission and is addressing regulators' questions, after a Politico report on Tuesday that the company was postponing Bard's launch in the EU this week pending more information about the chatbot's impact on privacy.

Concerns about confidential information

Such technology can draft emails, documents and even software itself, promising to vastly speed up tasks. This content, however, can include misinformation, sensitive data or even copyrighted passages from a "Harry Potter" novel.

A Google privacy notice updated on June 1 also states: "Don't include confidential or sensitive information in your Bard conversations."

Some companies have developed software to address such concerns. For instance, Cloudflare (NET.N), which defends websites against cyberattacks and offers other cloud services, is marketing a capability for businesses to tag and restrict some data from flowing externally.

Google and Microsoft also offer conversational tools to business customers that carry a higher price tag but refrain from absorbing data into public AI models. The default setting in Bard and ChatGPT saves users' conversation history, which users can opt to delete.

Yusuf Mehdi, Microsoft's chief consumer marketing officer, said it "made sense" that companies did not want their staff to use public chatbots for work.

"Companies are taking a duly conservative standpoint," Mehdi said, explaining how Microsoft's free Bing chatbot compares with its enterprise software. "There, our policies are much more strict."

Microsoft declined to comment on whether it has a blanket ban on staff entering confidential information into public AI programs, including its own, though a different executive there told Reuters he personally restricted his use.

Cloudflare CEO Matthew Prince said that typing confidential matters into chatbots was like "turning a bunch of PhD students loose in all of your private records."

Reporting by Jeffrey Dastin and Anna Tong in San Francisco; editing by Kenneth Li and Nick Zieminski

Our standards: Thomson Reuters Trust Principles.

Jeffrey Dastin

Thomson Reuters

Jeffrey Dastin is a San Francisco-based correspondent for Reuters reporting on the technology industry and artificial intelligence. He joined Reuters in 2014, initially writing about airlines and travel from the New York bureau. Dastin graduated from Yale University with a degree in history. He was part of a team that examined lobbying by Amazon.com around the world, for which he won a SOPA award in 2022.

Anna Tong

Thomson Reuters

Anna Tong is a correspondent for Reuters based in San Francisco, where she reports on the technology industry. She joined Reuters in 2023 after working at the San Francisco Standard as a data editor. Tong previously worked as a product manager at a technology startup and at Google, where she worked in user insights and helped run a call center. Tong graduated from Harvard University. Contact: 4152373211
