1 minute read
June 25, 2023, 10:02 PM IST
shraddha gold
IBM is developing policies to regulate employee use of third-party generative AI tools such as OpenAI's ChatGPT and Google's Bard. Gaurav Sharma, vice president of IBM India Software Labs, said the company values the field but is cautious about veracity, since such tools can draw on unverified and unreliable sources. IBM is not the first to consider regulating the use of ChatGPT. Samsung Electronics, Amazon, Apple, and global banks such as Goldman Sachs, JP Morgan, and Wells Fargo are among the companies restricting internal use of ChatGPT due to data security concerns.
IBM is drafting policies to define how employees can use third-party generative artificial intelligence (AI) tools such as OpenAI's ChatGPT and Google's Bard, according to three senior executives at the tech giant, speaking at an AI Innovation Day event in Bangalore on June 20.
Gaurav Sharma, vice president of IBM India Software Labs, spoke about the rise of generative AI and how such tools are being used in internal processes, saying the company values the field but is mindful of its veracity. He added that policies regarding the use of generative AI applications such as ChatGPT are "still being worked out."
Vishal Chahal, director of automation at IBM India Software Labs, also confirmed that internal policies on the use of such tools are being developed.
Work on the policy is still underway, but so far no outright ban has been put in place. "Employees have been educated not to put our code into ChatGPT, but we have not banned it," said Shweta Shandiriya, director of IBM India Software Labs in Kochi.
In response to a question about its internal policy framework for ChatGPT, an IBM spokesperson said, "As new technologies emerge, such as other generative AI tools besides ChatGPT, we continue to review their usage. It's an ongoing process."
IBM is not the first to consider regulating the use of ChatGPT. Bloomberg reported on May 2 that South Korea's Samsung Electronics decided to ban its employees from using ChatGPT after confidential internal data was deemed to have been leaked through the tool. On January 25, Insider reported that Amazon issued a similar internal email asking staff not to use ChatGPT due to security concerns about sharing sensitive internal data with OpenAI. On May 18, The Wall Street Journal reported that Apple had followed a similar path.
Global banks Goldman Sachs, JP Morgan, and Wells Fargo are also believed to have restricted internal use of ChatGPT over concerns that sensitive client data could be exposed to OpenAI.
IBM's caution is not unfounded: a report released on June 20 by Singapore-based cybersecurity firm Group-IB alleged that credentials for more than 100,000 ChatGPT accounts had been scraped and sold on dark web markets.
However, OpenAI said on June 22 that the stolen data was “the result of generic malware on the device and not a compromise of OpenAI.”
Jaya Kishore Reddy, co-founder and chief technology officer of Mumbai-based AI chatbot developer Yellow.ai, explained why such internal bans are in place: "There are accuracy issues, and the generated information can be misinterpreted. Additionally, data entered into these platforms is used to train and fine-tune responses, which can expose sensitive company information."
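The data-exposure risk Reddy describes is one reason some companies route prompts through a redaction step before anything reaches a third-party tool. A minimal illustrative sketch of that idea in Python — the pattern names and regexes here are assumptions for demonstration, not any company's actual policy or product:

```python
import re

# Illustrative patterns only; a real deployment would use a data-loss-prevention
# (DLP) tool or organization-specific rules.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "API_KEY": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),  # OpenAI-style key shape
}

def redact(prompt: str) -> str:
    """Mask sensitive tokens before a prompt leaves the company network."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("Contact alice@example.com with key sk-abcdefghijklmnopqrstuv"))
# prints "Contact [EMAIL] with key [API_KEY]"
```

Such a filter does not address the accuracy or hallucination concerns Reddy raises, but it limits what sensitive material can end up in a vendor's training or fine-tuning data.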
On February 27, Mint reported that businesses were wary of deploying tools like ChatGPT, citing concerns about data hallucinations, inaccurate and misleading information, and the lack of safeguards against the retrieval or leakage of sensitive corporate data.
Bern Elliot, vice president and analyst at Gartner, said at the time: "It is important to understand that ChatGPT was built without any corporate privacy governance, and all data collected and provided goes in without safeguards. Introducing GPT models into chatbots is also difficult for organizations such as media and pharmaceutical companies, as it leaves them with no protection from a privacy standpoint. Future versions of ChatGPT, backed by Microsoft through its Azure platform, may be offered to businesses for integration and may be a more secure option."
Since then, OpenAI has introduced better privacy controls. On April 25, the company announced via a blog post that users can turn off conversation history, with such conversations permanently deleted from its servers after 30 days. It also said that a "business-friendly" version of ChatGPT is in development, which will give businesses more control over their data.
Yellow.ai's Reddy added that companies are now opting for enterprise-grade application programming interfaces (APIs), such as OpenAI's, to ensure data security, or building their own in-house models.
