Who is listening when you talk to your chatbot?

New York (CNN) As the tech sector races to develop and deploy powerful new AI chatbots, their widespread adoption is raising fresh data privacy concerns among some businesses, regulators and industry watchers.

Some companies, including JPMorgan Chase (JPM), have cracked down on employees’ use of ChatGPT, the viral AI chatbot that set off Big Tech’s AI arms race, citing compliance concerns around employee use of third-party software.

Privacy concerns were heightened after OpenAI, the company behind ChatGPT, disclosed that it had to take the tool offline temporarily on March 20 to fix a bug that allowed some users to see the subject lines of other users’ chat histories.

The same bug, which has since been fixed, also made it possible for “some users to see another active user’s first and last name, email address, payment address, the last four digits (only) of a credit card number, and credit card expiration date,” OpenAI said in a blog post.

And just last week, Italian regulators announced a temporary ban on ChatGPT in the country, citing privacy concerns after OpenAI disclosed the breach.

The “black box” of data

Mark McCreary, co-chair of the privacy and data security practice at law firm Fox Rothschild LLP, told CNN that it is unclear what happens to the data users feed into these tools: “It’s like a black box.”

ChatGPT, which opened to the public in late November 2022, allows users to generate essays, stories and song lyrics simply by entering prompts.

Since then, Google and Microsoft have also rolled out AI tools that work in a similar way, powered by large language models trained on vast troves of online data.

When users enter information into these tools, McCreary said, it is unclear how that information will be used afterward, and that raises particular concerns for companies. As more employees casually adopt these tools to help with work emails or meeting notes, he said, “I think the opportunity for company trade secrets to get dropped into these various AIs is just going to increase.”

Steve Mills, chief AI ethics officer at Boston Consulting Group, similarly told CNN that the biggest privacy concern most companies have with these tools is the “inadvertent disclosure of sensitive information.”

“All these employees are doing things that seem very harmless, like, ‘I can use this to summarize meeting notes,’” Mills said. “But pasting meeting notes into a prompt can suddenly reveal a ton of sensitive information.”

And, as many of the companies behind these tools note, when the data people put in is used to further train the AI, “you lose control of that data, and somebody else has it,” Mills added.

A 2,000-word privacy policy

OpenAI, the Microsoft-backed company behind ChatGPT, says in its privacy policy that it collects a range of personal information from people who use its services, and that it may use that information to improve or analyze those services, conduct research, communicate with users, and develop new programs and services.

The privacy policy also states that the company may provide users’ personal information to third parties without further notice, unless required by law. If the more-than-2,000-word document seems somewhat opaque, that is largely because such policies have become the industry norm in the internet era. OpenAI also maintains a separate Terms of Use document, which puts most of the responsibility on users to take appropriate measures when using its tools.

On Wednesday, OpenAI also published a new blog post outlining its approach to AI safety. “We don’t use data for selling our services, advertising, or building profiles of people. We use data to make our models more helpful for people,” the post states. “ChatGPT, for instance, improves by further training on the conversations people have with it.”

Google’s privacy policy, which also covers its Bard tool, is similarly lengthy, and the company has additional terms of service for its generative AI users. Google says that to improve Bard while protecting users’ privacy, it “selects a subset of conversations and uses automated tools to help remove personally identifiable information.”

“These sample conversations may be reviewed by trained reviewers and are kept for up to three years, separately from your Google Account,” the company says in a separate FAQ for Bard. The FAQ also warns: “Do not include information that can be used to identify you or others in your Bard conversations.” It states, too, that Bard conversations are not used for advertising purposes, and that Google “will clearly communicate any changes to this approach in the future.”

Google also told CNN that users can “easily choose to use Bard without saving their conversations to their Google Account,” and that they can review their prompts or delete their Bard conversations. “We also have guardrails in place designed to prevent Bard from including personally identifiable information in its responses,” the company said.

“We are still learning exactly how all of this works,” Mills told CNN. “You don’t fully know how the information you put in, if it is used to retrain these models, will manifest as outputs at some point, or whether it will at all.”

Mills added that users and developers may not even realize the privacy risks lurking in a new technology until it is too late. One example he cited was early autocomplete features, some of which had unintended consequences, such as completing a Social Security number that a user had begun typing.

Ultimately, Mills said, users should be cautious about putting anything into these tools that they would not want shared with others.


