So Apple has restricted the use of OpenAI’s ChatGPT and Microsoft’s Copilot, according to a Wall Street Journal report. ChatGPT has been on Apple’s banned list for months, Bloomberg’s Mark Gurman added.
It’s not just Apple: Samsung and Verizon in the tech industry have done the same, as have a roster of banks (Bank of America, Citi, Deutsche Bank, Goldman Sachs, Wells Fargo, JPMorgan). The worry is that sensitive data may leak. After all, ChatGPT’s privacy policy clearly states that prompts can be used to train its models unless you opt out. The fear of leaks is not unfounded: in March, a ChatGPT bug exposed data from other users.
I tend to think of these bans as very loud warning shots.
One obvious use for this technology is customer service, where companies are always trying to cut costs. But for customer service to work, customers have to hand over their details, sometimes private and sometimes confidential. How do companies plan to protect the data that flows through customer service bots?
This is not just a customer service issue. Let’s say Disney decides to hand the scripting of a Marvel movie, or its VFX work, over to an AI. Is there a world in which Disney would want to risk divulging Marvel spoilers?
Part of this is general caution. In the tech industry, early-stage companies (a younger Facebook, for example) don’t tend to pay much attention to data security. In that light, it makes sense to limit the disclosure of sensitive material, as OpenAI itself recommends. (“Don’t share sensitive information in conversations.”) This is not an AI-specific issue.
But I’m curious whether there is a problem inherent to AI chatbots. One of the major costs of running AI is compute. Building your own data center is expensive, so with cloud computing your queries are processed on remote servers, which means you are essentially relying on someone else to protect your data. You can see why banks would be concerned: financial data is highly sensitive.
Besides accidental public leaks, there is also the possibility of deliberate corporate espionage. At first glance this seems like a problem mostly for the tech industry, where trade secret theft is one of the known risks. But big tech companies have moved into streaming, so I wonder whether it’s an issue in creative fields as well.
There is always a push and pull between privacy and utility when it comes to technology products. In many cases, as with Google and Facebook, users trade their privacy for free products. Google’s Bard makes clear that queries are used to “improve and develop Google’s products, services and machine learning technologies.”
Maybe these big, smart, confidentiality-focused companies are just being paranoid and have nothing to worry about. But let’s say they’re right. If so, there are several possibilities for the future of AI chatbots. The first is that the AI wave turns out to be, like the metaverse, a non-starter. The second is that AI companies are pressured to overhaul and clearly spell out their security practices. The third is that every company that wants to use AI has to build its own model, or at least run its own processing, which seems ridiculously expensive and hard to scale. And the fourth is an online privacy nightmare, in which your airline (or debt collector, or pharmacy, or whatever) routinely leaks your data.
I don’t know how this will shake out. But if the most security-conscious companies are limiting their use of AI, other companies may have good reason to do so too.
