Can you trust Apple when it comes to using generative AI?
That was the question the company was hell-bent on answering with its announcement of “Apple Intelligence,” its tagline for an entire generation of AI features the company promises to bring to iPhone, iPad and Mac users in the next version of its operating system software this fall.

“Apple Intelligence is a game changer,” Apple CEO Tim Cook and his team said in the keynote address at the company's annual Worldwide Developers Conference on Monday. Apple described Apple Intelligence as a “personal intelligence system” that can understand the context of all of a user's personal data to deliver “intelligence that's incredibly useful and relevant” and “make your devices even more useful and delightful.”

Watch this: Apple Intelligence: What you need to know about Apple's gen AI
To make the meaningful connections needed to “understand and create language and images, take actions across apps, and draw from personal context to simplify and speed up everyday tasks,” Apple needs to be able to mine and process all the data stored in the software and services you use across your devices. This includes text, messages, documents, email, photos, audio files, videos, images, contacts, calendars, search history, and Siri conversations.
Then, using its next-generation AI large language models and custom chips to process that information, Apple says it will be able to compose emails and texts, transcribe and summarize messages, proofread your grammar, quickly scan messages, emails and calendars for upcoming events, organize photos, create movies from your memories, and deliver better search results through Siri and the Safari browser.

An example of Genmoji shown by Apple during the keynote.
You'll also be able to create and share your own original Genmoji: gen AI-enabled emoji generated from natural-language descriptions you type in (for example, relaxed and smiling, wearing cucumbers) or from photos of friends and family.
All of this relies on users trusting Apple to keep their data private and secure, which is why the company said in its keynote, in press releases and in posts on its security blog that it has created “a new standard for privacy in AI.”
For now, analysts are giving Apple the benefit of the doubt. And one security researcher disputed claims made Monday by Elon Musk on his social media site X that Apple's deal with OpenAI could undermine the security of Apple users.
“Apple has been clear that it intends to keep data private, both on device and in the cloud,” said Carolina Milanesi, founder of consulting firm The Heart of Tech and a longtime Apple analyst. “It's clear that Apple is very transparent about its technology and in control of the end-to-end experience. Most consumers trust Apple and wouldn't think twice because of the reward they get with Apple Intelligence.”
AI Privacy is All About Trust

Apple Intelligence was at the heart of everything Apple showed off at WWDC.
Apple is certainly not the only AI company asking you to trust it with all your data. Companies including Google, Microsoft and Meta are looking to offer new capabilities that they say are only possible with gen AI, and their LLMs and gen AI chatbots likewise need to ingest and process your data. They, too, say they will protect their users' privacy and won't share personally identifiable information with anyone.
But IDC analyst Francisco Jeronimo has a bit more confidence in Apple's approach because the company's brand and business model are built on protecting user privacy. Unlike Google and Meta, which make most of their revenue by serving users personalized ads targeted using information about their personal preferences (again, both companies say user data is anonymized and not shared), Apple makes its money from hardware like the iPhone and services like the App Store, iTunes and Apple TV.
“Everyone knows that unlike other companies, Apple doesn't make money selling our data, and it's one of the ways it differentiates itself from its competitors,” Jeronimo said in an interview. “If you can't trust Apple with all your data, who can you trust?”
Use only the data you need

Apple Intelligence needs your data, and Apple has its own plan for protecting user data.
Apple's new standard for privacy in AI is meant to ensure that data is protected and secure whether all the data capture, analysis and manipulation happens on a person's device (known as on-device or local processing) or more complex AI tasks must be handed off to more powerful cloud servers running custom Apple chips.
As part of its new Private Cloud Compute standard, Apple has promised that, as with on-device processing, “your data will only be used to fulfill your requests, will never be stored, and will not be accessible to anyone, including Apple.”
During a press conference following the WWDC keynote, Apple software chief Craig Federighi talked about Apple's private cloud effort and the amount of personal information needed to provide contextually relevant answers.

“Cloud computing typically involves some compromises when it comes to privacy guarantees, because traditionally, when you send a request to the cloud, the cloud receives that request and the data it contains, has direct access to it, can log it, store it in a database and potentially even use it to build a profile of you,” he said, noting that you “put a lot of trust” in those companies to protect your information.
“As we deploy more AI and rely on more personal requests, it will be essential to know that no one else has access to the information used to process your request,” Federighi added.
IDC's Jeronimo also praised the company for bringing in independent security researchers and cryptographers to examine the code that runs on its Private Cloud Compute servers and assess whether it works as Apple claims.
“Security researchers need to be able to verify, with a high degree of confidence, that the privacy and security guarantees of Private Cloud Compute match our stated commitments,” the company said in a security blog post on Monday.
“Hypothetically, then, a security researcher with sufficient access to the system could verify those guarantees. But this last requirement, verifiable transparency, goes a step further and does away with the hypothetical: security researchers must be able to verify the security and privacy guarantees of Private Cloud Compute, and they must be able to verify that the software running in the PCC production environment is the same software they inspected when verifying those guarantees.”
“The Hardest Problem in Computer Security”
Matthew Green, an associate professor of cryptography and computer science at Johns Hopkins University, said in a thread on X that he appreciates Apple's approach but still has questions, including whether users will be able to opt out of having their requests processed on a “private cloud.” He said Apple hasn't yet revealed details of its plans.
“Building a trustworthy computer is literally the hardest problem in computer security – and, to be honest, almost the only problem in computer security,” Green wrote after reading the Private Cloud Compute blog post. “But while it remains a hard problem, we've made a lot of progress, and Apple uses almost all of it.”
We'll just have to wait and see how this plays out. Apple says Apple Intelligence will be available in beta in the US this fall as part of iOS 18, iPadOS 18 and macOS Sequoia.
But even if reading about AI, privacy and security, on-device processing, and cloud computing sounds boring, it's all still worth knowing about. According to IDC, AI-enabled devices will be the fastest-growing segment of the smartphone and PC markets. The market research firm expects AI smartphones to reach 170 million units in 2024, and AI PCs to account for nearly 60% of all PCs sold by 2027.
AI will become an essential part of next-generation devices and our daily lives.
Editors' note: CNET used an AI engine to help create several dozen stories, which are labeled accordingly. The note you're reading is attached to articles that deal substantively with the topic of AI but are created entirely by our expert writers and editors. For more, see CNET's AI policy.
