Google's Android chief says AI will be a new weapon against Apple's iPhone

Sameer Samat, head of Android, speaks on stage at a Google event.
Google

  • Google's Sameer Samat says AI presents unique opportunities for the Android ecosystem.
  • Gemini AI on Android operates at the system UI level and provides context-aware features.
  • On-device AI with Gemini Nano enables AI features for encrypted messaging without data leaving the device.

Sameer Samat is a powerful man these days.

I recently watched him take McLaren F1 boss Zak Brown to task on the Netflix show Drive to Survive. Google was a major sponsor, and Samat wanted to see improvements.

And last month, Samat was elevated to the top Android job at Google in a reorganization. He is now President of the Android Ecosystem, running the Android platform, the world's most popular smartphone operating system. His remit also covers Android TV, Android Auto, and new augmented- and mixed-reality technologies.

I had the opportunity to interview Samat at Google I/O. I started by asking how AI is changing the smartphone market, the competition with Apple, and the distribution of Google's technology.

“AI is hot right now, and it's a huge opportunity for the Android ecosystem,” he said. “We intend to move very quickly to seize it. This is a once-in-a-generation moment to reinvent what you can do with a mobile phone.”

Google's new Gemini AI model “will allow you to do things that were previously impossible on a smartphone,” he added.

Apple, the 800-pound gorilla of the smartphone market, never came up by name during the interview. At one point, Samat referred to “another OS,” meaning Apple's iOS, which leads in the U.S. but still trails Android by a wide margin globally.

More than an app

On that “other OS,” Google's Gemini is just an app. On Android, Samat said, it's much more.

He demonstrated by pressing and holding the power button on a Pixel 8 smartphone. That summoned Gemini, which appeared as an overlay on top of the YouTube app he was using.

As the video played, he asked Gemini questions about the clip. Gemini analyzed the footage and answered, drawing on the relevant parts of the video. Then he took out a Samsung Galaxy S24 and did the same thing, this time touching the bottom-right corner of the screen and dragging up to bring up Gemini.

“System UI” level

This is possible because Google is building its Gemini AI model and assistant technology into the “system UI” layer of Android devices. This sits below the app layer, which is a technically significant distinction.

“You can't do this with just an app on the device,” Samat said. “Being able to do this on Android allows Gemini to step in on top of, or alongside, whatever is going on.”

Without being tied to an app, Gemini is free to roam around your device and understand the context of what you're doing at any time.

Samat emphasized that this only happens when a user invokes the AI with a deliberate action, such as pressing the button on the Pixel 8 or swiping on the S24.

On-device AI

He cited on-device AI, powered by the smaller Gemini Nano model, as another example. It runs on the Pixel 8 and Galaxy S24, with support for more Android devices coming soon.

This allows Gemini to perform useful functions without sending user data to cloud data centers.

One use case for this approach: if you use an encrypted messaging service on your phone, that data can't be sent to a data center for processing by an AI model. As a result, cloud-based AI assistants and agents can't draft replies or support other useful features for those messages.

With Nano, Google has on-device AI that can process these encrypted messages and help you compose replies and take other actions. Samat said none of that data leaves the device.

Gemini on iPhone?

I then asked Samat a big business-strategy question: does Google want to distribute its best Gemini models more prominently on Apple devices?

Google already pays Apple billions of dollars a year to make its search engine the default in Safari. Could a similar deal put Gemini on the iPhone?

Samat declined to comment. More generally, he said, Google's broader goal is to serve all users around the world.

However, he emphasized that this doesn't mean the company can't build its own experiences on Android devices, including many new AI experiences.

Circle to Search

He cited Circle to Search as an example. The feature lets you search by circling, scribbling on, or highlighting whatever is on your phone's screen. For example, if you're watching a video and spot a hat or sunglasses you want to buy, you just launch Circle to Search and circle the item.

It works through a combination of Google Search, Gemini AI technology, and Android. Samat says this is not possible on other platforms.

These experiences require end-to-end optimization, which Google is doing on its own Pixel devices, in collaboration with Samsung, and soon with other Android device makers.

“AI is a fundamental differentiator for Android, and like Pixel, Samsung is a big part of that,” he said.

“Is this all about Pixel devices? No!” he added. He explained that Samsung and other Android device makers will be critical to the next wave of AI-powered devices.


