Google is teasing some interesting new AI features a day before its I/O developer conference. The company shared a short video on X that appears to showcase a new camera-powered AI feature that can recognize what's in the frame in real time.
The video, labeled “Prototype,” shows what appears to be a Pixel device with its camera open, showing the I/O keynote stage. The person with the camera asks, “Hey, what do you think is going on here?”
An AI voice replies, "Looks like people are preparing for a big event, maybe a conference or a presentation." The voice also identifies the letters "IO" as being related to Google's developer conference and mentions "new advances in artificial intelligence." As the two voices go back and forth, a transcript of the conversation appears on the screen.
It's not clear exactly what this feature is, but it bears some similarities to Google Lens, the company's camera-powered search feature. What's shown in the teaser video, however, appears to operate in real time and respond to voice commands, much like the multimodal AI in Meta's smart glasses. The fact that the demo is shown on a Pixel device is also notable, as Google often releases new AI-powered features to its Pixel lineup first.
It's somewhat unusual for Google to preview one of its announcements right before a big keynote, but the timing of the video, arriving just as OpenAI showed off similar capabilities with its new GPT-4o model during a live event, is probably not a coincidence. In any case, it won't be long before we learn more about whatever Google is planning. Google I/O opens tomorrow, May 14th, and Engadget will be covering the keynote live from Mountain View.
