Google is gearing up to hold its annual Google I/O developer conference next week, and unsurprisingly, it's going to be all about AI. The company has made no secret of it. Since last year's I/O, it has debuted Gemini, a new, more powerful model aimed at competing with OpenAI's ChatGPT, and has been hard at work testing new features for Search, Google Maps, and Android. Expect to hear more about all of those this year.
Google I/O begins with a keynote on Tuesday, May 14th at 10AM PT / 1PM ET. You can watch on our site or on Google's YouTube channel via the livestream link embedded at the top of this page. (A version with American Sign Language interpretation is also available.) Make sure to set aside plenty of time: I/O tends to last several hours.
Google will also likely focus on its plans to turn the smartphone into more of an AI gadget. That means more generative AI capabilities across Google's apps. For example, the company is working on AI features to help you find food, places to shop, and EV chargers in Google Maps. Google is also testing a feature that uses AI to call businesses on your behalf and wait on hold until someone is available to speak.
I/O may also see the debut of a new, more personal version of Google's digital assistant, rumored to be called "Pixie." The Gemini-powered assistant is expected to offer multimodal features, such as letting users snap a photo of an object to learn how to use it or get directions to where it can be purchased.
This sort of thing could spell bad news for devices like the Rabbit R1 and the Humane AI Pin, both of which launched recently and have struggled to justify their existence. Right now, perhaps their only advantage is that it's difficult (though not impossible) to use a smartphone as an AI wearable.
