Google has announced that its AI Mode search feature now supports visual, conversational search, allowing users to shop with natural-language descriptions and reference images instead of traditional filters.

Robby Stein, vice president of product at Google Search, said the update, which rolls out to US users in English this week, lets people describe a product "like talking to a friend." Users can now search for items with loose descriptions such as "barrel jeans that aren't too baggy" and refine the results with follow-up requests such as "show me acid-washed denim" or "I want more ankle length."
Multimodal Search Technology
The new feature uses Google's visual search "fan-out" technique, which runs multiple queries in the background to understand subtle details of an image as well as secondary objects within it. Building on Gemini 2.5's multimodal capabilities and Google's Shopping Graph of more than 50 billion product listings, AI Mode can now recognize subtle visual context and return more relevant results.

Users can start a search by uploading a reference image, snapping a photo, or combining an image with a text description. The system surfaces shoppable options with direct links to retailer websites, along with reviews, deals, and availability information. Google refreshes more than 2 billion of those listings every hour to keep results current.
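To make the fan-out idea concrete, here is a minimal, hypothetical sketch of the pattern: derive several sub-queries from one reference description, run them concurrently, and merge the ranked candidates. The helper names (`derive_subqueries`, `run_query`), the attribute list, and the scoring are all illustrative assumptions, not Google's actual implementation.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical sub-queries a fan-out step might derive from one reference
# image or description: the main subject plus attributes and secondary objects.
def derive_subqueries(description: str) -> list[str]:
    attributes = ["color", "fabric", "fit", "secondary objects"]
    return [f"{description} {attr}" for attr in attributes]

# Stand-in for a retrieval call; a real system would query a search index.
def run_query(query: str) -> list[tuple[str, float]]:
    # Return (product_id, relevance_score) pairs; the score here is a toy.
    return [(f"product-for-{query}", 1.0 / (1 + len(query)))]

def visual_fan_out(description: str) -> list[str]:
    subqueries = derive_subqueries(description)
    # Issue the sub-queries concurrently, as "fan-out" implies.
    with ThreadPoolExecutor() as pool:
        result_lists = list(pool.map(run_query, subqueries))
    # Pool the candidates from every sub-query and rank them by score.
    merged = [hit for hits in result_lists for hit in hits]
    merged.sort(key=lambda hit: hit[1], reverse=True)
    return [product_id for product_id, _ in merged]

results = visual_fan_out("barrel jeans")
print(results)
```

The key design point is that each sub-query targets one aspect of the image, so the merged candidate pool covers details a single literal query would miss.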
Beyond Shopping
While optimized for shopping, the visual search upgrade also works for general exploration, such as interior-design inspiration. On mobile devices, users can search within a specific image and ask conversational follow-up questions about what they are seeing.

Stein acknowledged that AI Mode's previous text-heavy responses to image queries felt "silly," which appears to have driven the development of this more visual approach. The feature builds on Google's existing Lens and Image Search technology. Google has warned that the rollout may take several days to reach all users. The update is the latest enhancement since AI Mode became available to all US users in March.
