Google CEO Sundar Pichai speaks on stage at the annual Google I/O developer conference in Mountain View, Calif., May 8, 2018.
Stephen Lam | Reuters
Artificial intelligence will be a central theme at Google’s annual developer conference on Wednesday, where the company plans to announce a number of generative AI updates, including the launch of a general-purpose large language model (LLM).
According to an internal Google I/O document seen by CNBC, the company will unveil its latest advanced LLM, PaLM 2. PaLM 2 supports more than 100 languages and has operated under the internal codename “Unified Language Model.” It has also undergone broad testing on coding and math, as well as creative writing and analysis.
At the event, Google will present on how AI can “help people reach their full potential,” including “generative experiences” for Bard and Search, the document shows. Pichai is expected to address the crowd of developers in person and pitch the company’s AI advances.
The update comes as the AI arms race heats up, with Google and Microsoft vying to incorporate chat AI technology into their products. Microsoft is using its investment in OpenAI, the creator of ChatGPT, to power its Bing search engine. Google, for its part, rushed to launch Bard and has mobilized teams across the company to deploy its own LLMs.
Google first announced the PaLM language model in April 2022. In March of this year, the company said the model could help businesses “get more information from text, images, code, video, audio, and simple natural language prompts.”
Last month, Google said its medical LLM, called “Med-PaLM 2,” could answer medical exam questions at “expert level” and was accurate 85% of the time.
Google also plans to share progress on Bard and Search as part of a “generative experience,” including Bard’s use for coding, math and “logic,” as well as its expansion to Japanese and Korean, the document shows.
The company is working on a series of more powerful Bard models and officially launched the tool as an experiment in March.
Internally, the company has been working on a multimodal version called “Multi-Bard,” which uses larger datasets and can solve complex math and coding problems, according to another document seen by CNBC. The company has also tested versions called “Big Bard” and “Giant Bard.”
Google also plans to expand its “Workspace AI Collaborator,” including discussing template generation in Sheets and image generation in Slides and Meet. The company announced plans in March to give a small number of users access to AI capabilities in Gmail and Google Docs as part of a test, and to introduce additional generative AI capabilities to its Meet, Sheets and Slides applications.
One image seen by CNBC showed a Slides sidebar containing a chat box, with the option for users to enter text and “create” an image based on the words.
Additional updates include examples of how to use the image recognition tool Google Lens. The company plans to show advances in “multi-search” for cameras and voice, after last year allowing users to ask questions about what they see in images.
As CNBC previously reported, outside of the AI space, Google will be showing off its new foldable phone, the Pixel Fold. The company claims that the Pixel Fold will have the “most durable foldable phone hinge” and offer a phone trade-in option. Google plans to market the Pixel Fold as water-resistant and pocket-sized.
A Google spokesperson did not immediately respond to a request for comment.
