The world of AI agents is rapidly evolving from simple chatbots to sophisticated (semi-)autonomous systems capable of making complex real-world decisions. This week we introduced Gemini 3 Pro Preview, our most powerful agent model yet, designed to serve as the core orchestrator of these advanced workflows.
We worked extensively with open source partners to integrate and test the model. This post describes the new agent features in Gemini 3 and how to start building next-generation agents using open source frameworks such as LangChain, Vercel’s AI SDK, LlamaIndex, Pydantic AI, and n8n.
Why choose Gemini 3 as your agent?
Gemini 3 introduces features designed to give developers fine-grained control over cost, latency, and reasoning depth, making it the most capable foundation for agents to date.
- Control reasoning with thinking_level: Adjust reasoning depth on a per-request basis. Set thinking_level to high for careful planning, bug finding, and complex instruction following; set it to low for latency comparable to Gemini 2.5 Flash with superior output quality on high-throughput tasks.
- Stateful tool use with thought signatures: The model now generates an encrypted “thought signature” representing its internal reasoning before invoking a tool. By passing these signatures back into the conversation history, the agent maintains a precise train of thought and ensures reliable multi-step execution without losing context.
- Adjustable multimodal fidelity: Balance token usage against detail with media_resolution. Use high to analyze fine text in images, medium for optimal PDF parsing, and low to minimize latency for video and general image description.
- Long-context consistency: Combined with thought signatures, a large context window reduces “reasoning drift” and allows agents to maintain consistent logic over long sessions.
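Taken together, the first and third controls amount to a couple of fields on each request. The sketch below builds an illustrative generateContent request body; the parameter names (thinking_level, media_resolution) come from this post, but their exact placement in the payload and the model name are assumptions to check against the Gemini API reference.

```python
# Illustrative sketch of a per-request config for Gemini 3 Pro.
# Field placement inside the body is an assumption, not the official schema.

def build_request(prompt: str, deep_reasoning: bool, media_res: str) -> dict:
    """Build a hypothetical generateContent request body."""
    return {
        "model": "gemini-3-pro-preview",
        "contents": [{"role": "user", "parts": [{"text": prompt}]}],
        "generationConfig": {
            # "high" for careful planning and debugging,
            # "low" for fast, high-throughput tasks
            "thinking_level": "high" if deep_reasoning else "low",
            # "high" for fine text in images, "medium" for PDFs,
            # "low" for video and general image description
            "media_resolution": media_res,
        },
    }

request = build_request("Summarize this contract.", deep_reasoning=True, media_res="medium")
```

The point of the sketch is that reasoning depth and multimodal fidelity are tuned per request, so a single agent can spend tokens on a hard planning step and save them on a routine one.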
Agentic Open Source Ecosystem: Day 0 Support
We’ve been working with the open source community to ensure that libraries can take advantage of Gemini 3 out of the box. Here are some of the major frameworks that provide Day 0 support.
LangChain

LangChain provides agent engineering platforms and open source frameworks, LangChain and LangGraph, to millions of developers. By representing workflows as graphs, developers can build stateful, multi-actor AI agents that directly leverage Gemini and Gemini embedding models.
“The new Gemini model is a powerful step forward for complex agent workflows, especially for those who require advanced inference and tool usage. We’re excited to support it across LangChain and LangGraph, making it easy for developers to build and deploy agents they can trust from day one.” – Harrison Chase, LangChain
Get started with LangChain for Gemini.
Vercel’s AI SDK

AI SDK is a TypeScript toolkit designed to help developers build AI-powered applications and agents using React, Next.js, Vue, Svelte, Node.js, and more. The Google provider lets developers use features such as text streaming, tool calling, and structured generation with Gemini 3.
“Internal benchmarks for Gemini 3 Pro show significant improvements in reasoning and code generation, with an approximately 17% higher success rate than Gemini 2.5 Pro, placing it in the top two of the Next.js leaderboard. We’re excited to support this new level of capability with our AI SDK, AI Gateway, and v0 on Day 0.” — Aparna Sinha, Vercel
Get started with Vercel’s AI SDK.
LlamaIndex

LlamaIndex is a specialized framework for building knowledge agents that connect Gemini to your data. It includes tools for agent workflow orchestration, data loading, parsing, extraction, and indexing across both the open source LlamaIndex framework and LlamaCloud.
“In early access testing, Gemini 3 Pro outperformed previous generations in handling complex tool calls and maintaining context. It provides the high-precision foundation developers need to build reliable knowledge agents that interact with their own data.” – Jerry Liu, LlamaIndex
Get started with LlamaIndex.
Pydantic AI

Pydantic AI is a framework for building type-safe agents in Python. Direct support for Gemini models allows developers to take advantage of Python type hints to define agent schemas. This ensures that agent workflows produce predictable, type-correct data suitable for integration into downstream production systems.
“Gemini 3’s advanced reasoning combined with Pydantic AI’s type safety provides the reliability developers need for production agents. We are excited to have validated the integration and to provide full library support from Day 0.” – Douwe Maan, Pydantic
Get started with Pydantic AI.
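The type-safety idea can be sketched with a plain pydantic model. The schema and sample payload below are hypothetical (not Pydantic AI's actual API), but they show how a declared schema turns an agent's raw, string-typed output into predictable, type-correct data.

```python
from pydantic import BaseModel

# Hypothetical output schema an agent might be asked to fill in.
class SupportTicket(BaseModel):
    customer: str
    priority: int          # e.g. 1 (urgent) .. 4 (low)
    needs_follow_up: bool

# A raw payload with string-typed values, as a model response might arrive.
raw = {"customer": "Acme Corp", "priority": "2", "needs_follow_up": "true"}

# model_validate coerces and validates the fields against the type hints,
# raising a ValidationError if the data cannot satisfy the schema.
ticket = SupportTicket.model_validate(raw)
```

Downstream code can then rely on `ticket.priority` being an `int` rather than re-checking the model's output at every step.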
n8n

n8n is a workflow automation platform that enables technical and non-technical teams to build AI agents without writing code. With Gemini 3 Pro, n8n brings advanced reasoning power to operations, marketing, and business teams.
“Gemini 3 brings the power of advanced reasoning to everyone, not just software engineers. By integrating this model into n8n, we empower non-developers to build sophisticated, reliable agents that can transform their daily work without writing a single line of code.” — Angel Menendez, n8n
Get started with n8n.
Best practices and next steps
Ready to upgrade? Follow these guidelines to ensure your agents run successfully on Gemini 3.
- Simplify prompts: Drop complex “chain of thought” prompt engineering; control reasoning depth natively with the thinking_level parameter.
- Keep temperature at 1.0: Do not lower the temperature. Gemini 3’s inference engine is optimized for 1.0. Setting it too low can cause loops and degraded performance for complex tasks.
- Preserve thought signatures: Capture the thoughtSignature from the model’s response and return it unmodified in subsequent requests. This is enforced for function calls; missing signatures will result in API errors.
- Optimize visual tokens: Use media_resolution_medium for PDFs (quality saturates at this level, saving tokens) and reserve high resolution for images with dense, fine detail.
- Check out our guide: Read the complete Gemini 3 developer guide for important details about migration, rate limits, and new API parameters.
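The signature-handling rule above boils down to echoing each thoughtSignature back verbatim when you re-send the conversation history. The message shapes and helper below are an illustrative sketch, not the SDK's actual API.

```python
# Illustrative sketch of preserving thoughtSignature values in the
# conversation history; the dict shapes are assumptions, not the SDK's API.

def append_model_turn(history: list, model_parts: list) -> None:
    """Append a model turn, keeping any thoughtSignature fields verbatim."""
    history.append({"role": "model", "parts": model_parts})

history = [{"role": "user", "parts": [{"text": "Book a table for two."}]}]

# A hypothetical function-call response carrying a thought signature.
model_parts = [{
    "functionCall": {"name": "find_restaurants", "args": {"party_size": 2}},
    "thoughtSignature": "opaque-encrypted-blob",  # must be returned unmodified
}]
append_model_turn(history, model_parts)

# The next request re-sends the full history, signature included, so the
# model can resume its train of thought across the tool-call boundary.
```

The key design point is that the signature is opaque: your code never inspects or rewrites it, only round-trips it.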
