Last updated: December 4, 2025
AWS introduces Graviton5—the company’s most powerful and efficient CPU

Amazon expands Nova family of models and pioneers “open training” with Nova Forge

AWS unveils 3 frontier agents, a new class of AI agents that work as an extension of your software development team

Trainium3 UltraServers enable customers to train and deploy AI models faster at lower cost

AWS simplifies model customization to help customers build faster, more efficient AI agents

AWS AI Factories transform customers’ existing infrastructure into high-performance AI environments

AWS Transform now modernizes legacy code and applications up to 5x faster
Before recycling a decommissioned server rack, AWS staged a live demolition of tech debt at re:Invent 2025 to demonstrate AWS Transform’s new capabilities.

Amazon Bedrock AgentCore helps developers build production-ready AI agents with new policy, evaluation, and memory capabilities

Simplifying purpose-built AI infrastructure with Amazon Bedrock AgentCore
Amazon Bedrock AgentCore addresses the challenges of running agents in production by offering essential, fully managed services. AgentCore supports any framework (including CrewAI, LangGraph, LlamaIndex, Google ADK, OpenAI Agents SDK, and Strands Agents) and any model while handling critical agentic AI infrastructure needs. In just five months since preview, organizations including Amazon Devices Operations & Supply Chain, Cohere Health, Cox Automotive, Heroku, Natera, MongoDB, PGA TOUR, Pulumi, Thomson Reuters, Workday, Snorkel, and Swisscom have used AgentCore to build agents, and developers have downloaded it more than 2 million times.
Bedrock AgentCore customer momentum
PGA TOUR, a pioneer and innovation leader in sports, has built a multi-agent content generation system to create articles for its digital platforms. The new solution, built on AgentCore, enables the PGA TOUR to provide comprehensive coverage for every player in the field by increasing content writing speed by 1,000% while achieving a 95% reduction in costs.
MongoDB, a database platform, used AgentCore to reshape how it designs and operationalizes AI within the company. By implementing AgentCore, the company eliminated weeks of evaluation cycles and consolidated multiple disparate tools into a single, production-ready solution. By integrating seamlessly with MongoDB’s AWS infrastructure and using MongoDB Atlas as the embedded knowledge base for Amazon Bedrock, its development teams deployed an agent-based application in just eight weeks, a process that previously took months of infrastructure work and continuous maintenance. This streamlined approach enabled MongoDB to scale its AI initiatives with greater accuracy, contextual awareness, and consistency, while significantly reducing manual overhead.

Kiro powers: Access specialized expertise to accelerate software development
As developers increase their use of AI agents for a wider range of software development tasks, they want agents that have deep knowledge of the tools they use every day and that are specialized in their workflows, like user interface or application programming interface development. Kiro powers enable developers to give Kiro agents instant expertise in these tools and workflows in a single click. A power can combine MCP servers for specialized tool access, steering files with best practices, and hooks that trigger specific actions, equipping Kiro agents with workflow-specific knowledge spanning the application lifecycle: design, development, deployment, and observability. Because powers load only when needed, they help developers work with efficient token usage, precision, and speed.
Checkpointless training on SageMaker HyperPod: recover from model training faults in minutes
Amazon SageMaker HyperPod simplifies infrastructure management for model training and deployment, reducing costs by up to 40%. As training scales across hundreds or thousands of accelerators, faults like hardware or software failures can occur. Traditional checkpoint-based recovery can take up to an hour, which is expensive, consumes storage, and leaves multi-million-dollar compute clusters idle during recovery. AWS is announcing checkpointless training on SageMaker HyperPod—automatically recovering from infrastructure faults in minutes with zero manual intervention, enabling training cluster efficiency of up to 95% on clusters with thousands of AI accelerators.
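A back-of-envelope calculation shows why recovery time dominates cluster efficiency at scale. The fault interval and recovery times below are illustrative assumptions (not AWS-published figures), chosen to mirror the "up to an hour" versus "minutes" framing above.

```python
# Illustrative model of training-cluster efficiency: the fraction of
# wall-clock time spent on useful work when faults interrupt training.
# All numbers are assumptions for illustration, not AWS measurements.

def effective_efficiency(mtbf_hours: float, recovery_hours: float) -> float:
    """Useful fraction of wall-clock time, assuming one fault every
    `mtbf_hours` on average and `recovery_hours` of idle time per fault.
    (Simplification: ignores work lost since the last checkpoint.)"""
    return mtbf_hours / (mtbf_hours + recovery_hours)

# Assume a large cluster hits a fault roughly every 10 hours.
checkpoint_based = effective_efficiency(10.0, 1.0)     # ~1 hour checkpoint restore
checkpointless = effective_efficiency(10.0, 2 / 60)    # ~2 minute in-place recovery

print(f"checkpoint-based: {checkpoint_based:.1%}")  # 90.9%
print(f"checkpointless:   {checkpointless:.1%}")    # 99.7%
```

Even before counting lost work between checkpoints, cutting recovery from an hour to minutes moves a frequently faulting cluster from roughly 90% to well above 99% utilization under these assumed numbers.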
Strands Agents SDK now in TypeScript (preview)
Strands adds support for edge devices
Amazon Bedrock’s largest expansion of new models to date
AWS added 18 new open weight models to Amazon Bedrock, reinforcing its commitment to offering a broad selection of fully managed models from leading AI providers. With access to top models and the flexibility to swap them without rewriting code, Amazon Bedrock makes it fast and easy for customers to evaluate, test, and adopt new models, so they can find the best option for their use case—all without disrupting production systems.
The news includes the launch of two new sets of models, available first in Amazon Bedrock, from Mistral AI. Mistral Large 3 is Mistral AI’s most advanced open weight model optimized for long-context, multimodal, and instruction reliability, and Ministral 3 is a series of models that set a new benchmark for compact, general-purpose, and multimodal AI. The launch also features other popular models, including Google’s Gemma 3, MiniMax’s M2, NVIDIA’s Nemotron, OpenAI’s GPT OSS Safeguard, and more.
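The "swap models without rewriting code" point works because Amazon Bedrock's Converse API accepts the same request shape for every model, so switching providers is a one-line model-ID change. A minimal sketch with boto3 follows; the model IDs shown are placeholders, not confirmed Bedrock identifiers.

```python
# Sketch: provider-agnostic inference on Amazon Bedrock via the Converse
# API. Only the modelId changes when swapping models; the request body
# stays identical. Model IDs below are hypothetical placeholders.

def build_converse_request(model_id: str, prompt: str) -> dict:
    """Build the uniform request used by bedrock-runtime's Converse API."""
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": 512, "temperature": 0.2},
    }

def ask(model_id: str, prompt: str) -> str:
    """Invoke the model (requires AWS credentials and Bedrock access)."""
    import boto3
    client = boto3.client("bedrock-runtime")
    resp = client.converse(**build_converse_request(model_id, prompt))
    return resp["output"]["message"]["content"][0]["text"]

# Swapping providers is just a different ID (placeholder IDs shown):
req_a = build_converse_request("mistral.mistral-large-3-v1:0", "Summarize Q3.")
req_b = build_converse_request("google.gemma-3-v1:0", "Summarize Q3.")
assert req_a["messages"] == req_b["messages"]  # same request, different model
```

Because evaluation harnesses and production callers share this one request shape, trying a newly launched model amounts to changing a configuration value rather than rewriting integration code.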
AWS launches new Amazon EC2 instance powered by NVIDIA GPUs
AWS Lambda Managed Instances: The benefits of serverless without constraints
Customers rely on Lambda to build serverless applications because of its simplicity, automatic scaling, and intuitive operational model. But they also need consistent, massive compute and precise control of the infrastructure to use Lambda for use cases like analytics pipelines, financial risk modeling, and multiplayer games. AWS Lambda Managed Instances bridges the gap between serverless simplicity and infrastructure control, allowing customers to run their Lambda functions on the Amazon EC2 instance of their choice.
AWS Lambda durable functions enables reliable multi-step applications and AI workflows
Each month, over 1.8 million customers use AWS Lambda to process more than 15 trillion requests. Developers rely on Lambda to quickly build functions to run code, but also want to build multi-step applications that execute reliably over extended periods, like payment processing, customer onboarding, or orchestrating AI workflows. AWS Lambda durable functions empowers customers to create Lambda functions that can preserve their progress despite interruptions and allow them to suspend execution for up to a year.
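The core idea behind durable execution is that each completed step's result is persisted, so a re-run after an interruption replays finished steps from a journal instead of re-executing them. The sketch below illustrates that pattern generically; it is not the actual Lambda durable functions API.

```python
# Generic sketch of the durable-execution pattern: completed steps are
# journaled, so a retry after an interruption skips straight to the
# first unfinished step. Names here are illustrative, not the Lambda API.

class DurableRun:
    def __init__(self):
        self.journal = {}   # step name -> persisted result
        self.executed = []  # steps that actually ran this invocation

    def step(self, name, fn, *args):
        if name in self.journal:      # replay: reuse the persisted result
            return self.journal[name]
        result = fn(*args)
        self.journal[name] = result   # persist before moving on
        self.executed.append(name)
        return result

def onboard_customer(run: DurableRun, email: str) -> str:
    """A two-step workflow; each step runs at most once per journal."""
    account = run.step("create_account", lambda e: f"acct-for-{e}", email)
    run.step("send_welcome", lambda a: f"emailed {a}", account)
    return account

run = DurableRun()
onboard_customer(run, "a@example.com")  # first attempt executes both steps
run.executed.clear()
onboard_customer(run, "a@example.com")  # "retry" replays from the journal
assert run.executed == []               # nothing re-executed on replay
```

In the managed service the journal would live outside the function (surviving interruptions and suspensions of up to a year); here an in-memory dict stands in for it to show the control flow.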
Amazon GuardDuty Extended Threat Detection now supports EC2 and ECS environments
Modern cloud environments are dynamic and distributed, often running virtual machine, container, and serverless workloads at scale. Building on existing support for detecting compromised IAM credentials, Amazon S3 buckets, and Amazon EKS, GuardDuty Extended Threat Detection now expands to Amazon EC2 and Amazon ECS, providing broader visibility into sophisticated attack sequences and enabling faster remediation across a customer’s AWS environment.
GuardDuty Extended Threat Detection uses AI and machine learning models trained at AWS scale to correlate signals such as anomalous process creation, persistence attempts, reverse-shell activity, and crypto-mining into a single, critical-severity event instead of separate alerts that each trigger a siloed investigation.

Amazon S3 Vectors scales to two billion vectors per index
Amazon S3 Vectors is now generally available with significant scale and performance improvements, enabling AI systems to store and query vectors natively in Amazon S3 for semantic search and context understanding. Designed to provide the same elasticity, scale, and durability as Amazon S3, S3 Vectors scales up to two billion vectors per index (40x preview capacity), supports up to 20 trillion vectors per bucket, delivers 2-3x faster frequent-query performance, and reduces costs by up to 90% over alternatives—eliminating overhead for customers building AI applications.
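A semantic-search query against an S3 Vectors index can be sketched as follows. The boto3 `s3vectors` client and parameter names are assumptions based on the preview SDK; verify them against the current API reference before use.

```python
# Sketch: nearest-neighbor query against an S3 Vectors index.
# Client name and parameter names are assumptions from the preview SDK.

def build_vector_query(bucket: str, index: str,
                       embedding: list, top_k: int = 5) -> dict:
    """Assemble the assumed request shape for a vector similarity query."""
    return {
        "vectorBucketName": bucket,
        "indexName": index,
        "queryVector": {"float32": embedding},
        "topK": top_k,
        "returnDistance": True,
    }

def semantic_search(bucket: str, index: str, embedding: list) -> list:
    """Run the query (requires AWS credentials and an existing index)."""
    import boto3
    s3v = boto3.client("s3vectors")
    resp = s3v.query_vectors(**build_vector_query(bucket, index, embedding))
    return resp["vectors"]  # assumed: entries carry key and distance

# Embeddings would normally come from an embedding model; a stub here:
query = build_vector_query("docs-vectors", "articles", [0.1, 0.2, 0.3], top_k=3)
assert query["topK"] == 3
```

The design point is that vectors live natively in S3, so applications get S3's durability and elasticity without operating a separate vector database.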
Increased maximum Amazon S3 object size
Faster Amazon S3 Batch Operations
Amazon S3 Tables optimize storage costs and enable automatic replication
Since launch, Amazon S3 Tables for Apache Iceberg workloads has quickly grown to more than 400,000 tables. S3 Tables has launched over 15 new features and capabilities in the last 12 months, rapidly innovating on S3 native Iceberg support for data lakes. Today, AWS is adding two major capabilities to S3 Tables: support for the Intelligent-Tiering storage class and automatic replication across AWS Regions and accounts.
Intelligent-Tiering brings the same automatic cost optimization that has saved S3 customers more than $6 billion. It automatically optimizes table data across three access tiers (Frequent Access, Infrequent Access, and Archive Instant Access) based on access patterns—delivering up to 80% storage cost savings without performance impact or operational overhead.
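The savings mechanism is simple arithmetic: data that goes cold is billed at cheaper tiers automatically. The per-GB prices below are made-up placeholders to show the shape of the calculation, not AWS pricing.

```python
# Illustrative tiered-storage cost arithmetic. Prices are placeholders,
# not AWS pricing; only the structure of the calculation matters here.

ILLUSTRATIVE_PRICE_PER_GB = {
    "frequent": 0.023,
    "infrequent": 0.0125,
    "archive_instant": 0.004,
}

def monthly_cost(gb_per_tier: dict) -> float:
    """Sum storage cost across tiers for a month."""
    return sum(gb * ILLUSTRATIVE_PRICE_PER_GB[tier]
               for tier, gb in gb_per_tier.items())

# 1 TB of table data, all billed at the frequent-access rate:
flat = monthly_cost({"frequent": 1000})
# Same data after access-pattern-based tiering moves cold data down:
tiered = monthly_cost({"frequent": 100, "infrequent": 200,
                       "archive_instant": 700})
print(f"savings: {1 - tiered / flat:.0%}")  # 67% under these assumptions
```

The more a table's data skews toward rarely accessed partitions, the closer the automatic result gets to the quoted "up to 80%" ceiling.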
Automatic replication enables distributed teams to query local data for faster performance while maintaining consistency across Regions and accounts. Customers can now automatically replicate tables, eliminating manual updates and complex syncing—simplifying compliance and backup management while keeping complete table structures intact and ready to use.
Amazon FSx for NetApp ONTAP data accessible from Amazon S3
AWS unifies security, operations, and compliance data in CloudWatch
AWS eliminates local storage provisioning for EMR Serverless
AWS announces Database Savings Plans
Keeping AWS the best place to run commercial databases
Building apps at the speed of ideas with AWS databases, Vercel Marketplace, and v0
Coming soon, AWS databases, including Amazon Aurora and Amazon DynamoDB, will be available as native integrations on Vercel Marketplace and v0. Developers can connect to Aurora PostgreSQL, Aurora DSQL, or DynamoDB in seconds from their Vercel dashboard or in v0 prompts, accelerating the velocity in building new applications. From ideation to production, customers benefit from the security, reliability, scalability, and operational excellence of AWS database services.
AWS Security Hub, now generally available, delivers near real-time threat correlation
In addition, Security Hub now provides advanced trends and historical insights through enhanced visualizations, helping organizations understand changes in their security posture over time. These insights help organizations identify potential attack paths, understand how threats, vulnerabilities, and misconfigurations could chain together, and quickly surface and prioritize active risks in their cloud environment. Visit the Security Hub News Blog, the Security Hub page, and the Security Hub documentation for more information.
AWS debuts AI-powered support with 2x faster response times at entry tier to boost reliability and speed innovation
Customers and partners can now access new and enhanced AWS Support offerings with three experience-driven tiers: Business Support+, Enterprise Support, and Unified Operations. Combining the speed of AI with AWS engineer expertise, AWS Support now provides faster response times than any previous offering. Additionally, customers previewing AWS DevOps Agent can engage AWS Support in one click when needed, giving AWS experts immediate context on the situation for a faster resolution. The result for AWS customers: less time spent fixing issues and more time to innovate.
- Business Support+ delivers AI-powered assistance that understands the context of customer operations, with 24/7 access to AWS experts. For critical production issues, support engineers engage within 30 minutes to accelerate recovery.
- Enterprise Support provides designated Technical Account Managers (TAMs) who blend generative AI insights with human judgment to provide strategic operational guidance to customers on resiliency, cost, and efficiency. It also includes AWS Security Incident Response at no additional cost, which customers can activate to automate security alert investigation and triage.
- Unified Operations, the premier support plan, is for customers with the largest and most complex workloads—offering a global team of designated experts who deliver architecture reviews, guided testing, proactive optimization, and personalized responses within five minutes for critical incidents.
Amazon Connect launches agentic AI capabilities for seamless customer experiences
Amazon Connect delivers natural voice interactions with advanced speech models
Agentic assistance creates true collaboration between humans and AI
For years, Amazon Connect has provided AI-powered assistance that analyzes customer interactions to proactively deliver customer service representatives the information and the tools they need in real-time. Amazon Connect is taking this further with agentic assistance that creates true collaboration between humans and AI. While customer service representatives talk with customers, Amazon Connect analyzes conversation context and customer sentiment—not only suggesting next steps, but also actively completing tasks such as preparing documentation and handling routine processes. Customer service representatives can now focus on building relationships and handling complex situations while AI manages the background complexity, enabling them to serve more customers effectively.
AI-powered recommendations create deeper customer engagement
Amazon Connect has helped businesses personalize customer interactions through unified customer profiles that sync data from disparate applications. Now, Amazon Connect is introducing AI-powered product recommendations that turn customer conversations into opportunities for deeper engagement. By combining real-time clickstream data with rich customer history, AI agents and customer service representatives can deliver interactions with highly personalized product suggestions at exactly the right moment. Instead of waiting for customers to ask, businesses can also anticipate needs based on real-time behavior, increasing satisfaction while creating new revenue opportunities.
AI agent observability, testing, and performance evaluations
As businesses deploy more AI agents, understanding how they make decisions has become critical for maintaining quality and compliance. Amazon Connect is introducing AI agent observability that provides complete transparency—showing you what the AI understood, which tools it used, and how it reached its decisions. This visibility helps you optimize performance, ensure compliance, and build confidence in your AI-powered experiences. Amazon Connect enables businesses to test workflows before going live and evaluate both AI and customer service representative performance with automated assessment, custom criteria, and aggregated insights. Businesses can now confidently deploy AI agents at scale, knowing they have full visibility and control over every customer interaction.
AWS Interconnect – multicloud preview begins with Google
Previously, customers connecting different cloud workloads faced a choice: use public connectivity with no bandwidth guarantees or build complex private connectivity.
TwelveLabs launches world’s most powerful video understanding model on Amazon Bedrock
TwelveLabs has launched Marengo 3.0, a breakthrough video foundation model now available through Amazon Bedrock. Unlike traditional models that analyze video frame-by-frame, Marengo 3.0 understands video as a complete, dynamic system—connecting dialogue, gestures, movement, and emotion across time to deliver truly human-like comprehension at AI scale.
Marengo 3.0 addresses a critical business challenge: 90% of digitized data is video, but most of it remains unusable because it’s too time-consuming for humans to analyze and previous AI models couldn’t grasp everything happening on screen. Marengo 3.0 changes that by compressing audio, text, movement, visuals, and context into something that can be searched, navigated, and understood at enterprise scale.
The model delivers immediate business value with a 50% reduction in storage costs and 2x faster indexing performance. It offers industry-first capabilities including team and player tracking for sports, composed multimodal queries that combine image and text, and support for four-hour videos across 36 languages.
“Video represents 90% of digitized data, but that data has been largely unusable,” said Jae Lee, CEO of TwelveLabs. “Marengo 3.0 shatters the limits of what is possible.”
Trane Technologies and AWS accelerate building decarbonization for Amazon Grocery
“At Trane Technologies, sustainability is at the core of everything we do. This strategic collaboration demonstrates how sustainable solutions can drive strong returns while benefiting the planet,” said Riaz Raihan, SVP and chief digital officer of Trane Technologies. “Together, we’re not only transforming these fulfillment centers but also driving meaningful progress towards Amazon’s business objectives and bold sustainability goals.”
The results have exceeded expectations, with pilot sites achieving energy-use reductions of nearly 15%. Following this success, deployment is planned for the remaining Amazon Grocery fulfillment and distribution centers across more than 30 sites in the U.S., and pilots in grocery stores are planned to begin in 2026.
“At Amazon, we’re continually looking for data-driven, scalable solutions to reduce our carbon footprint while maintaining operational excellence,” said Christina Minardi, vice president of Worldwide Grocery Stores Real Estate and Store Development at Amazon. “By working with Trane Technologies and the BrainBox AI team, we’re turning our buildings into intelligent systems that learn and adapt, helping us meet both our sustainability and performance goals in real time.”
AWS powers Sony’s enterprise AI and engagement platforms
The Engagement Platform will connect Sony’s diverse portfolio of businesses—from electronics to PlayStation games to music, movies, and anime—making it easier for fans to discover and enjoy content across all of Sony’s offerings. Behind the scenes, Sony Data Ocean, running on AWS, helps Sony understand what fans enjoy and delivers more personalized experiences by processing up to 760 terabytes of data from more than 500 different sources across Sony’s businesses.
The platform will extend core functions of the PlayStation infrastructure such as accounts, payments, data capabilities, and security to create seamless experiences across Sony’s entertainment services.
WRITER and AWS bring enterprise-grade security and flexibility to AI agents
WRITER, a leader in enterprise AI, is making it easier for companies to build and manage AI agents securely at scale. Through a new integration with Amazon Bedrock, WRITER customers can now access a wide variety of leading AI models directly within WRITER’s platform—alongside WRITER’s own Palmyra family of models—all under unified governance and security controls.
The integration gives enterprises the flexibility to choose the best models for their needs while maintaining the security and compliance standards they require. Companies like Vanguard, Mars, and AstraZeneca can now deploy Amazon Bedrock models within WRITER’s prebuilt or custom agents, with Amazon Bedrock Guardrails and WRITER’s observability tools connecting seamlessly to WRITER’s AI Studio.
WRITER also unveiled a new agent supervision suite that acts as a control center for enterprise AI. The suite gives IT teams full visibility into how agents are being used, with capabilities including detailed monitoring of user and agent behaviors, centralized approval workflows before deployment, and integration with existing security platforms. These controls help organizations scale AI confidently without sacrificing oversight.
Adobe and AWS team up to reshape creativity and marketing in the AI era
Adobe and AWS are transforming how people create and connect with audiences through artificial intelligence. Announced onstage at AWS re:Invent by Adobe CEO Shantanu Narayen, this collaboration leverages AWS’s cloud infrastructure and services—from generative AI model training to AI agent deployment—to help Adobe deliver cutting-edge AI tools to creators, marketers, and businesses worldwide.
AWS is the engine behind Adobe’s most innovative features. Adobe Express uses AWS AI capabilities for conversational editing that makes design intuitive. Adobe Acrobat Studio taps Amazon Bedrock to bring personalized AI assistants to PDFs. And Adobe Firefly—Adobe’s commercially safe generative AI—trains its text-to-image and text-to-video models on AWS’s advanced EC2 P5 and P6 instances, enabling creators to bring ideas to life instantly.
For marketers, AWS enables Adobe to orchestrate personalized customer experiences at an unprecedented scale. Adobe Experience Platform allows brands to unify real-time data and deliver standout experiences across every channel. Adobe GenStudio for Performance Marketing now activates display ads directly with Amazon Ads, dramatically shortening campaign launch times.
Dartmouth becomes the first Ivy League institution to deploy AI campuswide with AWS and Anthropic
“We look forward to empowering Dartmouth, in partnership with Anthropic, as they continue to approach AI ethically, strategically, and securely to provide transformational student experiences and operational excellence,” said Kim Majerus, vice president of Global Education and U.S. State and Local Government at AWS.
Dartmouth will use Amazon Bedrock to build custom AI applications for campus operations and student services, with AWS’s Digital Innovation Team providing direct support using their working backwards methodology. Comprehensive training and support will ensure community members can take advantage of the tools that best meet their needs.
“This is more than a collaboration,” said President Sian Leah Beilock. “It’s the next chapter in a story that began at Dartmouth 70 years ago. This collaboration will ensure that the institution where the term AI was first introduced to the world will also demonstrate how to use it wisely in pursuit of knowledge.”
“Dartmouth has always understood that technology is most powerful when it’s paired with human wisdom and critical thinking—and that’s exactly how we built Claude,” said Daniela Amodei, president and co-founder of Anthropic.
Bonterra and AWS launch mobile-friendly giving hub to simplify nonprofit donations
The Giving Hub addresses a critical challenge facing nonprofits: meeting donor expectations for simple, transparent giving while managing diverse donation types. By integrating Bonterra’s GiveGab solution with Amazon Business technology and AWS cloud infrastructure, the platform enables donors to discover organizations, browse nonprofit wish lists, and make cash contributions—all through a single, branded interface.
“Nonprofits are under tremendous pressure during the holidays, and donors increasingly expect giving to be simple, transparent, and meaningful,” said Ben Cohen, chief revenue officer at Bonterra. “This partnership is removing long-standing barriers in the giving process and unlocking new ways for donors to make an impact.”
For nonprofits, the platform streamlines operations by enabling custom wish list creation, facilitating cash donations, tracking in-kind contributions, and simplifying fulfillment processes.
This builds on Bonterra and AWS’s partnership to develop AI-powered, cloud-native solutions for the social good sector, including Bonterra Que—the first fully agentic AI platform built specifically for nonprofits, foundations, and CSR teams.
Lyft brings agentic AI to drivers
“The future of customer service has fundamentally shifted through AI,” said Ameena Gill, vice president of Safety and Customer Care at Lyft.
Nissan accelerates software-defined vehicle innovation with AWS
The results speak for themselves: testing that’s 75% faster than before and a unified environment where more than 5,000 developers across the globe can collaborate effortlessly. Teams can now update vehicle features more quickly and work together across international boundaries like never before.
“Software development for SDVs is an extremely important strategy for Nissan,” said Kazuma Sugimoto, general manager at Nissan. “The Nissan Scalable Open Software Platform is key technology that enables us to rapidly deliver innovative value to customers.”
Looking ahead, Nissan isn’t slowing down. The company is planning to integrate more AI capabilities, including an advanced version of their ProPILOT system for complex driving environments by 2027.
Visa collaborates with AWS to deliver secure agentic payments
The companies will also publish open blueprints on the Amazon Bedrock AgentCore public repository to help developers create intelligent agentic workflows for retail shopping, travel booking, and payment reconciliation. These blueprints will enable AI agents to handle complex, multi-step transactions—from product discovery and price comparison to secure checkout and order tracking.
“Visa Intelligent Commerce is designed to be the trust layer for the agent economy,” said Rubail Birwadker, SVP and global head of Growth at Visa. “With AWS’s scalable cloud capabilities and Visa’s global payment network, Visa Intelligent Commerce enables AI agents to transact securely and contextually at scale.”
The collaboration brings together industry partners including Expedia Group, Intuit, lastminute.com, Eurostars Hotel Company, and others to review blueprint designs spanning travel, retail, and B2B payments. For example, users could instruct an AI agent to “Buy me basketball game tickets if the price drops below $150,” and the agent would execute those tasks on behalf of the user.
BlackRock teams with AWS to deliver Aladdin on secure, scalable, performant cloud infrastructure
Aladdin on AWS enables clients to leverage advanced risk modeling, enterprise-grade analytics, and smart investment decision-making capabilities while benefiting from AWS’s proven track record running mission-critical financial services workloads for over 20 years.
“By expanding Aladdin to AWS, we are giving clients more choice in where and how they deploy their technology ecosystem,” said Sudhir Nair, senior managing director and global head of Aladdin at BlackRock.
“With Aladdin running on AWS, clients gain access to secure, scalable, and resilient infrastructure for advanced risk modeling, enterprise-grade analytics, and smart investment decision-making while maintaining the highest security and resiliency standards,” said Scott Mullins, managing director of Worldwide Financial Services at AWS.
General availability for Aladdin Enterprise clients hosted in the United States is expected in the second half of 2026.
Deepgram brings advanced speech AI capabilities to AWS
Deepgram is delivering streaming speech-to-text, text-to-speech, and voice agent capabilities to Amazon SageMaker AI and integrating its enterprise-grade speech technology with Amazon Connect and Amazon Lex. Together, these integrations enable customers to build and deploy voice-powered applications with sub-second latency while maintaining the security and compliance benefits of their AWS environment.
Deepgram clients now have the option to deploy real-time speech capabilities across AWS services—from contact centers to custom voice applications—providing choice in their technology environment without compromising the performance and reliability that make Deepgram a trusted solution for enterprise voice AI.
“By bringing our streaming speech models directly into SageMaker, enterprises can deploy speech-to-text, text-to-speech, and voice agent capabilities with sub-second latency, all within their AWS environment,” said Scott Stephenson, CEO at Deepgram.
“Integrating Deepgram’s advanced speech technology with Amazon Connect enables organizations to build voice interactions that understand context and respond with appropriate pace and tone, transforming automated interactions into opportunities for deeper customer relationships,” said Pasquale DeMaio, VP of Amazon Connect at AWS.
