Access your first agent in minutes: Announcing new features for Amazon Bedrock AgentCore



Running an agent requires resolving a long list of infrastructure concerns before you can test the suitability of the agent itself. Connecting frameworks, storage, authentication, and deployment pipelines means that by the time the agent handles its first real task, you've spent days on infrastructure rather than agent logic.

We built AgentCore from the ground up to let developers focus on building agent logic instead of backend plumbing, working with the frameworks and models they already use, such as LangGraph, LlamaIndex, CrewAI, and Strands Agents. Today, we're introducing new features that further streamline the agent building experience and remove infrastructure barriers that slow teams down at every stage of agent development, from the first prototype to production deployment.

Go from idea to agent execution in 3 steps

Every agent has an orchestration layer that includes a loop that calls the model, decides which tools to call, returns results, manages the context window, and handles failures. To run that loop, you need some infrastructure underneath it. That means compute to host your agents, a sandbox to safely run your code, secure connectivity to your tools, persistent storage, and error recovery. This infrastructure forms the agent harness and allows the agent to actually run.
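The orchestration loop described above can be sketched in a few lines. This is an illustrative, framework-agnostic sketch, not AgentCore's implementation: `call_model` and the `TOOLS` registry are hypothetical stand-ins for a real model endpoint and tool set.

```python
# Minimal sketch of an agent orchestration loop: call the model, decide
# which tool to run, return results, manage the context window, handle
# failures. `call_model` and TOOLS are hypothetical stand-ins.

TOOLS = {
    "add": lambda a, b: a + b,
}

def call_model(messages):
    """Stub model: requests the `add` tool once, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "add", "args": {"a": 2, "b": 3}}
    return {"answer": f"The result is {messages[-1]['content']}"}

def run_agent(task, max_turns=5, max_context=20):
    messages = [{"role": "user", "content": task}]
    for _ in range(max_turns):                       # bound the loop
        reply = call_model(messages[-max_context:])  # manage the context window
        if "tool" in reply:                          # the model chose a tool
            try:
                result = TOOLS[reply["tool"]](**reply["args"])
            except Exception as exc:                 # surface tool failures to the model
                result = f"error: {exc}"
            messages.append({"role": "tool", "content": result})
        else:
            return reply["answer"]                   # final answer
    raise RuntimeError("agent did not finish")

print(run_agent("What is 2 + 3?"))  # → The result is 5
```

Everything below the loop (compute, sandboxing, connectivity, storage, recovery) is what the harness provides.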

Until now, building that harness has been something every team has had to do from scratch. Choosing a framework, writing orchestration code, connecting to tools and memory, and wiring up authentication all happen before the agent can process a single request. Although this work is necessary, it is not the work that determines whether an agent is helpful. Most teams we worked with spent days on this infrastructure before running their first real test.

AgentCore's new Managed Agent Harness feature replaces that upfront build-out with simple configuration. You declare an agent without writing any orchestration code and run it in just three API calls. The declaration defines what the agent does: the model it uses, the tools it can invoke, and the instructions it follows. AgentCore's harness stitches together compute, tools, memory, identity, and security to create a running agent that can be tested in minutes. Trying a different model or adding tools is a configuration change, not a code rewrite. You can test several variations of your agent in minutes by changing API parameters on the fly.
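To make the configuration-over-code idea concrete, here is a local sketch of an agent expressed as a plain declaration. The field names, model identifiers, and tool names are illustrative assumptions, not the actual AgentCore API; the point is that swapping a model or adding a tool touches data, not orchestration code.

```python
# Hypothetical sketch: an agent declared as pure configuration.
# Field names, model IDs, and tool names are illustrative assumptions.

agent_config = {
    "model": "example.model-a",                       # hypothetical model ID
    "instructions": "You are a support agent. Be concise.",
    "tools": ["search_orders", "issue_refund"],       # hypothetical tool names
}

def validate_config(cfg):
    """Check the three pieces a declared agent needs: model, instructions, tools."""
    missing = {"model", "instructions", "tools"} - cfg.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return cfg

# Trying a different model is a one-field change, not a rewrite.
variant = {**validate_config(agent_config), "model": "example.model-b"}
print(variant["model"])  # → example.model-b
```

In AgentCore, the harness consumes a declaration like this and provisions the runtime around it; the original configuration stays untouched when you fork a variant.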

This speed doesn't come at the expense of flexibility. AgentCore's harness is built on Strands Agents, an open source framework from AWS. If you need custom orchestration logic, specialized routing, or multi-agent coordination, switch from configuration to a code-defined harness using the same platform, the same microVM isolation, and the same deployment pipeline. AgentCore maintains session state in a persistent file system, so agents can pause in the middle of a task and resume exactly where they left off. This makes human-in-the-loop patterns practical without custom plumbing, and without redesigning the agent later. Get started in minutes, then add capability and control as your needs evolve without rebuilding.
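The pause-and-resume pattern boils down to checkpointing session state to durable storage. The sketch below shows the idea with a local JSON file; the checkpoint format is an assumption for illustration, since AgentCore manages session persistence for you.

```python
import json, os, tempfile

# Sketch of pause/resume via a persistent checkpoint file.
# The checkpoint schema here is an illustrative assumption.

def save_checkpoint(path, state):
    with open(path, "w") as f:
        json.dump(state, f)

def load_checkpoint(path):
    with open(path) as f:
        return json.load(f)

ckpt = os.path.join(tempfile.mkdtemp(), "session.json")

# The agent pauses mid-task, e.g. while waiting for human approval.
save_checkpoint(ckpt, {"step": 3, "pending": "await_human_approval"})

# Later (possibly in a new process), it resumes exactly where it left off.
state = load_checkpoint(ckpt)
print(state["step"])  # → 3
```

Because the state survives the process, a human reviewer can take hours to approve without the agent losing its place.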

“We are building an AI agent that will revolutionize e-commerce,” said Rodrigo Moreira, VTEX Vice President of Engineering. “Previously, prototyping a new agent required orchestration code and infrastructure setup that would take days before validating the idea. AgentCore’s harnessing capabilities change that. Replacing a model, adding a tool, or adjusting an agent’s instructions is now a configuration change rather than a rebuild. We look forward to further accelerating agent development with these new features, as we can now validate agent ideas in minutes instead of days.”

Build, deploy, and operate agents from the same terminal

Once your agent is working, the next step is running it in production. This usually means leaving the editor, setting up a deployment pipeline, configuring an environment, and piecing together a process that bears no resemblance to the workflow you used to build the agent in the first place.

With the new AgentCore CLI, you get a single workflow for prototyping, deployment, and operation across the entire lifecycle, all from the terminal you're already working in. Iterate on the agent locally and deploy it when it's ready without switching tools or building another pipeline. AgentCore powers deployment through infrastructure as code (IaC), with AWS CDK support today and Terraform coming soon, so agent configurations are reproducible and versioned. What you test locally is exactly what runs in production.

Provide proper context to coding agents

Throughout the agent development process, most developers will work with a coding assistant such as Claude Code or Kiro. However, a coding assistant’s effectiveness depends on the context it has. A general-purpose MCP server allows access to APIs and documentation, but it doesn’t encode important opinions, such as which patterns to use, how to combine features, or what the recommended paths are for common tasks. AgentCore’s new pre-built skills go beyond raw API access. These provide coding agents with curated, up-to-date knowledge of AgentCore best practices, so the recommendations they receive reflect not only what endpoints exist, but also how the platform is intended to be used. Kiro already includes this as built-in Power, with plugins for Claude Code, Codex, and Cursor coming soon. In a rapidly evolving platform, when coding agents have accurate context, they make fewer mistakes from the first line of code.

Let’s get started

The Managed Agent Harness in AgentCore is available in preview today in four AWS Regions: US West (Oregon), US East (N. Virginia), Asia Pacific (Sydney), and Europe (Frankfurt). The AgentCore CLI and the persistent agent file system are available in all AWS Commercial Regions where AgentCore is offered. Coding agent skills are expected to be available by the end of April. You pay only for the resources you use, with no additional fees for the CLI, harness, or skills (learn more on the AgentCore pricing page). Visit the AgentCore documentation to get started.

These features allow you to stay focused on your agent’s logic without worrying about infrastructure setup. As your agent evolves, add evaluation, memory, tool connectivity, and policy enforcement without redesigning it. The platform you build your prototype on is the same one you run in production.


About the author

Madhu Parthasarathy

Madhu Parthasarathy is the GM of Amazon Bedrock AgentCore and has over 20 years of expertise in building large-scale distributed infrastructure. Madhu has been with Amazon for over 16 years, leading several initiatives at Amazon Retail, Elastic Block Store, and most recently AgentCore. Madhu has also held leadership positions at other companies, including LinkedIn, where he led the enterprise platform that powers all of LinkedIn's enterprise businesses, and a neocloud startup, where he led AI infrastructure and drove its vision for security and developer experience. Madhu is currently based in Santa Clara, California.


