Cambridge, Mass. — At MIT Technology Review's EmTech AI conference this week, sessions and attendee conversations alike focused on agentic AI: how we use it, how we want to use it, and how we want to manage it. Wednesday's after-lunch session drew a crowd of attendees eager to build their own agents, and attendees also interacted with a digital twin simulation of the conference.
But for all the buzz, agentic AI development still stirs concern among businesses. On Wednesday, MIT Technology Review executive editor Amy Nordrum and chief technology analyst Brian Bryson put a statement to attendees and asked whether they agreed: "AI agents will create more disruption than value over the next 12 months." Seventy percent of the people in the room raised the "Confirm" paddle.
Whether they are implementing agentic workflows to improve internal productivity, using agentic AI in external-facing products, or developing personal agents for administrative tasks, business leaders expect agentic AI to solve problems and drive revenue. But they are treading carefully, pairing their AI efforts with monitoring and human-in-the-loop capabilities.
Attendees reflected in digital twin simulation
During the conference, design and innovation consultancy IDEO ran an experiment called the Digital Twin Simulation of Agents. Mirroring the meeting agenda, the floor plan, the actual attendee list, and even the outdoor weather, the simulation showed attendee "agents" moving around the sixth floor of the MIT Media Lab.
Crowds of intrigued or anxious attendees lined up to see what their digital twins were doing in this mirrored EmTech AI world. Some agents discussed recent and upcoming sessions. Some lamented jet lag and Boston's cloudy weather. Some simply rode the elevator. Each twin spoke with its attendee's personal data in mind.
The data was pulled from EmTech AI registration information and public web searches, explained Jenna Fizel, managing director of AI and emerging technologies at IDEO. "The aim is to capture everyone's personality in enough detail to give them a basis to derive some value from their conversations with each other," she told me in an interview.
My digital twin drew on data from my recent stories and social media posts, my Informa TechTarget contributor page, and even my awards page. And I have to admit, my twin's career looked a lot like my real one.
IDEO displayed the digital twin simulation on a TV monitor at the MIT Media Lab, letting attendees inspect individual agents, including each agent's background, goals, and conversations with other twins.
Fizel explained that the simulation's architecture involves different types of actors. Each attendee was represented by an agent assigned a conversational style based on the attendee's background information. There were also director agents that shaped the conversations. The directors explicitly instructed the agent twins to express a variety of opinions and to welcome disagreement, reducing sycophancy: the tendency of agents built on large language models (LLMs) to agree excessively with one another.
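IDEO did not publish its implementation, but the director-agent pattern it describes can be sketched in a few lines. This is a minimal illustration, not IDEO's code; the class and function names here are hypothetical, and the director's only job in this sketch is to inject an anti-sycophancy instruction into each twin's prompt before it is sent to an LLM.

```python
# Hypothetical sketch of the director-agent pattern: a director shapes
# each twin's system prompt, explicitly inviting disagreement to
# counteract LLM sycophancy. Names here are illustrative, not IDEO's.

from dataclasses import dataclass

@dataclass
class AttendeeTwin:
    name: str
    background: str           # pulled from registration data / public web
    conversational_style: str

class DirectorAgent:
    """Shapes twin conversations without speaking in them."""

    ANTI_SYCOPHANCY = (
        "Express your own opinion even if it conflicts with others. "
        "Disagreement is welcome; do not simply agree to be polite."
    )

    def build_prompt(self, twin: AttendeeTwin, topic: str) -> str:
        # The director injects the disagreement instruction into every
        # twin's prompt before the LLM generates its next turn.
        return (
            f"You are a digital twin of {twin.name}. "
            f"Background: {twin.background}. "
            f"Style: {twin.conversational_style}. "
            f"Topic: {topic}. "
            f"{self.ANTI_SYCOPHANCY}"
        )

twin = AttendeeTwin("Alex", "robotics startup founder", "direct, curious")
prompt = DirectorAgent().build_prompt(twin, "agentic AI in the enterprise")
# The composed prompt would then go to whatever LLM backs the simulation.
```

In a real system, the director would also vary these instructions per agent and per conversation, which is how a single orchestrating actor can steer many twins at once.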
"This was an experiment created for this event," Fizel said. "It's meant to be a provocation about what it's like to have a twin. We all have shadow versions of ourselves online and in databases. So this is an attempt to surface that emotional response by expressing it very specifically and clearly. What's it like to be modeled? How uncomfortable does it make you feel? And what superpowers might it give you?"
What's it like to be modeled? How uncomfortable does it make you feel? And what superpowers might it give you?
Jenna Fizel, managing director of AI and emerging technologies, IDEO
Some participants highlighted the networking power of the digital twins. Their agents chatted with other attendees' agents (as a rule, not from the same company), which sometimes sparked real-world conversations. In other cases, the simulation prompted participants to ask the people around them what they thought of having twins. Some felt their twins closely resembled their true selves, but many were unsettled by twins that seemed nothing like them.
Fizel expected that some participants would be uncomfortable with the twins, so IDEO gave participants the option to remove themselves and their data from the simulation. "As far as I know, no one has opted out," Fizel said. "But I expect at least some people will, and that's totally fine. The point of doing this is to understand where that comfort level is."
Businesses tout successful agentic AI use cases
We started hearing about agentic AI in late 2024, and 2025 was quickly dubbed the year of the AI agent. Agents are now finding their way into real-world business applications.
In the session "How to Build an Agentic Workforce," ServiceNow CDIO Kellie Romack explained how her company introduced agentic AI to its internal IT service desk and shifted more than 90% of desk tasks to autonomous AI. That didn't replace the workers: 85% of them upskilled within ServiceNow and moved into higher-value work, while the rest chose to stay in their roles, managing the agents and handling the remaining 10% of tasks.
Agentic AI use cases like these are driving business productivity gains, Romack said. For example, by using agentic AI to streamline communication between the finance and sales teams, a process that used to take four days now takes about eight seconds. "We talked to humans. We reinvented the process and put agentic AI to work," she said.
Ascendion, an AI-first software engineering provider, uses agentic AI to help companies modernize their software operating models. Working with AI agents can cause significant frustration unless the right operating model is in place, Ascendion CEO Karthik Krishnamurthy said in an interview. The company's AAVA platform coordinates systems in which agents and humans collaborate.
Used this way, agentic AI significantly improves the speed, cost, and quality of results, according to Krishnamurthy. One corporate banking customer, for example, cut a 26-month project down to eight months. Another client, a healthcare company, dramatically shortened its medical registration timelines with the tool. "That's the power of agentic technology," Krishnamurthy said.
We talked to humans. We reinvented the process and put agentic AI to work.
Kellie Romack, CDIO, ServiceNow
In the "Agent Engines" session, Ash Edwards, head of forward deployment research and engineering at Poolside, described how his company enables software engineers to create more with AI agents. The agents primarily work in the background, freeing engineers to explore more scenarios.
"It's never been a more fun time to be a [software] engineer," Edwards said. Before agents, you might have an idea and five different directions to take it, but you had to choose one. Now you can try all five and explore far more seriously, he said.
Build your own AI agent
After lunch on Wednesday, the MIT Media Lab was standing room only for a live demo of building an AI agent with OpenClaw. The session, "From Prompt to Agent: Build an Autonomous AI in 45 Minutes," was led by Project NANDA researcher Maria Gorskikh and MIT Media Lab scientist Santanu Bhattacharyya.
OpenClaw is an open source AI agent that lives on a user's device and can connect to third-party LLMs to complete tasks autonomously on the user's behalf. During the session, Gorskikh walked attendees through setting up and running their own OpenClaw agents on their personal devices using maritime, a platform for cloud-hosted AI agents. She also showed how to connect the agent to NANDA NEST, an open platform for coordinating network-native AI agents, part of Project NANDA's long-term goal of building an open agentic web where AI agents collaborate securely over the internet.
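The session's actual OpenClaw configuration wasn't published in this article, but the basic shape of a device-resident agent that delegates reasoning to a pluggable third-party LLM is easy to sketch. Everything below is a generic illustration under that assumption: `LocalAgent` and `echo_llm` are hypothetical names, and `echo_llm` is an offline stand-in for a real hosted model client.

```python
# Generic sketch of a device-resident agent that plans with a pluggable
# third-party LLM. `echo_llm` stands in for a real model client so the
# sketch runs offline; names here are illustrative, not OpenClaw's API.

from typing import Callable

class LocalAgent:
    def __init__(self, llm: Callable[[str], str]):
        self.llm = llm                # any function: prompt -> completion
        self.history: list[str] = []  # kept on-device

    def run_task(self, task: str) -> str:
        # One plan step: ask the LLM for a plan and record it locally,
        # so the task history stays on the user's machine.
        plan = self.llm(f"Plan the steps to: {task}")
        self.history.append(plan)
        return plan

def echo_llm(prompt: str) -> str:
    """Offline stand-in for a hosted LLM, used so the sketch is runnable."""
    return f"[stub plan for] {prompt}"

agent = LocalAgent(echo_llm)
result = agent.run_task("summarize today's EmTech AI sessions")
```

Swapping `echo_llm` for a real API client is the only change a hosted setup would need, which is roughly the separation the session's cloud-hosting step relies on.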
Although Wi-Fi issues left some participants with agents stuck in a pending state, the number of participants with a successfully built personal agent more than doubled by the end of the session.
During the session, Gorskikh and Bhattacharyya guided participants through the agent-building process. Attendees raised their hands as their agents came online.
Agent orchestration requires monitoring and humans in the loop
Agent orchestration, in which teams of agents work together to accomplish increasingly complex goals, is one of MIT Technology Review's top 10 things to watch in AI right now. Orchestration involves many tasks, such as managing communication among agents, assigning work, and resolving conflicts.
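The orchestration tasks named above (assigning work and resolving conflicts) can be made concrete with a toy example. This is a minimal sketch, not any vendor's orchestrator: the agent names, skills, and the deterministic tie-breaking rule are all assumptions made for illustration.

```python
# Toy sketch of two orchestration duties: assigning a task to a capable
# agent, and resolving the conflict when several agents could take it.
# All names and the tie-breaking rule are illustrative assumptions.

class Agent:
    def __init__(self, name, skills):
        self.name, self.skills = name, set(skills)

    def handle(self, task):
        return f"{self.name} completed '{task}'"

class Orchestrator:
    def __init__(self, agents):
        self.agents = agents

    def assign(self, task, skill):
        # Find every agent able to take the task ...
        candidates = [a for a in self.agents if skill in a.skills]
        if not candidates:
            raise ValueError(f"no agent can handle skill '{skill}'")
        # ... and resolve the conflict with a simple deterministic rule.
        # (Real orchestrators use least-loaded or priority schemes.)
        winner = min(candidates, key=lambda a: a.name)
        return winner.handle(task)

team = [Agent("finance-bot", {"invoicing"}),
        Agent("sales-bot", {"quotes", "invoicing"})]
result = Orchestrator(team).assign("reconcile Q3 invoices", "invoicing")
```

Even in this toy form, the two hard problems are visible: routing (who can do the task) and arbitration (who should), which is exactly where production orchestration frameworks spend their complexity.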
"There's a lot of work in progress in general," said David Cox, vice president of AI infrastructure at IBM Research and IBM director of the MIT-IBM Watson AI Lab. "If you have multiple agents running around, how do you get them to do the right thing?" That, he said, is where orchestration comes in.
As companies become more dependent on these [agentic] systems, observability and understanding are going to be a really important part.
Ash Edwards, head of forward deployment research and engineering, Poolside
As agent orchestration grows more complex, enterprises must prioritize rigorous agent monitoring and human-in-the-loop strategies.
"There are a lot of interesting concerns when you have multiple agents interacting," Cox said. He pointed to ongoing research into emergent behavior among agents: new, sometimes problematic behavior that arises from interactions between agents and their environment. Given that potential, the field is working on ways to avoid unintended side effects when multiple agents pursue a goal.
"We're going to see more and more agent monitoring," Cox said. Agent monitoring includes examining both individual and collective agent behavior, identifying problems in agent interactions, and determining appropriate levels of autonomy for various functions.
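One concrete form of the individual-behavior monitoring Cox describes is anomaly flagging over an agent action log. The sketch below is a hypothetical illustration, not IBM's tooling; the event shape and threshold are assumptions chosen to keep the example small.

```python
# Sketch of individual-behavior monitoring: log each agent action, then
# flag agents whose action counts suggest a runaway loop that may need
# reduced autonomy. Event shape and threshold are illustrative.

from collections import Counter

class AgentMonitor:
    def __init__(self, max_actions_per_agent=100):
        self.events = []
        self.max_actions = max_actions_per_agent

    def record(self, agent, action):
        self.events.append((agent, action))

    def flag_runaway_agents(self):
        # An agent acting far more often than its budget allows may be
        # stuck retrying; flag it for human review.
        counts = Counter(agent for agent, _ in self.events)
        return [a for a, n in counts.items() if n > self.max_actions]

monitor = AgentMonitor(max_actions_per_agent=2)
for _ in range(3):
    monitor.record("billing-agent", "retry_payment")
monitor.record("support-agent", "close_ticket")
flagged = monitor.flag_runaway_agents()  # only billing-agent exceeds its budget
```

Collective-behavior checks would run similar rules over interactions between agents rather than per-agent counts, which is the harder half of the problem Cox's group is studying.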
Agent monitoring is a priority for many companies. "As companies become more dependent on these [agentic] systems, observability and understanding are going to be a really important part," said Poolside's Edwards. ServiceNow's Romack agreed, saying that monitoring agents through a comprehensive dashboard is essential, and that she wished her team had built it sooner rather than later.
As in many AI conversations, agents now and in the future will require humans to stay closely involved. Human-in-the-loop protocols and accountability are priorities. "It's a huge responsibility for the people building these systems to build them with great care," Edwards said. During his session, he emphasized the role humans should play in reviewing and controlling agents' actions.
A subject matter expert at an AI consulting services firm told me at the conference that keeping humans involved is essential. Especially in high-risk industries, today's agents act only as advisers; humans always have the final say and use agents to augment their own capabilities, he said.
For ServiceNow's Romack, keeping humans in the loop goes hand in hand with prioritizing employees. Agent deployments should center on people and their role in AI development: educating employees about AI, giving them access to experiment with AI tools and find their own use cases, and upskilling them into new or modified roles as agents are onboarded.
"Upskill employees [and] focus on people. People are not going away. People are important, and they are the safety net for everything," Romack said.
Olivia Wisbey is a site editor in Informa TechTarget’s AI & Emerging Tech group. She has experience covering AI, machine learning, and other emerging technologies.