Cambridge, Massachusetts – AI is changing at a fast pace. From advances in agent technology and robotics to evolving regulatory landscapes and shifting public perception, it can be difficult for leaders to stay on top of AI and track its progress.
MIT Technology Review is no stranger to new technology. For the first time, at its EmTech AI conference this week, the publication unveiled a list of 10 things that matter in AI in 2026: a compilation of AI advances, trending topics, and shifting sentiments.
In the session “Exclusive First Look: The 10 Things That Matter in AI Right Now,” MIT Technology Review executive editors Amy Nordrum and Niall Firth walked through each of the 10 noteworthy topics and briefly unpacked what they mean in the larger AI context. Here are the 10 topics to watch:
Humanoid data. By training humanoid robots on LLM-inspired data, companies are betting that future humanoids will outperform humans at certain tasks.
LLM+. Everyone is asking, “What comes after the LLM?” The answer may be more LLMs, or what MIT Technology Review calls LLM+. Future LLMs may tackle more complex, multi-part problems using advances in mixture-of-experts models and larger context windows.
Supercharged scams. From phishing to exploiting security bugs, AI is making fraud easier to commit.
World models. These will continue to gain traction as viable methods for training robots and agents to operate in real-world environments.
The new war room. Militaries are increasingly leveraging AI for tasks such as assessing political sentiment and creating threat intelligence reports. More advanced examples could include chatbots that offer military strategy after training on sensitive data.
Weaponized deepfakes. AI-generated deepfakes are becoming more convincing and dangerous, often targeting women and minorities or being used for political purposes. Many experts believe the social effects may be lasting.
Agent orchestration. Recent advances in agentic AI suggest that orchestration, in which multi-agent ecosystems and networks collaborate on complex tasks, is the next big thing.
China’s bet on open source. The wave of excitement around DeepSeek may have died down, but it spawned a chain of open-source AI products. This approach is fundamentally different from how AI vendors in Silicon Valley operate.
Artificial scientists. AI for science is rapidly gaining popularity, and in the future, agents could generate hypotheses, perform experiments, and act as scientists themselves. As OpenAI recently told MIT Technology Review, this is the company’s new “North Star.”
Resistance. People are losing their jobs to AI, are fed up with data centers and their effects, and fear the ethical implications of AI. This is fueling growing resistance and protests against the technology.
EmTech AI used an interactive wall to let attendees share their thoughts on the top 10 things to watch in AI. The wall quickly filled with excitement, theories about what will happen next, and anxiety about how these topics will impact society.
The following interview took place at MIT Technology Review’s EmTech AI conference. Nordrum and Firth discussed how the top 10 list came about and which trends and advances they are particularly focused on. They also explored several topics with significant societal implications, including AI fatigue, weaponized deepfakes, and guardrails that are failing to keep pace with innovation.
Editor’s note: The following interview has been edited for length and clarity.
Identifying a “top 10” is difficult for anything, let alone AI. How did MIT Technology Review distill the major AI developments to watch in 2026?
Amy Nordrum: We make a lot of 10-item lists, so we’re used to it. We publish an annual list of 10 breakthrough technologies, and we also compile a list of top climate tech companies each year. I like doing exercises like this because it’s a different approach from our regular editorial reporting. It forces us to think differently about all the technologies we cover, to step back from day-to-day news, and to consider what really has the biggest impact.
There’s so much going on with AI that I knew it would be useful to do the same exercise here. We have an incredibly strong team of AI reporters and editors working on this story every day. We started collectively brainstorming about what’s happening in AI: the overall trends, the latest advances, and the most important ways the technology is evolving that people should know about.
Are there any particular trends or developments that you think are particularly noteworthy?
Niall Firth: For me, it’s LLM+. We created this term because what everyone wants to know is what comes after the LLM. Three of our editors and reporters were debating different approaches to what comes next, and as we talked, it increasingly seemed like these advances were being bolted onto the LLM. That will be the next evolution. So we refined the idea and came up with our own interpretation of what comes next.
Nordrum: One of my favorites on the list was China’s open-source bet, because we’ve all heard about DeepSeek and its big moment. But with that item, our China reporter, Caiwei Chen, was talking about everything that’s happened since then, updating people on the state of the industry and how it’s not just about DeepSeek and that one model. There’s a wave of companies building open source in China and exporting that approach to many other parts of the world.
“[AI fatigue] reflects how many people feel about AI and perhaps have a hard time expressing it in words.”
Amy Nordrum, executive editor, MIT Technology Review
Those are becoming some of the most popular models. Open-source model families made in China are very important for AI development in many other parts of the world, and it’s a very interesting and different strategy from what American tech companies are pursuing. That item became my favorite because it summed up a whole trend and a big development that people might have heard of but would like to know more about.
Also, there’s a term that’s not on the list, but I believe it appears in Mat Honan’s essay about this concept: AI fatigue. It captures something in the moment. It reflects how many people feel about AI and perhaps have a hard time putting into words. People are overwhelmed and frustrated at the same time. There’s something bigger going on there.
And the term resistance captures a smaller number of people speaking out and demonstrating against AI. But I think fatigue is more of a mass emotion that probably more people are experiencing. It doesn’t necessarily drive them to protest; they’re just checking out. It’s almost the exact opposite reaction. It was an interesting conversation, and it resonated with me, hearing people talk about AI and the general zeitgeist we’re in right now.
I’m glad you brought up AI fatigue, because I empathize with it too. What I’m wondering is: how much attention should business leaders pay to both AI resistance and AI fatigue? How might these affect enterprise settings?
Nordrum: There was a huge wave of enthusiasm, and many people felt they had to try something at work. If you lead a company, some of your employees may be feeling this way right now: they’ve tried it, they’ve done their best to make it work, but it doesn’t do what they wanted, or it’s not as easy as promised. It helps to be aware of that.
I was talking to an [EmTech AI] attendee earlier who said it’s becoming more common for people working in AI, especially software developers, to talk openly about the problems they face. It used to be something people almost never discussed, because it might reflect negatively on their own skills, as if talking about the technology’s limitations meant they lacked ability. So I think the conversation is starting to feel more open.
Admittedly, this may just be part of the technology lifecycle. Everyone may have moments where they become a little detached, but then the tools improve and people find better ways to use them. Ultimately, it may actually help us do our jobs and get things done. I don’t know, but I think it’s healthy to acknowledge that maybe that’s where people are right now, that not everything is going well, and that AI isn’t going to solve every problem immediately.
Amy Nordrum and Niall Firth presented “The 10 Things That Matter in AI Right Now,” including people’s growing fatigue with AI and resistance to it.
One of the things I wanted to hear about was supercharged scams and weaponized deepfakes. What is it about these two trends that earns each its own space on such a carefully curated list?
Nordrum: That’s interesting. I think one is like a subset of the other.
Firth: Deepfakes directly target people, especially women, and are often used for political reasons. Deepfakes are about creating something for a specific end goal, whereas scams are simply low-level fraud that tricks people out of money.
We’ve been told for years that if we don’t do something, this is what will happen, and if we don’t prepare for it, this is what many people’s lives will be like.
Niall Firth, executive editor, MIT Technology Review newsroom
Nordrum: Yes, and some extreme scams do include deepfakes, like the one where a CFO was impersonated and his company was defrauded of money. Or the stories I’ve heard where people get a phone call and hear the distressed voice of a loved one on the other end saying something like, “Please send money, I’m in jail.” They hear it in the voice of their loved one, but it’s fabricated, in that case an audio deepfake.
However, there’s another type of powerful scam that doesn’t involve deepfakes at all, for example, better phishing emails, or using these models to find bugs in software code and exploit those vulnerabilities. There is some overlap, but each category also has parts that have little to do with the other. And in the case of deepfakes, often it’s simply harassment rather than fraud.
Firth: We’ve been talking about this issue for years, before this particular AI cycle came along. We talked about deepfakes made by ex-boyfriends, especially deepfakes of women. Obviously, now it’s incredibly easy, much more realistic, and ubiquitous. So it’s just targeted harassment. We’ve been told for years that if we don’t do something, this is what will happen, and if we don’t prepare for it, this is what many people’s lives will be like. And just as we were told, it’s happening now.
Many of the 10 developments on this list require guardrails to mitigate negative impacts. Do you think AI is advancing faster than the necessary guardrails can be built?
Nordrum: Yes, this is common when it comes to technology.
Firth: It seems relatively easy to put guardrails on many of these things; it’s more a matter of political will. Tech companies have a lot of say and do a lot of lobbying and political pressure. Europe has quite a lot of regulation around these things, especially chatbots and deepfakes, but that only goes so far from a global perspective, and most of the companies doing this are American. Sure, it’s not that difficult to solve, but no one really wants to do what needs to be done.
Olivia Wisbey is a site editor in Informa TechTarget’s AI & Emerging Tech group. She has experience covering AI, machine learning, and other emerging technologies.