Cognitive warm-up. Microsoft seems to be living in a bubble, with little recognition of, or even attempt at self-reflection about, how badly it has handled AI. From claims that 30% of Microsoft's code is written by AI, to the botched Windows 11 updates in the months that followed that broke critical features on millions of PCs. The ambition to reinvent Windows 11 as an “Agent OS” met a backlash the moment that dream was unveiled on X. I file all of this under confusion.
Adding to this never-ending story are Microsoft's Copilot and LG's webOS TV.
The TV manufacturer recently deployed Copilot to users' TVs in a way that made the AI impossible to disable or uninstall. First, LG installed it quietly, hoping no one would notice. Second, why not give users a choice? After some backlash, LG announced that users will now be able to remove the Copilot shortcut from their smart TV home screens, and to completely uninstall Copilot with the next webOS update. I fear that even after the hype dies down, we will still find remnants of AI like this.
Outlook on the past
2025 was not the year AI became intelligent, autonomous, or out of control. It was the year the excited, noisy AI industry discovered where the real bottlenecks are: trust, relevance, oversight, economics, integration, geopolitics, power. The models certainly improved. Claims about intelligence and cleverness grew louder. But reality quietly caught up, and the supposedly smartest systems on Earth were often left floundering.
OpenAI's GPT-5 and the illusion of “thinking by default”
OpenAI positioned GPT-5 as a model where reasoning became the default, with a much-hyped late-summer release. In the months leading up to it, claims of “doctoral-level” intelligence circulated regularly, promising something more sensible than anything before it. Multi-step problem solving, tool use, consistency across a project: AI had finally learned to think, not just respond.
In GPT-5, “thinking by default” refers to an architecture that uses an internal router to automatically send complex tasks to deeper reasoning models, moving toward step-by-step problem solving. The pitch emphasized unprompted agentic actions and a shift to intelligent conversations that integrate reasoning, large context windows, and multimodality.
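The routing idea can be sketched in a few lines. This is a toy illustration, not OpenAI's actual architecture: the heuristic, thresholds, and model names are all hypothetical, chosen only to show how a cheap classifier can decide between a fast model and a slower reasoning model.

```python
# Toy sketch of a complexity-based model router (hypothetical; not OpenAI's
# real design). A cheap heuristic scores each prompt, and the router sends
# high-scoring prompts to a deeper "reasoning" backend.

def estimate_complexity(prompt: str) -> float:
    """Crude proxy: longer prompts and planning keywords score higher."""
    keywords = ("prove", "step by step", "plan", "debug", "compare")
    score = min(len(prompt) / 500, 1.0)
    score += 0.5 * sum(kw in prompt.lower() for kw in keywords)
    return score

def route(prompt: str, threshold: float = 0.6) -> str:
    """Return which (hypothetical) backend the prompt would be sent to."""
    if estimate_complexity(prompt) >= threshold:
        return "deep-reasoning-model"
    return "fast-model"

print(route("What's the capital of France?"))                      # fast-model
print(route("Prove this invariant step by step, then plan a fix.")) # deep-reasoning-model
```

The point of the sketch is that the “thinking” decision happens before any model runs: a router like this makes the product feel deliberate without the underlying models changing at all.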
Reality: GPT-5 didn't invent machine reasoning so much as cover familiar ground more smoothly. What felt like “thinking” is still probabilistic inference, just better tuned and more confidently packaged. The real change is psychological: the model sounds so confident and intentional that users may have stopped questioning its output.
This is dangerous precisely because GPT-5 did its best to normalize trust, even though the AI industry is far from solving hallucination, verification, and accountability.
Microsoft Copilot integration is confusing
Microsoft's $13 billion OpenAI bet was supposed to pay off through Copilot: AI built into every Office app, the Windows OS, and every enterprise workflow. The story was simple. While competitors fought over models, Microsoft would own the AI integration layer, and charge for it through an even more expensive Microsoft 365 subscription.
Reality: Copilot has become a case study in premature, haphazard commercialization. The Copilot icon appearing twice within the same interface at certain points in Outlook on the web suggests that every team inside Microsoft has some Copilot-integration goal. I can't help blaming the company: I paid for the premium subscription, and it felt half-baked. Microsoft's rush to monetize its OpenAI trump card meant shipping these integrations before any real productivity gains were established.
There was an even bigger mistake in my book. Microsoft's exclusive position on OpenAI's models began to unravel with the Anthropic partnership announced in late 2025, which brought Claude models to Microsoft 365. The “Copilot everywhere” strategy now looks more like expensive technical debt than a necessity.
DeepSeek and the geopolitical AI battle
At the beginning of 2025, China's DeepSeek-V3 arrived as a wake-up call for AI companies around the world. Built at a fraction of the training cost that previously defined frontier models, it redefined competitive performance. Adding a geopolitical edge, it was built under chip export restrictions. The message was clear: banning chips alone will not stop AI development. For Western AI companies, there was real cause for concern. Little did we know, it was setting the tone for a rather stressful 2025.

Reality: DeepSeek exposed uncomfortable truths about AI competition. First, computational efficiency matters as much as raw scale. Second, the regulatory moat is weaker than Silicon Valley expected. Third, the global AI landscape is fragmenting at a rate no single jurisdiction can control.
Satya Nadella wrote about the Jevons paradox (the economics concept that efficiency lowers the cost of a resource, thereby increasing its consumption). OpenAI's Sam Altman argued that such competition would be “energizing.”
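The Jevons argument comes down to simple arithmetic. The numbers below are purely illustrative, not market data: if a 10x efficiency gain cuts the cost per unit of compute while demand is highly elastic, total spending on compute goes up, not down.

```python
# Toy Jevons-paradox arithmetic with hypothetical numbers: cheaper compute
# per unit, but demand grows faster than the cost falls, so total spend rises.

cost_per_mtok_before = 10.0   # $ per million tokens (illustrative)
usage_before = 1_000          # million tokens consumed

cost_per_mtok_after = 1.0     # 10x efficiency gain cuts unit cost
usage_after = 30_000          # demand grows 30x because it's now cheap

spend_before = cost_per_mtok_before * usage_before   # $10,000
spend_after = cost_per_mtok_after * usage_after      # $30,000

assert spend_after > spend_before  # cheaper per unit, yet total spend tripled
print(spend_before, spend_after)
```

This is the bull case Nadella was invoking: DeepSeek-style efficiency gains expand the market rather than shrinking it, provided demand really is that elastic.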
It's one thing to put on a collective brave face, but the hits kept coming.
By late 2025, DeepSeek was no longer just a model but proof that the AI race was truly multipolar, at a time when most AI companies and hardware manufacturers were hoping the world wouldn't see through their circular financing efforts.
The circular funding carousel
Speaking of which… it soon became clear that the AI boom was being kept afloat by a lifeboat called circular funding. As just one example, consider this: OpenAI signed a $300 billion deal with Oracle Corp. to power OpenAI's AI infrastructure. Oracle is doing that by spending billions on Nvidia chips, and Nvidia itself plans to invest up to $100 billion in OpenAI, which has pledged to use Nvidia's systems to build 10 gigawatts of data center capacity. None of this is illegal, but it is creative accounting disguised as organic growth.
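The loop described above can be traced as a tiny ledger. The $300B and $100B figures come from the article; the Oracle-to-Nvidia amount and the flow structure are a simplification for illustration. The point is that every dollar in the loop is booked as demand by someone, while no money enters from outside.

```python
# Toy model of the circular funding loop: each edge is a commitment from one
# party to another, and each recipient books the inflow as demand. Figures in
# $B; the Oracle->Nvidia amount is illustrative, the rest are from the article.

flows = [
    ("OpenAI", "Oracle", 300),   # $300B infrastructure deal
    ("Oracle", "Nvidia", 100),   # Oracle buys Nvidia chips (illustrative size)
    ("Nvidia", "OpenAI", 100),   # Nvidia invests up to $100B back in OpenAI
]

# Demand each party books from the loop vs. money entering from outside it
booked = {}
for src, dst, amount in flows:
    booked[dst] = booked.get(dst, 0) + amount

external_money_in = 0  # nothing in this loop comes from end customers

print(booked)             # each party reports healthy inflows
print(external_money_in)  # yet outside revenue entering the loop is zero
```

That asymmetry, hundreds of billions of booked "demand" against zero external revenue, is what makes the carousel look like growth from the inside.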
Reality: this isn't just about Nvidia or any particular AI company; they all seem to operate the same way. It's about an entire ecosystem where growth metrics have been beautifully decoupled from actual value creation. Everyone measures input (compute consumed) rather than output (problems solved). What happens when someone demands ROI? That is where the knife twists, and AI companies are simply betting it won't happen yet, because even in 2025 they have convinced many companies around the world that this is still the “spending money to figure out AI” stage, not the “visible AI results” stage.
AI, say hello to physics
Throughout 2024, AI discussions focused on model capabilities. By mid-2025, they had moved on to infrastructure constraints: power grid capacity, water usage, cooling limits, land permits, and of course, enormous investment (with government protection if all else fails, thank you very much).
Reality: this is the first time the AI hype train has faced non-negotiable constraints. A better model cannot help you escape a power shortage. You can't scale compute faster than regulators approve substations. The future of AI is being shaped not only by the latest model architectures, but by the power grids and climate policies it has to run on. Silicon Valley is learning that physics always gets the final say. Here's hoping common sense joins the fray in 2026.
