This article first appeared in The Edge Malaysia Weekly, in the issue of 20 April 2026 to 26 April 2026.
Just a few years ago, artificial intelligence (AI) seemed like little more than a clever toy: a chatbot that simulated intelligence by stringing together coherent sentences in response to a user’s prompts, but amounted to nothing more than a glorified search engine. It has since proven to be a remarkable tool that allows me to perform tasks I never thought possible in my lifetime.
For example, I have used AI to search online datasets, manipulate them, run statistical tests, and produce sophisticated tables and graphs, along with intelligent commentary on what the results mean, how they relate to the academic literature, and the strengths and weaknesses of the analysis. AI can accomplish in less than 30 minutes tasks that would take a research assistant days.
Today’s AI models can almost seem to read people’s minds. Unlike writing computer code, where everything must be specified precisely and there is no room for misunderstanding, you do not have to spell out exactly what you are looking for. The model will “intuit” what you want and fill in any missing details (though it is always best to check, as evidenced by the law firm that submitted a brief containing fictitious AI-generated citations). Failing that, the interface will prompt you until your query is clear.
It is comforting to think of AI as a tool that helps us all become more productive and better at our jobs. It has certainly made research more efficient. It lowers costs for entrepreneurs by providing marketing and consulting services cheaply. It allows junior customer-service agents to draw on the skills and experience of senior staff. And it enables gig workers and artisans to offer more sophisticated, technically demanding services.
Unlike many previous technologies, AI is uniquely positioned to help the unskilled and the less educated: workers at the bottom of the economy. By giving each of us greater capabilities, it potentially delivers the most meaningful benefits to those with the greatest shortcomings to begin with. It can thus function very differently from automation, whose primary effect has been to displace assembly-line, sales, and office workers.
The concern, of course, is that AI will do much more than that, with uncertain consequences. For now, I regard selecting and framing research questions as my own prerogative and my main source of comparative advantage. But I can imagine that, at some point, I will be tempted to ask the AI to generate the questions themselves. In fact, the AI tools I use already encourage me to do so: at the end of an exercise like the one I sketched above, they gently suggest fruitful avenues of inquiry that I could follow up on.
AI can displace thinking in other, more subtle ways. It is already shaping how I think about existing research. It not only summarizes what is out there, but also tells me how adjacent research relates to my work and how I should think about it. It draws connections to parts of the literature that had not occurred to me.
An even greater danger lurks here. Public discussion of AI’s impact on society has largely focused on the potential displacement of workers and the loss of jobs. But a bigger risk may be the displacement of human thinking. When we allow AI to do our thinking for us, we cross an important threshold. Our collective capacity to think declines, and so does our incentive to learn how to think. And because the line between using AI to assist our thinking and letting it think for us is already blurry, the threshold is easy to cross.
In an interesting recent paper, MIT’s Daron Acemoglu, Dingwen Kong, and Asuman Ozdaglar formalize intuitions about how such cognitive offloading can produce damaging consequences. They ask what happens when AI models become very good at providing the context-specific knowledge that helps people perform the particular tasks they are working on. Such outputs enable people to achieve better outcomes, even as they learn less.
But there is a problem, because knowledge generates important externalities. When you think through how to solve your own problems, you also contribute to the general stock of knowledge that helps others solve theirs. Less investment in one’s own learning therefore shrinks the stock of general knowledge. In the dystopian limiting case, general knowledge disappears entirely.
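To see the mechanism, consider a deliberately simple sketch in Python (my own illustration, not the model in the Acemoglu-Kong-Ozdaglar paper): a shared knowledge stock that depreciates each period and is replenished only by human learning effort, which falls as more thinking is offloaded to AI. All parameter values here are hypothetical, chosen only to show the logic.

```python
# Toy dynamic (illustrative sketch, not the authors' model):
#   K[t+1] = (1 - depreciation) * K[t] + effort_yield * effort,
# where effort = 1 - offload, and offload is the share of thinking
# delegated to AI.

def knowledge_path(offload: float, periods: int = 50, k0: float = 1.0,
                   depreciation: float = 0.1, effort_yield: float = 0.1):
    """Return the path of the shared knowledge stock over time."""
    effort = 1.0 - offload          # learning effort left to humans
    k, path = k0, [k0]
    for _ in range(periods):
        k = (1 - depreciation) * k + effort_yield * effort
        path.append(k)
    return path

if __name__ == "__main__":
    for offload in (0.0, 0.5, 1.0):
        print(f"offloading {offload:.0%}: "
              f"knowledge stock after 50 periods = {knowledge_path(offload)[-1]:.3f}")
```

With no offloading, the stock holds steady; with full offloading, it decays toward zero, which is the dystopian limiting case described above.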
Admittedly, this is only a theoretical possibility at this point, and depending on the assumptions we make about the strength of countervailing effects, the outcome could even be benign. But the danger is real: when we allow AI to learn and think for us, we risk diminishing our own capabilities and, ultimately, eroding the knowledge base on which AI itself relies.
Addressing these risks will require the development of social and professional norms around the appropriate use of AI. For example, researchers may need to provide detailed disclosures of how they used AI. While such disclosure could be automated by the AI tools themselves, decisions about publication and promotion should remain largely the product of human judgment. Organizations such as the Partnership on AI can help develop and disseminate general principles. And, as with almost all new technologies, new forms of government regulation will be needed as well.
A necessary condition for any such remedy is a new way of thinking about AI. Above all, the public debate needs a different framing. The question we should be asking is not what AI will do to us, but what we want it to do for us. — Project Syndicate
Dani Rodrik, professor of international political economy at Harvard Kennedy School, is a past president of the International Economic Association and the author of Shared Prosperity in a Fractured World: A New Economy for the Middle Class, the Global Poor, and Our Climate (Princeton University Press, 2025).
