by Saeed Nazakat
A Kuala Lumpur debate highlights why questioning the power, governance and outcomes of AI matters more than certainty, as global investment reaches speculative heights

Moderating is often harder than speaking. Speaking lets you follow your own train of thought; moderating requires holding multiple threads in mind at once. It means listening carefully, anticipating how the conversation will develop, and guiding it with questions without overshadowing the voices that deserve emphasis. The goal is not to control the conversation. It is to create space. Space for questions. Space for disagreement. Space where insight is born.
I had that in mind on Thursday morning as I moderated a session at the GIJN conference in Kuala Lumpur. Across from me sat four remarkable women whose work, skill, and clarity have, each in their own way, helped shape the global conversation around open data and artificial intelligence.
Each of them had a perspective honed through years of research, innovation, and fearless exploration. These are women who have spent countless hours tracing the fault lines where technology, power, and society collide.
Natalia Viana, a Brazilian investigative journalist, has spent years tracking the fault lines where technology, power, and democracy collide. Alongside her, South Africa’s Asandiwe Saba leads the AI-powered newsroom effort at Code for Africa, investigating how algorithms affect the daily lives of millions of people. Pulitzer Prize-winning British journalist Alison Killing brings the precision of an architect and the forensic rigor of a satellite investigator. They were joined by Karen Hao, whose book Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI has been received with a mix of curiosity and wariness, its ripples spreading far beyond Silicon Valley.

The conversation started with a seemingly simple question: who governs AI, and who sets its rules? But it quickly expanded into far more pressing territory. Who will truly benefit from these systems, and who will be left behind? How can AI shape public opinion and undermine the fragile trust that holds society together? Which stories about AI will be amplified, and which will remain untold? And, perhaps most disturbingly, what happens when the power to shape global information and knowledge is concentrated in the hands of a few companies, far from public scrutiny?
Later that day, the theme resurfaced in a conversation with another veteran journalist and close friend, Paula Frey, Commissioner of the inquiry into media and digital platforms at the South African Competition Commission. Talking with her reminded me of something we often forget in a world obsessed with speed and answers: sometimes the most important thing we can do is keep asking questions.
Answers feel good. They reassure us, sometimes illusorily, that the world is simple and that we understand it. Questions, by contrast, wake us up. They keep us curious. They force us to look deeper and rethink. And in a moment shaped by technology that promises certainty, control, and predictability, asking questions can also be an act of courage.
In a world of AI where everyone is chasing the next big miracle, questions are our guardrail. They stop us from giving in to the hype. They challenge us to think, not just react. And perhaps most importantly, questions draw us into a common conversation. As technological decisions increasingly shape everyday life, they create space for collective ownership, dialogue, and scrutiny.
As the global conversation unfolds about the role of AI in our lives and our shared future, one question remains: what kind of world will AI ultimately create once it is woven into the everyday fabric of society? And, relatedly, there are growing concerns about whether the current scale of investment in the field reflects its true long-term potential or signals the emergence of a speculative AI bubble.
Those questions resonate far beyond the rooms in Kuala Lumpur, and interestingly, they are now echoing within Silicon Valley itself.
Just a few days ago, in an interview with the BBC, Google CEO Sundar Pichai warned that there was an “irrational element” to the multi-trillion-dollar AI investment boom. Pichai, who is usually cautious about what he says in public, was unusually outspoken: if the bubble bursts, no company, Google included, will be immune.
While Pichai argued that the long-term impact of AI will be significant, he compared today’s enthusiasm to the excesses that preceded the early internet crash and acknowledged that “we may have gone too far.”
One thing is undeniable: we are at a pivotal moment in the development of AI. Big business and governments are amassing talent and capital in pursuit of its promise. From policy institutes in Washington and Brussels to innovation hubs in Bangalore, Nairobi and Shenzhen, AI is no longer a niche technology. It is a central driver of economic growth, social change, and international competition.
Still, this moment demands intentional, informed, and transparent conversations. Beyond the headlines and hype, we need to ask tough questions about governance, ethics, accountability, and fairness. We need to examine the impact on employment, education, healthcare, and public trust, as well as the ever-present risks of bias, discrimination, and misinformation amplified by AI. And we must ensure that the AI boom does not obscure the fragile, human reality beneath it.

As we navigate this uncharted territory, one thing is becoming increasingly clear: the real question is not what AI will do to us, but what we are prepared to let it do.
This recognition places responsibility on all of us. It underscores the need for intentional, informed, and persistent dialogue, and for transparency in the design, data, and decisions built into these systems, so that they remain subject to oversight and accountable to the public they affect.
After all, AI is a story we are writing together. And we must remember that the mirrors we build today will shape the society we inherit tomorrow.
(The author is CEO and founder of DataLeads. Views are personal.)
