The technology sector of the 2020s is a fascinating and complex beast. Innovation remains its primary driver, as seen in real time in the new AI arms race. But over the past decade, regulation has become the new new thing in technology.
More than 26,000 people have now signed the Future of Life Institute’s open letter calling for a six-month moratorium on the development of AI systems more powerful than GPT-4. Every day, new academic papers ponder the right combination of design standards, frameworks and guardrails to ensure that AI nourishes the next positive evolution of humanity rather than destroys it.
Enter the word “responsible”. Just as the letter “e” was prefixed to company names during the first dot-com boom, “responsible” is now the prefix of choice for thousands of new non-profit and for-profit ventures, all claiming to make the generative AI boom safe for human application and consumption.
Applying the precautionary principle to AI development is a major turning point for a sector that has always prided itself on funding and launching new companies with minimum viable products. For well over a generation, big tech platform companies have spent hundreds of millions of dollars lobbying governments and regulators.

So how did we get here?
OpenAI’s ChatGPT and other large language models (LLMs) have fundamentally changed the way we think about AI: what it can and cannot do, and how future and competing versions will impact work, education, social cohesion and more.
“You need a minimal understanding of data and AI,” said Dr Ian Oppermann, the NSW government’s chief data scientist, speaking at the recent launch of the Australian Computer Society’s research publication Data & the Digital Self. “If AI is the next application for data, it needs to be secure and we need to understand the absolute basics.
“We need to understand AI the way we understand electricity. Electricity and data are very similar. You don’t put your tongue in the socket or a fork in the toaster. We all know that. But anyone can change a light bulb and make a piece of toast.”
The ACS publication Data & the Digital Self is a compelling voice on the data-related challenges facing Australia in a world that is not only hyper-connected and digital but increasingly powered by AI. Its authors deal thoughtfully with thorny issues of trust, data sharing, privacy and legal frameworks.
The power of AI, and the pace and scale of its current development, is rightly unsettling for many. Because AI is a platform technology, it will be integrated into everything from search, education and media to robotics, health, insurance and banking applications, to job search and recruitment.
AI can destabilize labor markets and political systems, concentrating vast amounts of power in the hands of a few unelected companies.
Moreover, all this massive change is happening at a time when trust in algorithms, institutions, media, science and one another is collapsing.
Over the past decade in Australia alone, scandals and royal commissions into robodebt, banks, aged care and disability services, child sexual abuse, mental health, natural disaster and emergency management, energy, and trade union governance and corruption have steadily eroded public trust.
These are times when everyone and everywhere seems to be feeling uneasy in unison. Add an unchecked platform layer of networked AI to that mix and things can quickly go from bad to worse.
“Many of the risks we see today do not come from ultra-advanced AI prediction systems. They are the result of scaling up simple automation, as we saw with robodebt,” said Kimberlee Weatherall, a law professor at the University of Sydney and chief investigator at the ARC Centre of Excellence for Automated Decision-Making and Society.
“A simple checklist is fine while a single bureaucrat can manage and apply the checks, and use their discretion as to whether they make sense in a particular context. Scaled up, it can cause sudden and uncontrollable damage.”
Max Tegmark, a leader of the US movement to pause AI development, has made the point bluntly.
History shows that worrying about new technology is nothing new. Nor is it unfounded. Gutenberg’s printing press ushered in the Renaissance and the Enlightenment, but before we got there, Europe endured more than a century of war. During the early years of the Industrial Revolution, the Luddites were skilled weavers fighting the unemployment and de-skilling brought by mechanised looms. Today, Amazon workers protest the harsh conditions of highly automated warehouses, and several US states have already restricted the use of AI in recruitment and talent acquisition.
Along the way, thousands of books and articles have been written about the loss of human agency, purpose, and privacy, and what it means to be human in an increasingly technology-driven world.
And so every wave of world-changing technology has eventually been regulated: railroads, food, drugs, cars, airplanes, nuclear technology, guns and biological weapons.
AI will be regulated too. It is a question of time and scope, with many competing agendas colliding globally and locally.
Benedict Evans, a UK technology sector analyst, has identified the main historical approaches to regulation. Most amount to a reasonable “best effort”, like trying to prevent fraud and terrorism. Some things are simply impossible, like banning inflation, or blocking all misinformation and hate speech at once.
Designing and enforcing effective and equitable regulation has always been a challenge. Regulating AI will stretch and challenge the best minds long before anything like clarity and consensus becomes available.
Most observers, technical and otherwise, would argue that current thinking on AI regulation is aimed squarely at the reasonable “best effort” category. Privacy, transparency and consent are the fundamental pillars of the debate.
But this is just the beginning, and a myriad of conflicting agendas and cultural clashes are already underway. In tech startup terms, the state of AI regulation is pre-seed: too many visions and no viable product in sight. Strap in. It will be a wild ride.
Do you know more? Contact James Riley via email.
