Last week, amid all the hype surrounding artificial intelligence (AI), an open letter written by a few prominent figures threw a bucket of cold water on the excitement. Apparently inspired by concerns raised by the Future of Life Institute, the letter warned that AI would threaten jobs and hinted at the doom that would befall us if we did not regulate it immediately.
As anyone who reads this column regularly knows, I’m bullish on AI. I believe it will be a revolutionary technology, one that will bring about the next big shift in the way society works. Like all technological changes, it will change the way we work, rendering many of the jobs that exist today irrelevant. But in their place it will create new jobs calling for fresh skills that must be learned to make the most of the opportunities it offers humanity. That is why I do not share the letter’s pessimism.
That said, there are certainly benefits to starting to think about how AI should be regulated. It will undoubtedly become a ubiquitous technology that permeates many aspects of our lives, and when that happens, much of the regulatory framework we rely on today may prove inadequate. It is never too early to start thinking about how to deal with this.
As it happens, a number of countries have been trying to do just that over the last few years. The US Office of Science and Technology Policy has published a blueprint for an AI Bill of Rights that takes a largely laissez-faire approach. It does little more than reiterate the need to protect users from unsafe and ineffective systems, ensure that AI systems are designed to be non-discriminatory, and address privacy concerns around notice and user autonomy.
Meanwhile, the European Commission has put forward a full-fledged law that sets out in great detail how “high-risk AI systems” should be regulated, requiring that only error-free datasets be used for training and imposing an obligation to maintain an audit trail for transparency. As with the General Data Protection Regulation (GDPR), violations attract fines of up to 6% of global turnover.
Both of these regulatory proposals seek to correct, based on our experience so far, what we believe is wrong with algorithmic systems. They aim to prevent the discrimination these systems perpetrate because of implicit biases in the data on which they are trained. They also seek to mitigate the privacy harms that arise when AI systems use information for purposes other than those for which it was collected, or process it without notice.
These are issues that need to be addressed, but a regulatory strategy designed to fix problems only after they occur will not help us deal with a technology that evolves as rapidly as AI. Nor will applying traditional approaches to liability.
From what we have seen of generative AI so far, it is capable of unpredictable emergent behaviour that has little to do with its explicit programming. These systems can adapt and reason far better than their developers imagined. They are also autonomous, often making decisions that bear no relation to the explicit intentions of their human creators, and frequently operating outside their control. If our regulatory solution is to hold the developers of these systems personally liable for this emergent behaviour, which is precisely what makes the technology powerful, further development will grind to a halt for fear of the liability that would be incurred.
What if there were another way? What if we took an agile approach to AI regulation, based on a set of cross-cutting principles that describe at a very high level what we expect AI systems to do (and not do)? These principles could apply to all the different ways AI is, or will be, deployed across a wide range of sectors and applications. Sector regulators could refer to them to identify harms at the margin and take appropriate corrective action before their impact becomes widespread.
This is the approach the UK government appears to be taking in its recently published white paper, “A pro-innovation approach to AI regulation”. Rather than introducing a new regulatory framework, it intends to follow an agile and iterative approach designed to learn from practical experience and adapt continuously. Recognising that rigid laws can slow innovation, it does not plan to place these principles on a statutory footing. Instead, it proposes to issue them on a non-statutory basis so that they can be implemented by existing regulators, who will use their domain-specific expertise to tailor their application to the specific contexts in which AI is used.
So far, India has refrained from regulating AI, though there have been calls from some quarters to get on with it. If we do eventually start, I would recommend following the British approach. AI has much to offer us, and we should do nothing to stifle its potential.
Rahul Matthan is a partner at Trilegal and also has a podcast called Ex Machina. His Twitter handle is @matthan.