It’s now clear that AI needs government-crafted frameworks



On May 16, the U.S. Senate Judiciary Subcommittee held a hearing on the oversight of artificial intelligence, inviting three key witnesses: Sam Altman, CEO of OpenAI; Christina Montgomery, IBM Chief Privacy and Trust Officer; and Gary Marcus, professor emeritus at New York University.

The hearing signaled a clear push for AI regulation by the U.S. government, and it is also highly relevant to the wider international community. The explosion of large language models (LLMs) such as ChatGPT could replace many white-collar jobs (Goldman Sachs estimates that generative AI could automate a quarter of current work in the U.S. and Europe), and many experts are concerned that advanced versions of AI could even threaten human existence.

Industry leaders rarely welcome regulation, but key figures, including Sam Altman, are urging governments to work together to prevent the technology from being abused. Professor Gary Marcus, also the founder of Geometric Intelligence, argued that LLMs should be regulated to increase transparency and avoid potential subtle manipulation.

Christina Montgomery also called for transparency about what data algorithms are trained on, explaining that IBM favors precision regulation of AI.

The main concerns raised were misinformation, the need to identify AI-generated content, job losses, invasion of privacy, manipulation of individual behavior and opinions, manipulation of the political system, and copyright.

The overarching dilemma behind all these points was that if the United States overregulates itself, other countries could surpass its capabilities, raising national security concerns.

AI as a job creator

Unknown threats are difficult to regulate, but job displacement is a known one. The spread of AI is a pivotal moment for humanity. By shaping the development and application of AI, government policies can help mitigate job losses while facilitating new job opportunities that leverage the benefits of AI.

This could include “human-in-the-loop” AI systems, in which regulation encourages humans to work alongside AI in ways that complement rather than replace human judgment.

By imposing guidelines on transparency, accountability, and fairness, governments can also ensure that decisions about job displacement are not based solely on economics, and that human well-being takes precedence over profit.

Finally, governments can build social safety nets for those affected by AI automation. This includes things like unemployment benefits, job placement programs, and retraining efforts.

While these policies will save some jobs, there is an urgent need to create new jobs and upgrade the skills of existing workers. The regulation will encourage AI R&D through grants, tax incentives, and public funding, and help create new jobs in AI-related fields such as data science, machine learning, and robotics.

Government policies can promote an environment conducive to AI startup growth, while also investing in AI education and training to ensure that the workforce has the skills needed to seize these new opportunities.

Lessons from climate change?

What will international cooperation on these big issues look like?

Lightning-fast advances in science and AI require urgent global cooperation to avoid dangerous consequences. There is no time to establish new governing bodies.

Notably, the Intergovernmental Panel on Climate Change (IPCC) was mentioned as a model for an organization that could bring together scientific knowledge to inform global policy. It is a creative idea that should be pursued immediately.

Existing global schemes can be leveraged to help transition to a world partially run by AI. While there are considerable and perhaps even existential risks posed by AI, climate change is already wreaking havoc on billions of people.

We need to leverage the systems in place to mitigate and adapt to climate change to address the complex challenges surrounding AI.

The UAE is particularly well positioned for this dialogue. The appointment of the world’s first AI Minister in 2017 was a smart move in recognition of the threats and opportunities of new technologies to our lives. Other countries are expected to follow this example.

Meanwhile, my personal advice remains the same: learn how to use these models and understand how they work. Understand how their content is generated and how it can help you improve.

Fail to do so, and you are all but guaranteed to be manipulated by them.

Nancy W. Gleason

The author is an associate professor of political science and Director of the Hilary Ballon Center for Teaching and Learning at New York University Abu Dhabi.




