Regulators around the world are devising ways to govern artificial intelligence. But the fast-moving, nebulous technology is difficult to define, and that is a hurdle for lawmakers.
What is AI?
AI is a generic term for applications that perform complex tasks traditionally done by humans. Machine learning is an important subset of AI, focused on building systems that learn from, and perform better with, the data they consume. “There’s a lot of ambiguity there, and that ambiguity will affect different behaviors,” MIT professor Aleksander Madry told Quartz.
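To make the machine-learning idea concrete, here is a minimal sketch (with hypothetical data) of a system that “learns” a single parameter from examples and then uses it to predict an unseen case, rather than following hand-written rules:

```python
# Minimal machine-learning sketch: learn one coefficient from example data.
# The data below is hypothetical and chosen to follow y = 2x exactly.

def fit_slope(xs, ys):
    """Least-squares slope through the origin: sum(x*y) / sum(x*x)."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]

slope = fit_slope(xs, ys)      # the "learned" model: y ≈ slope * x
print(slope)                   # 2.0
print(slope * 5.0)             # prediction for an unseen input x = 5 -> 10.0
```

The point of the sketch is only that the model’s behavior comes from the data it consumed, not from rules a programmer wrote, which is part of why the technology is hard to pin down in legislation.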
The AI field is changing rapidly, having moved from linear regression models that make financial forecasts to large language models that can generate new content, Madry said. Because breakthroughs are coming so quickly, there are concerns that legislation will limit the development of AI. Policymakers are not necessarily in the best position to define which parts of AI should or should not be regulated, he added.
But that doesn’t mean policymakers need to be able to define AI, said Madry, who testified before a House subcommittee in March. Even if you show engineers an algorithm’s output, they won’t be able to tell you how much of it was created by AI versus humans, reflecting the ambiguous nature of the technology, he added. That makes enacting legislation that hinges on a definition of AI a risky business. “We don’t want the system to do bad things, but it’s difficult,” he said. “It’s very difficult to be prescriptive about AI.”
How U.S. and European regulators are tackling AI
The European Union’s AI Act (pdf), expected to be the world’s first comprehensive rules on AI, focuses on understanding what goes into an AI model, including definitions of data sources, the intended purpose of the AI system, and the logic of the model. That is difficult given the ambiguous nature of AI.
The US and Europe disagree on how to regulate AI. The latter takes a more precautionary approach, with the government acting as mediator, while the US relies more on the tech industry to come up with its own safeguards.
In May, US vice president Kamala Harris invited the CEOs of four major AI companies to discuss the responsible development of AI and to commit to participating in evaluations of AI systems consistent with responsible disclosure principles. Even so, Biden officials appear divided on how to regulate AI tools: some back the EU’s proposed guidance, while others argue that aggressive regulation would put US companies at a competitive disadvantage, sources familiar with the discussions told Bloomberg.
AI regulation needs to focus on outputs, not inputs
But Madry argues there is a solution: focus on the outputs of AI systems rather than on what goes into the algorithms. For example, if an employer uses AI-powered recruitment tools to help evaluate job applicants, and evidence emerges that those tools discriminate against applicants, AI law could focus accountability on that outcome. Regulators will also need to address new issues, such as disclosure requirements for previously unregulated uses of AI.
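An outcome-focused audit of the kind described above can be illustrated with the “four-fifths rule,” a guideline long used in US employment-discrimination analysis: compare selection rates between applicant groups and flag potential adverse impact when one group’s rate falls below 80% of another’s. The sketch below uses hypothetical numbers; it tests only the tool’s outputs, without inspecting the algorithm itself:

```python
def selection_rate(selected, applicants):
    """Fraction of applicants a screening tool selected."""
    return selected / applicants

def four_fifths_flag(rate_a, rate_b):
    """Flag potential adverse impact when the lower selection rate is
    below 80% of the higher one (the EEOC's four-fifths guideline)."""
    lower, higher = sorted([rate_a, rate_b])
    return lower / higher < 0.8

# Hypothetical outcomes from an AI hiring tool for two applicant groups:
rate_group_1 = selection_rate(45, 100)   # 0.45
rate_group_2 = selection_rate(30, 100)   # 0.30

print(four_fifths_flag(rate_group_1, rate_group_2))  # True: 0.30/0.45 < 0.8
```

This is the appeal of regulating outputs: the check works the same whether the tool inside is a linear model or a large language model.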
“We worry about consequences, so let’s talk about consequences, not all the ways to avoid consequences like this,” Madry said. “Because this is not the area of expertise for policymakers; even engineers at top companies don’t know.”
