
Are you worried that AI is moving too fast and will have a negative impact? Do you wish there were national laws to regulate it? If so, you belong to a club with a burgeoning membership. Unfortunately, if you live in the United States, there are no new laws designed to limit the use of AI, so self-regulation remains the next best thing for companies adopting AI, at least for now.
It’s been years since “AI” replaced “big data” as the biggest buzzword in technology, but the launch of ChatGPT in late November 2022 still took many AI observers by surprise. The AI gold rush has begun: in just a few months, a bonanza of powerful generative AI models has captured the world’s attention, thanks to their uncanny ability to mimic human speech and comprehension.
The tremendous rise of generative models in mainstream culture, fueled by the advent of ChatGPT, has raised many questions about where all this is headed. The wonder that AI can produce compelling poetry and whimsical art has given way to concern about its ill effects, from consumer harm and unemployment to wrongful imprisonment and even the extinction of humanity.
Some people are very worried. Last month, a consortium of AI researchers called for a six-month moratorium on developing new generative models more powerful than GPT-4, the large language model that OpenAI had just released.
The open letter, signed by Turing Award winner Yoshua Bengio and OpenAI co-founder Elon Musk, among others, reads: “Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening.”

Elon Musk says AI could lead to ‘the destruction of civilization’ (DIA TV/Shutterstock)
Not surprisingly, calls for AI regulation are growing. Polls show that Americans view AI as untrustworthy and want it regulated, particularly for high-impact uses such as self-driving cars and determining eligibility for government benefits. Musk himself has said AI could lead to “the destruction of civilization.”
However, while there are a few new local laws targeting AI (such as New York City’s law on the use of AI in hiring, whose enforcement was delayed until this month), no new federal regulation specifically targeting AI is imminent in Congress (although AI already falls under legislation on the books for highly regulated industries like financial services and healthcare).
With all the excitement around AI, what should companies do? It stands to reason that they want to reap AI’s positive benefits; after all, the urge to be “data-driven” is seen as a prerequisite for survival in the digital age. But companies also want to avoid the negative consequences, real or perceived, that can come from using AI inappropriately in a culture of litigation and cancellation.
“AI is the Wild West,” Andrew Burt, founder of AI law firm BNH.AI, told Datanami earlier this year. “Nobody knows how to manage risk. Everyone does it differently.”
That said, there are several frameworks companies can use to manage AI risk. Burt recommends the AI Risk Management Framework (RMF) from the National Institute of Standards and Technology (NIST), which was finalized earlier this year.

“When is it okay to give someone more than someone else?” asks AI expert Cathy O’Neil.
The RMF helps companies think through how their AI works and what negative impacts it could have. It uses a “map, measure, manage, and govern” approach to understand and ultimately mitigate the risks of using AI in various products.
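To make those four functions concrete, here is a minimal sketch of how a team might structure a risk register around them. This is an illustration only, not part of NIST’s framework itself; the class names, the example system, and every entry in it are hypothetical.

```python
# Hypothetical sketch of a risk register organized around the NIST RMF's
# four functions (map, measure, manage, govern). Illustrative only.
from dataclasses import dataclass, field

@dataclass
class Risk:
    description: str   # what could go wrong (identified during "map")
    severity: str      # e.g. "low", "medium", "high"
    mitigation: str = ""  # planned response, filled in during "manage"

@dataclass
class AIRiskRegister:
    system: str
    mapped: list = field(default_factory=list)        # map: risks in context
    measurements: dict = field(default_factory=dict)  # measure: tracked metrics
    policies: list = field(default_factory=list)      # govern: org-level rules

register = AIRiskRegister(system="loan-approval-model")
register.mapped.append(Risk(
    description="Disparate approval rates across groups",
    severity="high",
    mitigation="Quarterly fairness audit",   # manage
))
register.measurements["approval_rate_gap"] = 0.07
register.policies.append("Human review required for all denials")
```

The point of such a structure is simply that each identified risk stays attached to a metric and a planned response, rather than living in a slide deck.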
Businesses worry about the legal risks of using AI, but Burt says those concerns are currently outweighed by the benefits. “Businesses are more excited than worried,” he says. “But as we have said for years, there is a direct link between the value of AI systems and the risks they pose.”
Another AI risk management framework comes from Cathy O’Neil, CEO of O’Neil Risk Consulting & Algorithmic Auditing (ORCAA), whom Datanami featured in 2018. ORCAA proposes a framework called Explainable Fairness (you can read about it here).
Explainable Fairness gives organizations a way not only to test their algorithms for bias, but also to work through what happens when disparities in outcomes are detected. For example, if a bank is determining eligibility for student loans, what factors can it legally use to approve or deny a loan, or to charge a higher or lower interest rate?
Clearly, banks use data to answer these questions. But which data, meaning which factors that reflect the loan applicant, can they actually use? Which factors are legally permissible, and which are not? Those questions are neither simple nor easy to answer, O’Neil says.
“That’s the whole point of this framework: these legitimizing factors need to be justified,” O’Neil said during a discussion at Nvidia’s GPU Technology Conference (GTC) last month. “What counts as legitimate is very context dependent… When is it okay to give someone more than someone else?”
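As a rough illustration of the kind of test this framework implies, the sketch below compares loan approval rates across two groups and flags a gap that would then need to be examined and justified. The data, the group labels, and the 0.05 threshold are all hypothetical; this is not ORCAA’s actual methodology, just the shape of a first-pass disparity check.

```python
# Hypothetical first-pass disparity check on loan decisions.
import pandas as pd

# Toy data: 1 = approved, 0 = denied
df = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [1,   1,   0,   1,   0,   0,   1,   1],
})

# Approval rate per group, and the gap between best and worst
rates = df.groupby("group")["approved"].mean()
gap = rates.max() - rates.min()
print(rates)
print(f"Approval-rate gap: {gap:.2f}")

# A gap alone is not proof of illegal bias. The framework's point is that
# any factor driving the gap must be identified and justified in context.
if gap > 0.05:  # hypothetical threshold
    print("Gap exceeds threshold: investigate which factors explain it "
          "and whether they are legitimate in this context.")
```

Detecting the gap is the easy part; the hard part, as O’Neil notes, is deciding which of the factors behind it count as legitimate.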

The European Union categorizes potential AI harms into a “Pyramid of Severity”.
Even if new AI laws aren’t enacted, companies should start asking themselves how they can implement AI fairly and ethically to comply with existing laws, says Triveni Gandhi, responsible AI lead at data analytics and AI software vendor Dataiku.
“People need to start thinking: How do we take the law as it stands, and how do we apply it to the AI use cases that exist today?” she says. “There are some regulations, but there are also a lot of people thinking about ethical and value-oriented ways they want to build AI. That’s the question I’m starting with.”
Gandhi encourages the use of frameworks, such as the NIST RMF, that can help companies begin their ethical AI journey.
“There are so many frameworks and ways of thinking,” she says. “So you just have to pick the one that works best for you and start using it.”
She urges companies to start researching these frameworks and familiarizing themselves with the questions they raise; that is enough to get the ethical AI journey underway. The worst thing they can do is delay starting in search of the “perfect framework.”
“People expect perfection right away, and that’s where roadblocks come in,” she says. “You never start with a perfect product, pipeline or process. But if you start, at least it’s better than nothing.”
Across the Atlantic, the European Union is moving ahead with its own legislation, the AI Act. The AI Act creates a common regulatory and legal framework for uses of AI that affect EU residents, including how AI can be developed, the use cases in which companies can deploy it, and the legal consequences of failing to comply with the requirements. The law will likely require companies to obtain approval before adopting AI for certain use cases, and will prohibit certain other uses deemed too risky.
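The tiered idea behind the EU’s “pyramid of severity” can be sketched in a few lines of Python. The tier names below mirror the Act’s widely reported draft structure (unacceptable, high, limited, and minimal risk), but the mapping of specific use cases to tiers is a hypothetical illustration, not legal guidance.

```python
# Hypothetical sketch of the AI Act's risk tiers ("pyramid of severity").
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "requires conformity assessment before deployment"
    LIMITED = "requires transparency, e.g. disclosing that AI is in use"
    MINIMAL = "no new obligations"

# Illustrative mapping of use cases to tiers (not legal guidance)
USE_CASE_TIERS = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "credit scoring": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

def obligation(use_case: str) -> str:
    # Default to MINIMAL for unlisted cases, purely for illustration
    tier = USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} risk, {tier.value}"

for uc in USE_CASE_TIERS:
    print(obligation(uc))
```

The pyramid shape reflects the intent: most applications fall in the broad minimal-risk base, while a narrow set at the top is banned entirely.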

Fractal’s Sray Agarwal says global AI regulation is desirable (Zia-Liu/Shutterstock)
If U.S. states follow Europe’s lead on AI, as California did when it modeled the California Consumer Privacy Act (CCPA) on the EU’s General Data Protection Regulation (GDPR), the AI Act could become the model for U.S. AI regulation.
Sray Agarwal, data scientist and principal consultant at Fractal, says there needs to be a global consensus on AI ethics.
“We never want U.S. privacy laws, or ethics laws of any kind, to come into conflict with those of other countries we do business with,” says Agarwal, who has volunteered as a UN expert on the topic of ethical AI. “We need a global consensus. That is why fora such as the OECD, the World Economic Forum, the United Nations, and many other international organizations must come together to reach a consensus.”
But Agarwal isn’t holding his breath that consensus will be reached any time soon. “We are not there yet. We are nowhere [near] responsible AI,” he says. “We have not implemented it holistically and comprehensively across different industries, even for relatively simple machine learning models.”
But the absence of regulation shouldn’t stop companies from pursuing ethical AI practices of their own, Agarwal says. In lieu of government or industry regulation, self-regulation remains the next best option.