Business Reporter – AI and Automation

AI For Business

Nasuni's Russ Kennedy explains how new regulations could impact the artificial intelligence industry.

The news that Meta will suspend its AI development in Europe due to privacy concerns raised by the Irish data regulator is just the latest example of the potential risks surrounding AI technology and the scope of its regulation.

Companies are finding it difficult to keep track of AI's rapid developments: nearly a third of UK workers have used AI tools, according to a Slack survey, while more than half (56%) of US workers have used generative AI (GenAI), according to a Conference Board survey.

But as the case of Meta, where regulators raised concerns about the data used to train the company's large language models (LLMs), illustrates, company leaders will increasingly need to develop practical strategies for navigating new AI laws and regulations.

The most significant new regulatory measures on AI are the EU AI Act (expected to be fully ratified by 2026), the US government's Executive Order of October 2023 and the UK's Bletchley Declaration of November 2023. All three are in the early stages of implementation, and each could prove a game-changer for how AI tools are developed.

Given the security, privacy and even existential anxieties surrounding AI, senior executives should think seriously about new laws and directives across geographic regions and their potential impact on the ability to innovate. It is important to understand both the broader impact of published directives and the regional factors that may improve the regulation of AI tools and foster their wider adoption.

AI Regulation

The three aforementioned frameworks aim to anticipate and mitigate the risks associated with AI technologies and capabilities at different levels. Their provisions can be broadly grouped into the following approaches:

Risk Management

All three prioritize risk mitigation, identifying deviations from outputs, new security vulnerabilities, and unexpected outcomes as challenges requiring corporate oversight. However, they take different approaches to regulation.

Safe Innovation

All these frameworks broadly claim to balance safety and ethics with support for innovation, though the emphasis varies by region: the US executive order, for instance, is premised on working with large domestic technology companies. And while the declarations generally focus on high-risk applications, all three acknowledge that lower-risk AI systems also require transparency, and they largely avoid placing strict limits on AI development itself.


The US government's executive order focuses on security, while the EU AI Act prioritises citizens' rights. The Bletchley Declaration calls for cooperation between countries, but its motives are unclear. Unlike the US executive order, both the UK declaration and the EU AI Act call for the establishment of a central regulatory body.

All of the proposals emphasise the need for collaboration between governments and the private sector. What is still missing is clarity on which companies can help shape this dialogue.

Regulatory Scope

At this stage, the EU AI Act appears to have the most detailed implementing rules across member states, while the other frameworks favour broader principles.

We are beginning to see the big picture and can now visualize how AI governance will work at a global level. But what local factors do enterprise leaders need to plan for when considering AI implementation?

Regional factors

Common Standards

Governments and the technology industry need to promote certification to ensure AI tools and their underlying models meet trusted international standards. The Institute of Electrical and Electronics Engineers (IEEE) Standards Association, for example, runs a certification programme covering the ethics of autonomous systems.

Practical standards are essential for businesses adopting new technologies while limiting the risks of AI adoption, giving users confidence that their new tools are trustworthy, secure, and enterprise-ready.

As these frameworks are translated into common regulations for AI development, the AI industry will need clearer guardrails built on stronger cooperation between governments. Those regulations should not, however, disadvantage local AI industries, or companies that follow the rules while others do not.

A level playing field

On the other hand, restricting AI development too tightly could undermine equal opportunity and lead to AI monopolies, as smaller providers and startups are regulated out of the market or priced out of innovation by compliance costs. While enterprise buyers generally favour incumbent vendors, common standards would give smaller, more agile AI providers broader opportunities to develop new products.

Getting data in order

Underlying all these considerations is data security. Many organisations have yet to update their data protection and recovery postures. With the growing use of corporate data in LLM training, the phenomenal growth of GenAI and evolving cyber-threats, companies risk security and compliance headaches, with regulatory penalties potentially exceeding the cost of paying a ransom.

While Europe still leads the way in data governance and regulation with the GDPR, laws such as the California Consumer Privacy Act (CCPA) are now being adopted across the US. Without investing in data protection and compliance, companies leveraging business datasets for AI services could face fines, ransom demands, lawsuits and potential business interruption.

Trust above all

While businesses will need to prepare for a regulatory framework that permits enterprise-ready, privacy-compliant AI tools, the top priority for regulators will always be reassuring the public.

As the Meta story shows, even as AI becomes an essential part of our lives, public trust in AI will always take precedence over the needs of businesses.

Russ Kennedy is Nasuni's Chief Evangelist

