The European Union (EU) has just passed the world's first Artificial Intelligence Act, which will set regulations for the use of artificial intelligence. Although the law applies only within the EU, experts who support it hope for a "Brussels effect," the phenomenon in which laws and regulations adopted in the EU are subsequently adopted in other jurisdictions as well. The Brussels effect was evident in Kenya, where the country's Data Protection Act was passed shortly after the EU adopted its own data protection law, the General Data Protection Regulation (GDPR).
As the use of artificial intelligence (AI) increases worldwide, human rights activists are stepping up their lobbying for regulation to curb its negative effects. Researchers and experts say that while AI offers great benefits, it also poses serious threats that could endanger society if left unregulated.
The first negative impact is that AI may lead to job cuts in several industries. Many sectors rely on AI because of its advantages over human labor: it does not face physical limitations, does not get sick, does not take time off, and remains available outside working hours.
When companies use AI, they avoid the burden of complying with labor laws. AI is also far cheaper than human labor, which requires salaries and payroll taxes.
Given these benefits, some companies may choose to lay off employees and move to AI, which in the long run will have serious implications for social welfare if large numbers of people are left unemployed.
AI has also created a new type of competition in the market, with companies competing against AI-driven solutions for work. In the graphic design and branding sector, for example, some companies now use AI tools rather than procuring the same services from suppliers. This dynamic can lead to what might be classified as "unfair competition."
Secondly, AI may lead to breaches of data protection and intellectual property laws, which is not only a legal issue but also raises serious ethical questions. AI-generated output can also be inaccurate.
But the most serious negative impact would be if AI fell into the wrong hands, for example, if it were used to profile victims during a war. This would have extremely dire consequences for humanity.
The debate over the need to regulate AI has been ongoing for about five years and gained momentum with the emergence of OpenAI. Since the launch of OpenAI's ChatGPT in 2022, lobbying for regulation has intensified, pitting human rights activists against tech companies. Some experts have argued that AI is now a global issue and should be given the same risk status as a global pandemic or nuclear war.
The EU leads the world in enacting the first large-scale AI law. Its general policy direction is a human-centric and ethical approach to AI, aiming to ensure that human protection takes precedence over technological development. To that end, the law provides for four tiers of regulation depending on the level of risk an AI system poses: Level 1 risks are low and require no regulatory intervention, while Level 4 risks are deemed unacceptable and are prohibited.
