European companies need to balance rapid innovation with responsible use of AI

Applications of AI



Not just self-driving cars, but also deepfakes during elections: the growing prevalence of AI is evident in both constructive and alarming applications. European companies face the challenge of balancing rapid adoption with responsible use of AI. The AI Act provides a clear framework, and tools such as synthetically generated data can help companies meet it.

Why you need to know this:

Europe is regulating AI through the AI Act, a law designed to protect citizens.

Europe aims to be at the forefront of AI. From established companies like SAS and Hugging Face to startups like Germany's Aleph Alpha and France's Mistral, we expect a lot from AI on European soil in the coming years.

The AI Act

At the same time, the fundamental rights and safety of citizens must be protected, which is the purpose of the AI Act. The European Parliament recently approved this law. It classifies AI systems according to their risk and imposes strict requirements on high-risk applications to prevent potential health hazards and violations of fundamental rights. Companies that violate the law can be fined up to 35 million euros. The AI Act is scheduled to come into force in June.

SAS, a market leader in AI and analytics software, has been involved in the law's legislative process from the beginning, and recently organized a briefing on ethical AI and the new legislation. "This law covers AI as it applies to toys, aircraft, government systems, and more," said Kalliopi Spiridaki, the company's chief privacy strategist. "It also includes measures to prevent AI from interfering with elections. Consider, for example, the use of deepfakes: it must be clear to consumers whether content is genuine or AI-generated." Applications that pose an "unacceptable" risk will be banned outright, such as the mass collection of facial images into a database, as seen in China.

Guidelines and tools

SAS experts examined the fundamental components organizations need to build reliable AI systems, covering aspects such as guidelines for maintaining high data quality. Josephine Rosen, Trusted AI Specialist at SAS: "Data models are like milk. Over time, they lose their 'freshness.' The key is to adapt to an ever-changing reality. People need to continuously monitor model performance and sound the alarm in a timely manner."
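The monitoring Rosen describes can be sketched in a few lines: compare a model's accuracy on recent labeled data against the accuracy measured at deployment time, and flag the model when it "goes stale." The function names and the 5% tolerance below are illustrative assumptions, not part of any SAS tooling.

```python
# Minimal sketch of model performance monitoring: raise an alert when
# accuracy on recent data drifts below the deployment-time baseline.

def accuracy(predictions, labels):
    """Fraction of predictions that match the true labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def check_drift(baseline_acc, recent_acc, tolerance=0.05):
    """Flag the model as stale when recent accuracy falls more than
    `tolerance` below the baseline accuracy."""
    return (baseline_acc - recent_acc) > tolerance

labels = [1, 0, 1, 1, 0, 1, 0, 1]
baseline = accuracy([1, 0, 1, 1, 0, 1, 0, 1], labels)  # at deployment
recent = accuracy([1, 0, 0, 0, 0, 1, 0, 1], labels)    # on fresh data

if check_drift(baseline, recent):
    print("Alert: model performance has degraded, retraining advised.")
```

In practice the "alarm" would feed a dashboard or retraining pipeline rather than a print statement, but the principle is the same: measure continuously, compare against a baseline, act on the gap.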

In addition to the guidelines, Rosen said there are also tools to help companies apply AI ethically, such as synthetic data. This generated data is useful when a company wants to train its AI model but the available data is insufficient or privacy-sensitive. According to research, 60% of the data used to develop AI applications this year is predicted to be synthetically generated. In healthcare, where patient data is not always freely available, it is a useful alternative: Erasmus MC, for example, uses it to train its AI models. SAS also provides programs for generating synthetic data.
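The core idea of synthetic data can be illustrated with a deliberately simple sketch: fit a statistical distribution to a small "sensitive" dataset, then sample new records from that distribution instead of sharing the originals. The toy patient fields and per-column Gaussians below are assumptions for illustration; production generators (used by tools like those SAS offers) are far more sophisticated.

```python
import random
import statistics

# Toy "sensitive" dataset: (age, systolic blood pressure) per patient.
real_patients = [
    (54, 130), (61, 142), (47, 121), (70, 150), (58, 135),
]

def fit_gaussian(column):
    """Summarize one column as (mean, standard deviation)."""
    return statistics.mean(column), statistics.stdev(column)

def synthesize(data, n, seed=42):
    """Sample n synthetic records that mimic the real distribution
    without copying any individual patient."""
    rng = random.Random(seed)
    age_params = fit_gaussian([age for age, _ in data])
    bp_params = fit_gaussian([bp for _, bp in data])
    return [(rng.gauss(*age_params), rng.gauss(*bp_params)) for _ in range(n)]

synthetic = synthesize(real_patients, 100)
```

The synthetic records preserve aggregate patterns (useful for training) while no row corresponds to a real patient, which is why the approach is attractive where privacy rules restrict data sharing.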

There are also explainable AI models, specifically designed to help humans understand the decision-making of AI systems. They reveal the internal logic and reasoning behind AI decisions, allowing companies to intervene if, for example, the AI relies on discriminatory criteria. Utrecht-based startup Deeploy focuses on correctly applying explainable AI models within enterprises; its customers include medical pension fund PGGM and comparison site Independent.
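One simple form of the explainability described above: for a linear scoring model, each feature's contribution (weight times value) can be shown to a human, making the drivers of a decision visible. The feature names and weights below are invented for illustration and have no connection to Deeploy's actual products.

```python
# Sketch of per-feature explanations for a linear credit-scoring model.
# Weights and features are hypothetical.
WEIGHTS = {"income": 0.4, "debt": -0.6, "years_employed": 0.2}

def score(applicant):
    """Overall model score: weighted sum of feature values."""
    return sum(WEIGHTS[f] * v for f, v in applicant.items())

def explain(applicant):
    """Return per-feature contributions, largest absolute impact first.
    A reviewer can spot if an unacceptable feature dominates a decision."""
    contributions = {f: WEIGHTS[f] * v for f, v in applicant.items()}
    return sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)

applicant = {"income": 3.0, "debt": 2.0, "years_employed": 5.0}
for feature, contribution in explain(applicant):
    print(f"{feature}: {contribution:+.2f}")
```

For non-linear models, techniques such as SHAP values generalize this idea, but the goal is identical: decompose a decision into per-feature contributions a human can audit.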

Overregulation is a pitfall

It remains to be seen how the new law will play out in Europe. The AI Act enjoys wide support, but many companies and governments remain reluctant. Approval among member states proved difficult: Germany and France pushed back, concerned that overly strict rules would put European developers at a disadvantage. Compared to the US AI Bill of Rights, the European law is more comprehensive and detailed, and AI experts such as ML6's Jens Bontinck have warned of the risks of overregulation.



