AI regulators need to target data provenance and privacy

The speed at which AI is transforming the global economy has not escaped the attention of regulators.

As concerns grow about the impact of artificial intelligence (AI) across a widening range of use cases, legislators around the world are preparing legal frameworks to govern the technology.

The Biden administration issued a formal request for comment on Tuesday (April 11) to inform specific policy recommendations on AI, and China’s internet regulator announced its own detailed measures the same day, aimed at ensuring accuracy and privacy, preventing discrimination, and protecting intellectual property rights.

Read more: Generative AI Tools at the Center of New Regulation-Innovation Tug of War

Still, observers say the pace at which new AI systems are updated and released makes effective regulation a real challenge for policymakers, whose work is already reactive by nature.

Microsoft-backed OpenAI’s ChatGPT AI tool grew its user base to 100 million within two months, and the model reportedly took just 34 days to train. Between Nov. 30, 2022, and March 14, 2023, OpenAI launched two new generations of its disruptive large language model (LLM) AI solutions.

The U.S. Department of Commerce’s National Telecommunications and Information Administration (NTIA) has opened a 60-day public comment period on how to regulate AI, while China’s internet regulator has set its own deadline for public feedback on its proposed measures, delaying implementation of the rules until that input is gathered.

Who knows what the AI landscape will look like by then.

Competition to Safely Develop the Industry

In a recent blog post, OpenAI said, “We believe powerful AI systems should be subject to rigorous safety evaluations,” adding that it is actively engaging with governments on the best possible form such regulation could take.

The Microsoft-backed AI company is widely recognized as a market leader, and its ChatGPT tool has captured the public imagination, both transforming traditional industries and spurring the creation of new ones.

As reported by PYMNTS, a Chamber of Commerce report found that U.S. businesses are increasingly concerned about the threat of government over-regulation and the risks it poses to business, a fact that complicates the regulatory picture.

That latest Chamber of Commerce briefing follows one last month that called on the government to regulate AI.

As reported by PYMNTS, Italy has become the first Western country to go so far as to ban OpenAI’s ChatGPT chatbot. The move came after the country’s data protection authority announced an investigation into alleged violations of General Data Protection Regulation (GDPR) privacy rules, arguing that there is “no legal basis” to justify the mass collection and storage of the personal data used to “train” the chatbot.

Italy’s move has prompted other privacy regulators in Europe and around the world to take a closer look at ChatGPT and other AI tools.

Canada’s Office of the Privacy Commissioner launched its own investigation into OpenAI last Tuesday (April 4) following complaints about ChatGPT’s use of personal information without consent.

“We need to keep up with fast-changing technological advances and stay ahead of them,” the privacy commissioner said.

Reuters reports that a payments industry group overseen by China’s central bank issued a warning on Monday (April 10) against the use of ChatGPT and other AI tools, citing the risk of “cross-border data leaks.”

See also: Former Google CEO says industry needs to develop AI ‘guardrails’

Protecting Children, Respecting Privacy, and Improving Accuracy

Data is the lifeblood of AI models, and how companies collect, harvest and use data to power AI solutions should be a central focus of any regulatory framework.

Enacting guardrails around the provenance of the data used to train LLMs and other models, and requiring that AI-generated synthetic content, whether text, images, or even voice, be disclosed and its source flagged, would allow governments and regulators to protect consumer privacy without compromising private-sector innovation and growth.
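One way such disclosure and source-flagging could work in practice is to attach a machine-readable provenance record to every piece of generated output. The sketch below is purely illustrative, not any regulator’s mandated schema; the model name, dataset label, and field names are hypothetical assumptions.

```python
import hashlib
import json
from datetime import datetime, timezone


def tag_synthetic_content(content: str, model_name: str, training_data_source: str) -> dict:
    """Attach an illustrative provenance record to AI-generated content.

    The schema here is a hypothetical sketch, not a real standard: it marks
    the content as synthetic, names the generating model and the declared
    training-data source, and fingerprints the content so later tampering
    can be detected.
    """
    return {
        "content": content,
        "provenance": {
            "synthetic": True,  # explicit disclosure that the content is AI-generated
            "generator": model_name,
            "training_data_source": training_data_source,
            "created_at": datetime.now(timezone.utc).isoformat(),
            # SHA-256 fingerprint ties this record to this exact content
            "content_sha256": hashlib.sha256(content.encode("utf-8")).hexdigest(),
        },
    }


record = tag_synthetic_content(
    "Example AI-generated paragraph.",
    model_name="example-llm-v1",                   # hypothetical model name
    training_data_source="licensed-corpus-2023",   # hypothetical dataset label
)
print(json.dumps(record["provenance"], indent=2))
```

A real scheme would embed such metadata in a tamper-evident way (for example, cryptographically signed), but even this minimal shape shows how disclosure and data-source flagging can travel with the content itself.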

Top AI concerns include protecting children, respecting privacy, and improving the accuracy of results to avoid “hallucinations” and misinformation at scale. At the heart of all this is the proper use of data and ensuring the integrity of that data.

“These are all situations that require very large amounts of data. Data is the foundation for building models and training AI, and the quality and integrity of that data is important,” Michael Haney, head of Cyberbank Digital Core at fintech platform Galileo, a sister company of Technisys, told PYMNTS in an earlier conversation.

Individual US states, including California, Connecticut, Colorado, Utah, and Virginia, recently passed general data privacy laws inspired by similar provisions in the EU’s GDPR.

As PYMNTS writes, a sector like healthcare, which has set the standard for best practices around data privacy protection and data set integrity and provenance, has the opportunity to act as a responsible standard-bearer as the technological capabilities of AI applications continue to reshape the world.
