How many companies are ignoring the environmental risks of AI?


Companies around the world are failing to address the environmental and social risks of AI, a new report finds.

MEXICO CITY – Companies around the world are rapidly adopting artificial intelligence but failing to identify and mitigate its risks to society and the environment, according to a report from the AI Company Data Initiative, the world's largest dataset on machine learning adoption in enterprises.

The November report examines publicly available data on AI policies and adoption at more than 1,000 companies in the Americas, Europe, the Middle East, Africa, and Asia Pacific across 13 different sectors.

The report found that most companies are not considering the potential harms of AI implementation, and most lack a public AI strategy that informs employees how they are mitigating potential risks.

“The biggest challenge is getting companies to understand that responsible AI practices can also be a sustainable growth solution for their companies,” said Katie Fowler, director of responsible business initiatives at the Thomson Reuters Foundation.

The AI Company Data Initiative is part of the Thomson Reuters Foundation, which also operates the news platform Context.

The report found that almost all companies surveyed did not consider the carbon footprint of using AI.

To train the models that power AI systems, computers housed in data centers require large amounts of electricity and water to cool them. For example, Google’s data centers consumed 6.1 billion gallons of water in 2023, according to the company.

These data centers, the physical backbone of generative AI tools, are exempt from environmental requirements and are being built in countries such as Mexico, where local populations already suffer from water shortages and power outages.

The report also found that 68% of companies did not consider the social impact of AI systems, whose construction and improvement rely heavily on data annotators working in precarious conditions in the Global South.

“This is often referred to as the hidden supply chain, because there is so much informal and often forced labor involved in data enrichment,” Fowler said.

More than half of the companies surveyed did not have a publicly available AI strategy or policy, largely due to a lack of local regulations regarding AI.

“We don’t know what ‘good’ looks like,” Fowler said. “It just creates fear that individual companies will be the first to put out information.”

Only 41% of companies that have an AI policy make it available to their employees.

Fowler noted that employees are demanding disclosure of AI policies amid concerns about the technology's role in the changing nature of work and resulting job losses.

Machine learning is already reshaping entire sectors. The entertainment industry has been hit especially hard, with professional animators, filmmakers and voice actors pushing for regulations to protect them from AI copying and the illegal scraping of their work to feed chatbots.

But Fowler said AI in the workplace is becoming more disruptive, and employees are concerned about the environmental and social impact of the AI tools they use at work.

“If companies really want their employees to reap the benefits of AI, they also need to do a really good job of helping employees understand how they are mitigating harm in the process,” Fowler said.

The AI Company Data Initiative offers free research to help companies map where AI is used across their products, operations, and services, and benchmark their performance against peers in their sectors.

(Reporting by Diana Baptista; Editing by Anastasia Moloney and Ayla Jean Yackley)
