As AI data centers move into Louisiana, LSU students are raising concerns about the ethics and environmental impact of the technology.
To discuss some of these issues, the Student Alliance for AI Reform, formerly known as the Student Alliance for AI Regulation, partnered with Geaux Green to host a talk titled “Environmental Impacts of AI in Louisiana: Where Are We Heading?” The talk was given Wednesday at the Greek Theatre by LSU professor and researcher Dr. Supratik Mukhopadhyay.
Mukhopadhyay used his talk as an opportunity to show students that AI is neither perfect nor irredeemable. While AI programs have many applications that improve research, their use also carries significant downsides.
“It’s important to be aware of AI and how to use it for your benefit and how not to abuse it,” Mukhopadhyay said. “Today’s AI is impacting the world.”
Throughout his talk, Mukhopadhyay discussed several of his research projects that use AI, highlighting how these programs have driven significant advances in environmental science.
One project he discussed involved contributing to an AI program trained to identify and categorize features such as trees, buildings, bodies of water, and roads in satellite images. The project’s goal was to map aboveground biomass, with a focus on tree cover, across the country.
“This basically has to do with how much carbon is getting into the ground,” Mukhopadhyay said. “This is very important for environmental scientists because people want to know how much carbon is out there.”
This study gave environmental scientists valuable information, allowing them to more accurately estimate the carbon uptake of vegetation across the United States.
“Using satellite images of tree cover, we created a map of the entire land cover using AI,” Mukhopadhyay said. “Today it’s part of something called DeepSat. It’s part of something called NASA Earth Exchange.”
The research advanced the field because the AI program was not put into use until it had been trained to near-perfect accuracy. Before it existed, scientists had to piece together vast satellite images by hand, with some features appearing as mere pixels. Automating that process produced faster and more accurate solutions to a difficult task.
Mukhopadhyay also spoke about a research project he worked on, known as Deep Fires, which aims to predict and detect wildfires in an effort to prevent large-scale destruction in high-risk communities.
“We have built a predictive tool that can predict wildfires with over 90% accuracy,” Mukhopadhyay said.
To predict wildfires, Mukhopadhyay and other researchers created an AI program that uses environmental data, such as an area’s vegetation type, wind speed, and storm conditions, to determine the likelihood of a wildfire outbreak.
The program then predicts the time until a wildfire is likely to start. This forecast can range from just a few days to more than a month in advance.
Once forecasts are made, the program turns to sensors and cameras in high-risk areas that are designed to detect fires as they begin to spread. After a fire is detected, the program can predict how it will spread, giving first responders a better idea of how to fight it.
The AI program is also designed to track lightning strikes in high-risk areas, a major cause of wildfire outbreaks.
“We’ve made it possible to predict where lightning will strike,” Mukhopadhyay said. “This is one of the most powerful tools for predicting wildfires, but prediction alone is not enough.”
Mukhopadhyay reminded the audience that predictions are just that. While the program is not guaranteed to be 100% accurate, it can give residents time to prepare and gather resources to fight wildfires and prevent large-scale disasters.
By highlighting these advances, Mukhopadhyay showed that using AI in research is not inherently bad. AI programs trained for specific research tasks complete work far faster than humans can, enabling development and research that was previously impossible due to time and resource constraints.
However, Mukhopadhyay also touched on the negative effects of AI applications, especially regarding generative AI programs.
One effect he mentioned is AI’s tendency to “hallucinate”: the program answers a question incorrectly but presents the wrong answer as confidently as a correct one.
He gave the example of a hypothetical AI trained only to distinguish between giraffes and horses.
“If you give it a giraffe, it will say giraffe,” Mukhopadhyay explained. “If you give it a horse, it’ll say horse. It’s very good at that. It’s better than humans at that. We’ve already reached a level where it’s better than humans. But now suppose I feed it an elephant.”
Although introducing an elephant may seem harmless, the program’s inability to recognize something outside its training data leads to major errors.
“Normally, when people see an elephant, they think, ‘Oh, it’s not a giraffe or a horse,’” Mukhopadhyay said. “But with AI, it’s different. The AI was trained on giraffes and horses, right? It doesn’t know anything other than giraffes and horses. So it looks at whether the elephant looks more like a giraffe or a horse, and labels it accordingly.”
Although this mistake seems trivial, it can lead to major problems in other situations, such as cancer cell identification studies, especially if these mistakes go unnoticed.
“The problem is that the model does this without warning. It doesn’t even realize it has received something it has never seen before and shouldn’t have received,” Mukhopadhyay explained. “It made a mistake, and it made that mistake quietly.”
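The failure Mukhopadhyay describes can be illustrated with a toy sketch (not his actual model): a classifier trained on only two classes will confidently assign one of those labels to anything it is shown, even an animal it has never seen. The feature names and numbers below are invented purely for illustration.

```python
# Toy two-class classifier: each animal is a made-up feature pair
# (height in meters, neck length in meters).

def nearest_centroid(sample, centroids):
    """Return the label of the closest class centroid (Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(centroids, key=lambda label: dist(sample, centroids[label]))

# The model only knows these two classes.
centroids = {
    "giraffe": (5.0, 2.4),  # tall, long neck
    "horse":   (1.6, 0.6),  # shorter, short neck
}

elephant = (3.2, 0.5)  # never seen in training
# The classifier has no "I don't know" option, so it quietly picks
# whichever known label is closer, with no warning to the user.
print(nearest_centroid(elephant, centroids))  # prints "horse"
```

The point of the sketch is that the mistake is silent: nothing in the output signals that the input was outside anything the model was trained on.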
He also pointed out that the data centers behind many generative AI programs consume large amounts of energy and water, depriving surrounding communities of resources critical to survival.
“A lot of heat is released and if you don’t cool the system, it can be damaged,” Mukhopadhyay said. “So you need a lot of fresh water, not salt water.”
With AI data centers on the way, these concerns loom large for Louisiana residents.
“AI is coming to Louisiana. One of the largest data centers in the world is coming to Louisiana,” Mukhopadhyay said. “We need to be aware of how AI will impact our lives, especially the environment, because when the environment is negatively affected, our health is also negatively affected.”
Mukhopadhyay’s talk made it clear that the use of AI is not a one-sided discussion. The debate for and against the use of AI is very nuanced, with strong arguments on each side. While it improves the quality and speed of research in various fields, it also consumes resources quickly and sometimes spits out inaccurate information.
For SAFAR, the event was more than just a meeting. It was an opportunity to share the organization’s goals and introduce students to uses of AI beyond the large language models they typically interact with.
SAFAR event planner Anderson Krupala, a freshman international studies and honors double major, helped host the talk and explained SAFAR’s mission to advocate for AI reform on LSU’s campus.
“We know that AI is here to stay,” Krupala said. “Especially in an academic setting, we want to promote understanding between students and faculty and help ensure that it is implemented in an appropriate manner within the school.”
Krupala doesn’t think AI should be completely removed from academic settings, but he draws a hard line at certain uses.
“The line is when AI starts making decisions for you,” Krupala said. “When you don’t listen to your own intuition. For example, when you ask the AI whether your paper is ready to submit instead of deciding for yourself, or when you let the AI substitute its ideas for your own as you write.”
SAFAR President Jude Terrell, a junior political science major, said the event was fun because it showed participants the purpose of SAFAR.
“I think this event ties into our core philosophy,” Terrell said. “First and foremost, we want to educate people about AI issues so that they have as much information at their disposal as possible when making choices about AI.”
For Ian Frick, president of Geaux Green and a junior majoring in coastal environmental science, the talk broadened his perspective on the use of AI in research.
“I can say that this event really changed my perspective and allowed me to speak more clearly about how AI can be positively utilized in research and environmental situation modeling settings,” Frick said. “But it is still harmful to the environment in other ways, especially for generative AI.”
Frick explained that the talk prepared him to discuss the use of AI, both its benefits and its harms, with others.
“I feel like I can now counter the big claims that are being made on both sides of the argument, ‘We need AI,’ ‘AI isn’t that bad,’ or people who say, ‘There’s no benefit to AI,’ or ‘There’s no reason we need to use AI,'” Frick said. “I feel much more knowledgeable to contribute to the conversation and correct people’s biases and misconceptions than I did before this event.”
SAFAR and Geaux Green are student organizations dedicated to making LSU a better place for everyone on campus. Whether through efforts to protect the environment or to uphold academic integrity, these organizations help shape the LSU community we know today.
Anyone interested in attending similar talks in the future can find upcoming events on Geaux Green’s and SAFAR’s TigerLink pages, or follow their Instagram accounts, @geauxgreenlsu and @safarlsu.
