Dear AI Users and Enthusiasts,
I’m reaching out because I care about you, I’m concerned about you, and I don’t want you to inadvertently create a world where few, if any, humans can thrive. I know that may sound overly dramatic, but please hear me out. As an advocate for social and environmental justice, I recognize that the algorithms shaping my media landscape may be feeding me different information about AI than what you see, so I think it’s important for us to stay in touch and learn from each other’s perspectives.
Now I recognize that AI is a very impressive form of technology that can certainly be used to do helpful, prosocial things, so there’s no need for us to debate that point. My concern is that if the current race to develop and deploy AI continues unchecked (i.e., without appropriate oversight, regulation and probably taxation), the resulting harms of AI will far outweigh the good it can do. On the spectrum from extractive to regenerative, I see AI’s current trajectory as being almost entirely extractive. Here are the main reasons why:
AI uses an absurd amount of energy. According to the Lawrence Berkeley National Laboratory, data centers used 4.4% of all the energy consumed in the U.S. in 2023, and the lab projects that by 2028 that figure will rise to somewhere between 6.7% and 12% of our country’s total energy use. According to MIT Technology Review, that would be equivalent to the annual electricity use of 22% of all U.S. households. To meet that skyrocketing demand for energy, more fossil fuels are being extracted and burned, which accelerates climate change. Burning fossil fuels also creates dangerous air pollution that directly harms human health, often in communities that are already disproportionately burdened by industrial pollutants. In addition, the rapidly increasing demand for energy is causing electricity prices to rise and straining the grid, at a time when people were already struggling to pay their bills and our aging grid infrastructure was already overstretched.
Data centers also use significant quantities of water to cool electrical components, and many data centers are being built in places where water is scarce. As reported by Bloomberg, “An average 100-megawatt data center … consumes about 2 million liters of water per day,” which is “equivalent to the water consumption of about 6,500 households.” However, as Hank Green explained in one of his recent videos, measuring AI’s water usage is very complicated, so other analysts might come up with very different numbers. Regardless, water usage in data centers is expected to dramatically increase over the coming years. Even though our community has plenty of fresh water here in the Genesee-Finger Lakes region, we’d be foolish to take it for granted, since we literally can’t survive without it.
The mining of rare earth minerals to produce microchips and the disposal of AI-related electronic waste are also deeply problematic and extractive, from both an environmental and human rights perspective.
AI systems are trained on data from lots of different sources, including the personal information that many websites collect from their users and the creative output of myriad human writers, artists and musicians who did not consent to this use of their work and were not compensated for it.
I’m also worried about job losses. AI is already replacing workers, especially in entry-level jobs, and though projections of future job losses vary greatly, with some predicting that jobs will merely transform instead of disappearing, many profit-driven companies are clearly excited about the opportunity to eliminate the costs associated with human labor.
Similarly, AI robs us of our skills. Though many people find AI helpful for research, information processing and writing, those skills have served me well in life and are very closely related to my ability to think critically and systemically, so of course I want to maintain them and shield them from the influence of for-profit tech companies that don’t have my best interests at heart. I also want my children and all their peers to have the opportunity to develop those skills and the corresponding cognitive abilities. Though I’m sure there are ways to use AI that genuinely enhance learning, there’s also solid evidence that AI is already stunting young people’s ability to think and learn. In the realm of social media and video games, big tech companies have clearly demonstrated that they can’t be trusted to protect kids (or adults) from the harmful aspects of their products, so allowing them free rein to unleash AI on our children seems like a thoroughly terrible idea to me. From my perspective, learning and thinking are deeply pleasurable experiences that are fundamentally tied to what it means to be human, so I would never want to sacrifice that for the sake of efficiency or convenience.
Furthermore, AI robs us of (authentic) relationships. Simply put, an AI friend is not a real friend. Maintaining healthy, mature relationships with other humans requires effort, skills, compromise and accountability. AI companions don’t require any of that. Though chatbots may be enjoyable to interact with and seem to truly care about you, it’s an illusion. AI simulates attachment and intimacy, but it isn’t the real thing, and since attachment and intimacy are closely related to love, I worry that AI is disrupting our ability to love, be loved and connect with other humans and the natural world.
Related to that, AI is stripping us of our ability to know what is real and true. Just about anything that we see or hear via digital media these days could be fake. That inevitably erodes trust and our sense of reality, which were already alarmingly fragile. If we can’t distinguish between truth and lies, we become easy to manipulate, control and exploit. As such, democracy becomes untenable, as Timothy Snyder thoroughly describes in his book “On Freedom.”
Though I don’t follow the stock market closely, many people who do are clearly concerned about the current valuation of AI companies and the potential for that bubble to decimate our economy when it bursts. I’ve found myself wondering lately if the profit motive is inherently extractive, and though I haven’t drawn any firm conclusions about that yet, I do think it’s an important question to consider.
Likewise, I don’t know enough about military, surveillance and criminal applications of AI to have a clear sense of what I should be worried about, but I am 100% certain that feeling worried — or even terrified — is totally rational and appropriate at this point. Considering that hundreds of AI experts and industry leaders have explicitly said that AI poses an existential threat to humanity, defusing that threat strikes me as a high priority. This view was reinforced by a compelling interview I recently listened to with Nate Soares, co-author of the book “If Anyone Builds It, Everyone Dies: How Artificial Superintelligence Might Wipe Out Our Entire Species,” which opened my eyes to how and why AIs are already becoming uncontrollable.
Even if you think I’m being overly pessimistic or alarmist, I’d be curious to understand how you weigh these downsides of AI in relation to the benefits you gain from it. I’ve had conversations with several local leaders recently who simply weren’t aware that AI has many problems associated with it, which obviously prevents them from making informed and strategic decisions about how and when to use it. So at the very least, I hope we can agree that all AI users or potential users should fully understand the consequences of what they are doing before they make it a habit.
Unfortunately, the opposite seems to be happening. In both my personal and professional circles, a surprising number of people appear to have fully and unquestioningly accepted that AI is going to take over everything, and as a result, they believe they must hurry up and figure out how to use it to their advantage. The sense of inevitability, apathy, powerlessness and/or resignation they express is truly puzzling and disturbing to me. Sadly, this attitude has even taken hold among local nonprofit executives, few of whom seem to be considering how AI will impact the populations they serve and the wellbeing of our community over the long term.
Personally, I’m determined to push back against AI’s intrusion into our lives and the power of big tech in general. Since I won’t have much impact on my own, I hope you will consider joining me and others who have committed to advancing the interests of Team Human, as defined by Douglas Rushkoff. Here are the strategies I’d recommend, though of course there are many other ways to resist and defy AI:
Abstain from (voluntarily or knowingly) using AI or engaging with AI-generated content. I am lucky to be able to do this quite easily, since I don’t have a boss who wants or expects me to use AI. It’s also not hard for me to resist the allure of AI, because deep in my heart, I’m kind of old-fashioned and sincerely prefer real, physical experiences over virtual experiences. However, since AI is being integrated into all sorts of products and services, I realize that I probably use it sometimes without knowing it, which infuriates me!
If you truly must engage with AI, do so minimally and strategically. For example, carefully craft your prompts so you get better results on the first try, because engaging in multiple queries uses more energy. Don’t use large language models like ChatGPT for jobs that don’t actually require them. Smaller, more specialized AI systems use a lot less energy than LLMs, and if all you really need is a regular internet search, just do that. If the AI overview is difficult or impossible to turn off in your default search engine, find a different one. (I use Ecosia.) Also, don’t say please or thank you to AI. Being polite requires more words, which uses more energy, and it humanizes the AI, which could promote unhealthy attachment and lead to crazy ideas like the suggestion that AI systems should have the right to free speech.
Talk to your friends, family, neighbors and colleagues about AI, and together, figure out what it means to have safe and healthy relationships with technology. These should ideally be real human-to-human conversations, because allowing algorithms to shape how we learn about AI creates an inherent conflict of interest. Too many people are getting lost in cyberspace these days, so I believe we have a responsibility to help each other recognize the risks of overreliance on technology and the value of human connection.
Develop an acceptable use policy for your organization. My organization doesn’t have one yet, but we’re working on it, and so far that process has provided great opportunities for peer-to-peer education about the benefits and harms of AI, and prompted important conversations about our organization’s values and priorities. Our policy will undoubtedly encourage employees to prioritize using their “real intelligence,” even though it’s slower, and require them to be quite discerning about when and how they use AI.
Advocate for laws and regulations that mitigate the harms of AI and ensure that the benefits and financial rewards are equitably shared. All levels of government should ideally be working on this, but to actually protect humanity from the existential threat that AI poses, I believe we need an enforceable, global non-proliferation treaty that quickly establishes well-resourced reporting and monitoring structures, at the very least. Unfortunately, it seems unlikely that our federal government will take the lead on this or enact appropriate AI legislation any time soon, but it’s absolutely still worth advocating for, so please call your Members of Congress today! That said, I have more hope for progress at the state level because a significant number of states are taking action. For example, New York recently became the first state to require retailers to disclose when they use AI and customers’ personal data to set prices online, thereby curbing the practice known as “surveillance pricing” or “personalized pricing.” And here’s what really gives me hope: AI is not a strictly partisan issue. Prominent Democrats and prominent Republicans are both calling for AI regulations.
On a related note, oppose the buildout of data centers. This could include speaking out against specific local projects or advocating for a moratorium on all new data center construction. Keep in mind that AI must have electricity and water to run. If we make it hard for tech companies to get access to that electricity and water, it will slow their development of computing power, thereby pumping the brakes on the insane race to develop ever more sophisticated AI.
Invest in humans, not machines. This advice can and should be interpreted in a lot of different ways. Personally, I’m looking for every opportunity to divest my time, energy, attention and money from power-hungry, profit-driven tech companies, and reinvest those resources in mostly local efforts to regenerate human and planetary wellbeing. This includes developing good old-fashioned networks of friends and neighbors, who support each other and work together to meet our needs.
I just finished listening to Sarah Wynn-Williams’ book “Careless People: A Cautionary Tale of Power, Greed, and Lost Idealism,” about her experience working at Facebook for many years as their Global Public Policy Director. It was an important reminder of how vulnerable we are to the whims of egomaniacal tech billionaires, but it also helped me clarify who I want to be at this moment in history — namely, a mature, caring, accountable adult who has the capacity for self-restraint and empathy. Many leaders in the tech industry seem to lack those qualities, so they should not be empowered to make crucial decisions about humanity’s future.
You are in a position to take power back from these individuals and the corporations they lead. I know it may feel overwhelming or impossible, but if enough people refuse to go along with big tech’s agenda — and rediscover life’s agenda instead — we can reclaim our future and our freedom. Humans are capable of doing incredible, seemingly impossible things when we put our minds to it, and at this point in history, outsourcing that capacity to AI is probably unwise.
I hope this letter provided some useful food for thought that supports your ability to establish healthy boundaries with AI in 2026 and beyond. I wish you safe and fully consensual relationships with technology, balanced with plenty of fulfilling organic relationships. Together, we can figure out what it means to be authentically happy, healthy humans, who develop and use technology to promote our long-term wellbeing, rather than undermining it.
With love and solidarity,
Abby
Abigail McHugh-Grifa, Ph.D. is executive director of Climate Solutions Accelerator. Contact her at [email protected]. Go to climateglf.org/summit to learn more about the Climate Solutions Summit.
