A version of this appeared in an article on artificial intelligence in Issue 19 of Word on Fire’s Evangelization & Culture journal.
I'll confess: I'm not afraid of artificial intelligence. But maybe I should be.
Artificial intelligence (AI) gives machines the ability not only to do what they are programmed to do, but to learn to act more efficiently, to improve their own programs, and to develop new capabilities and strategies that humans cannot predict or control.
That is, at first glance, something to be afraid of. But as someone born in 1969, I've spent my whole life in existential fear of forces I can't understand or control, and it's getting a little old.
Like any other American of my generation, I have feared world hunger, a new ice age, Japanese economic dominance, Chinese economic dominance, economic collapse, and terrorists. The world keeps finding new things to fear, but there is nothing new under my sun.
I hear concerns about political extremism, but I remember watching Patty Hearst on the evening news. My kids have seen UAPs with their own eyes on YouTube, but I've seen UFOs with my own eyes in the daily paper. Critics raise the credible threat of an AI disaster, but I survived the credible threat of a Y2K disaster.
Not all of these fears were unfounded. Some even came partly true. In fact, our fears put up guardrails and helped us move forward. Technology has always worked that way.
Every technology has planned benefits and unforeseen losses, but the Church responds to them all the same way: we embrace them. Thus the first thing the printing press printed was the Gutenberg Bible, the Catholic Vulgate. But then, after Martin Luther posted his 95 Theses the old-fashioned way on a church door, the printing press made them the first posting to go “viral.”
Guglielmo Marconi broadcast a speech by Pope Pius XI on the radio in 1931 and introduced the broadcast with what could be considered the Church's technological mission statement: “With the help of God, who has placed many mysterious natural powers at man's disposal, I have been able to prepare this instrument which will give to the faithful throughout the world the joy of hearing the voice of the Holy Father.” The Church continued to give the world that joy again and again for the next 100 years, harnessing mysterious technological powers such as the phonograph, cinema, television, compact discs, and the Internet.
As a member of that Church, I refuse to join those who cower in fear; I choose to embrace technology. The way I see it, it's a case of “Fool me once, shame on you. Fool me twice — after decades of dystopian movies, every election since Reagan, every recession since Carter, and a lifetime of unspeakable fears about our times — shame on me.”
The boy has been crying wolf my whole life, and I have grown used to his voice.
But then I remembered something very unsettling: what makes “The Boy Who Cried Wolf” such a compelling story isn't that the boy was wrong so many times, but that when he finally spoke up, he was right.
So, is he right this time?
The Gorilla Problem
What are we afraid of about AI? I think we fear the things that a chess piece would fear.
In The Age of AI, Henry Kissinger and his co-authors describe how AlphaZero beat Stockfish at chess in 2017. Stockfish is a classic computer chess opponent: programmers fed the machine the best human chess strategies, and it can instantly recall the best moves ever made. AlphaZero, developed by Google's DeepMind, was taught nothing about human strategies; it was given only the rules and objectives of the game.
After just four hours of training by playing against itself, AlphaZero defeated Stockfish decisively, winning 155 games and losing only six. How it won was the terrifying part: AlphaZero sacrificed some of its own most valuable pieces, including the queen, to pounce on its opponent with a ruthless efficiency no human would have imagined.
“Chess has been shaken to its core,” Grandmaster Garry Kasparov said after the match. Kissinger and his co-authors fear that “security and world order” will soon be shaken to their core as well. AI's unique capabilities “may make it inevitable that we will have to hand over important decisions to machines.” And if that happens, what precious knights and queens will AI sacrifice to reach its goal?
In his book The Coming Wave, AI entrepreneur Mustafa Suleyman worries that his own companies, DeepMind and Inflection AI, could be part of the unexpected rise of a new kind of superpower.
He envisions a future in which anyone “with graduate-level training in biology or a commitment to independent online learning” can get their hands on a DNA synthesizer and “create new pathogens that are far more contagious and deadly than those found in nature.” Other malicious actors may go beyond the “garage tinkerer” to weaponize AI technologies in ways we can’t even imagine.
According to him, a tsunami of AI applications will sweep away our preconceptions, along with our sense of safety and security. In fact, “garage tinkerers” and bad actors may be better equipped to make AI breakthroughs than bureaucracies, precisely because they can skirt due diligence and legal constraints. Suleyman fears huge power shifts, a rapid “hyper-evolution” of AI capabilities, and an endless acceleration toward all-purpose AI applications, and when it’s all over, he asks, “Will humans be in the loop?”
“Historically, technology has been 'just' a tool,” Suleyman writes. “But what if those tools were alive?” This leaves us with the “gorilla problem”: just as weaker humans put stronger animals in zoos, AI “may mean that humans are no longer at the top of the food chain.”
Descent into Egypt
When I asked Dr. Charles Sprouse, professor of engineering at Benedictine College in Kansas, where I work, about fears of AI, he gave me a startling list that proves that fears, like politics, are both global and local.
Sure, we fear AI weapons, drones, and robots that hunt and kill with superhuman strength and ability, but we also fear autonomous cars. What decisions will they make? And what malfunctions might change those decisions?
We also fear a bolstered version of “fake news” as clever programmers with dubious intentions mislead the masses with political deepfakes. But we should also fear fake communication. If you use the metaverse feature to start chatting with your wife in virtual reality, how can you be sure you’re really talking to her?
We fear government surveillance by machines that can recognize our faces, our bodies, the way we walk, and monitor what we do in our backyard. But we should also fear corporate AI that knows what and how much we like to eat, where we go, how often we go out, and what we think about online.
Many of us fear technology will take our jobs. Authors, legal professionals, and educators fear ChatGPT, but so do software designers, drug researchers, and lab technicians, each facing powerful new tools of their own.
At first glance, these all seem like very new fears, different in nature from the older ones. But is that really the case?
So we fear AI as monster and AI as master: a Terminator that doesn't or can't care about what gets in its way, or a Matrix that enslaves us to its ends. AI could take away our autonomy, our freedom, the lives we choose, our privacy. Or it could wipe out civilization as we know it.
But is this really a new kind of fear?
In fact, AI feels like a throwback to the time of the Egyptian slave masters, when “a new king arose in Egypt, but he did not know Joseph. And he said to his people, 'Behold, the people of Israel are more numerous and mighty than we are; come, let us deal with them subtly'” (Exodus 1:8-10). We fear robot drones, but if we recall the Old Testament, entire tribes were also wiped off the map with impunity.
It would be the height of irony if, separated from God, all our ingenuity managed to conjure up a new and greater slave master, an artificial Pharaoh who would enlist us in a massive undertaking to build a pyramidal monument to Mammon, an undertaking none of us can imagine because its scale is too great for any one human mind to comprehend.
But maybe that's not the real fear after all.
The Real Monster Is Loneliness
I started off saying I'm not afraid of artificial intelligence, and I really am not. At least, not in the way that I've described fear. One thing I've learned in my life with new technologies is that we're always afraid of the wrong things.
Perhaps what we should really be afraid of is what Sigmund Freud described in Civilization and Its Discontents. He wrote:
“If there had been no railroads for long-distance travel, my child would never have left his native town, and I would never have needed the telephone to hear his voice. If sea travel had never been introduced, my friend would never have gone to sea, and I would never have needed the cable to ease my fears for him.”
We feared all sorts of dire consequences from each of these technologies, but the worst consequence was one we never feared: loneliness.
And that’s what we should fear most about AI: a world even more disconnected from the thing that makes us human — each other.
Image: Bua Noi, B20180
