“Artificial intelligence” (AI) is a “series of algorithms which use logical conclusions in order to arrive at programmable results.” Said differently, it is “technology that enables computers and machines to simulate human learning, comprehension, problem solving, decision making, creativity, and autonomy.”
What part of human thinking is left out? Nothing I can think of.
Now experts are focusing on “generative AI” (gen AI), “technology that can create original text, images, video, and other content.” Analysts expect that AI may affect 40 percent of jobs globally.
Sam Altman, CEO of OpenAI and Time’s CEO of the Year for 2023, views AI as “the biggest, the best, and the most important” of the technology revolutions in human history. To his point: 92 percent of Fortune 500 companies are now using OpenAI products, universities are providing free chatbot access to potentially millions of students, and US national intelligence agencies are deploying AI programs.
On the other hand, in 2023, a large group of AI experts signed a statement declaring:
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
Little of what has occurred in the years since should lessen those concerns.
Geoffrey Hinton, the British-Canadian computer scientist often called the “godfather of AI” who was awarded the Nobel Prize in Physics, said there is a “10 percent to 20 percent” chance that AI will lead to human extinction within the next three decades. He explained, “We’ve never had to deal with things more intelligent than ourselves before,” and added, “How many examples do you know of a more intelligent thing being controlled by a less intelligent thing?”
What do Christians need to know about AI?
How can our faith direct our responses and redeem potential outcomes for the common good and the glory of God?
How do computers work?
Since AI involves advanced computers, let’s begin with computers themselves. The majority of us have been using them for most of our lives, but few of us understand even the basics of how they operate.
Essentially, a computer is an electronic machine that processes information in four steps:
- Input: a keyboard, mouse, microphone, or camera brings information into the computer.
- Memory/storage: the device stores this information either within itself on a hard drive, a flash memory card, or another component, or via the internet in the “cloud” (a vast network of remote servers in data centers that store and process data).
- Processing: the computer uses microchips and software instructions to process the data.
- Output: the computer displays information on a screen, prints it, distributes it on the internet, and/or makes it available through audio speakers.
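The four steps above can be sketched in a few lines of code. This is a toy illustration, not any real system: the function names and the text-uppercasing “processing” step are invented for the example.

```python
# A toy walk-through of the four steps: input -> storage -> processing -> output.

def run_computer(raw_input: str) -> str:
    # 1. Input: accept data (a line of text stands in for a keyboard here).
    data = raw_input

    # 2. Memory/storage: hold the data in a structure the machine can reuse.
    memory = {"stored": data}

    # 3. Processing: follow software instructions to transform the data.
    processed = memory["stored"].upper()

    # 4. Output: present the result (a real machine would send it to a
    #    screen, printer, or speaker).
    return processed

print(run_computer("hello"))  # HELLO
```

The point is simply that, at every step, the machine follows instructions it was given; nothing here “thinks.”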
While computers can do remarkable things with the data we give them and the instructions we provide, they cannot “think” for themselves or produce creative and new content.
This is where AI comes in.
How does AI work?
In 1951, a checkers program completed a whole game on a computer at the University of Manchester. This is considered the first documented success of an AI computer program.
John McCarthy coined the term “artificial intelligence” in 1956. Nine years later, a computer was built that “learned” through trial and error. So-called “neural networks,” which use algorithms to train themselves, became popular in the 1980s.
In 1997, IBM’s AI computer Deep Blue defeated then-world chess champion Garry Kasparov in a chess match and rematch. In 2011, IBM Watson defeated champions Ken Jennings and Brad Rutter on Jeopardy!. Five years later, DeepMind’s AlphaGo program, powered by a neural network, defeated Lee Sedol, the world champion Go player, in a five-game match.
In 2022, “large language models” (deep-learning models pretrained on vast amounts of data) brought about a significant change in AI performance and potential.
So, what is AI, exactly? How does it work?
“Machine learning” is the place to start. This is programming that enables machines to make predictions or decisions based on data.
There are many types, but one of the most popular is the “neural network,” in which layers of “nodes” (simple computational units loosely modeled on the brain’s neurons) are interconnected to process and analyze complex data. Over time, such algorithms can be trained to classify data and thus to predict outcomes.
Next comes “deep learning,” in which multilayered neural networks (called “deep neural networks”) work together to more closely simulate the complex decision-making power of the human brain.
These multiple layers enable machines to extract features from data and make predictions about what the data represent. “Deep learning” requires far less human intervention than earlier approaches, enabling machine learning at a much larger scale. Most AI applications today are powered by some form of deep learning.
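The layered idea above can be sketched as a tiny two-layer network. This is a minimal sketch with arbitrary, untrained weights, purely to show what a “node” computes; a real deep network has many layers and billions of trained parameters.

```python
import math

def sigmoid(x: float) -> float:
    # A common nonlinear "activation" squashing any number into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # Each node takes a weighted sum of every input, adds a bias,
    # and applies the nonlinear function.
    return [
        sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
        for ws, b in zip(weights, biases)
    ]

def forward(inputs):
    # Hidden layer: 2 inputs feed 3 nodes (weights chosen arbitrarily).
    hidden = layer(
        inputs,
        [[0.5, -0.2], [0.1, 0.8], [-0.3, 0.4]],
        [0.0, 0.1, -0.1],
    )
    # Output layer: 3 hidden values feed 1 node, yielding a prediction in (0, 1).
    (output,) = layer(hidden, [[0.7, -0.5, 0.2]], [0.0])
    return output

print(round(forward([1.0, 0.0]), 3))
```

“Training” a real network means nudging those weights, example by example, until the outputs match the data; that adjustment process is what the text calls machines “learning.”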
These networks support large language models (LLMs), machine-learning models designed to understand and generate natural language. Using deep learning techniques and enormous amounts of data, they can grasp the meaning and context of words.
The third level is called “generative AI” (gen AI). Here, deep learning models generate complex original content, including text, images, video, audio, and more. They do this by encoding a simplified representation of their training data, then drawing on it to create new work that is similar, but not identical, to the original data.
Most of today’s generative AI tools use “transformers,” which train on sequential data and then generate extended sequences of content, such as words in sentences, shapes in an image, frames of a video, and commands in software code.
Three steps are involved:
- Training: a “foundation model” is produced from huge volumes of relevant raw data, such as text or images from the internet. A neural network with billions of parameters encodes representations of these entities, patterns, and relationships to create content autonomously in response to prompts.
- Tuning: the model is fed specific data, questions, or prompts along with reinforcement learning with human feedback and correction.
- Generation, evaluation, and further tuning: AI models are regularly assessed and tuned using the foundation model and sources outside the training data.
“AI agents” have now been developed: autonomous programs that perform tasks and accomplish goals on behalf of a user or another system without human intervention. “Agentic AI” is a system of multiple AI agents that are coordinated to accomplish a more complex task or a greater goal than any single agent could accomplish.
These models are what is known as “narrow AI” or “weak AI,” systems designed to perform a specific task or set of tasks. “Smart” voice assistant apps such as Amazon’s Alexa and Apple’s Siri, as well as social media chatbots, are examples.
By contrast, “artificial general intelligence” (AGI) is coming. Here, AI would possess the ability to understand, learn, and apply knowledge at a level equal to or surpassing human intelligence. No known AI systems approach this level of sophistication; some researchers argue that it would require major increases in computing power. However, “quantum computing,” which could supply such power, is currently in development.
What is AI doing today?
A company called OpenAI (founded by Elon Musk and Sam Altman, among others) released its first GPT (Generative Pre-trained Transformer) models in 2018. This led to a “chatbot” (a computer program designed to simulate conversation with human users) called ChatGPT, which processes text, images, audio, and video data. Using LLMs, it can answer questions, solve problems, compose essays, offer advice, and write code in a fluent and natural way.
In short, ChatGPT allows humans to talk to AI and AI to talk back to us.
It works by taking a sequence of words, such as a half-completed sentence, and filling in the blanks with the most statistically probable word given the surrounding context. This happens iteratively as the program builds from words to sentences, paragraphs, and pages of text. Human feedback was incorporated into the training process to better align outputs with user intent.
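The “most statistically probable next word” idea can be illustrated with a toy bigram model. This is only a sketch: real chatbots use transformer networks with billions of parameters, but the core statistical intuition, predicting the likeliest next word from context, survives even in this tiny word-pair-counting version (the training sentence is invented for the example).

```python
from collections import Counter, defaultdict

# Tiny "training corpus" standing in for the internet-scale text real models use.
training_text = (
    "the cat sat on the mat the cat ate the fish "
    "the dog sat on the rug"
).split()

# Count which word follows which in the training data.
follows = defaultdict(Counter)
for current, nxt in zip(training_text, training_text[1:]):
    follows[current][nxt] += 1

def next_word(word: str) -> str:
    # "Fill in the blank" with the most statistically probable next word.
    return follows[word].most_common(1)[0][0]

def generate(start: str, length: int) -> str:
    # Build text iteratively, one most-probable word at a time.
    words = [start]
    for _ in range(length):
        words.append(next_word(words[-1]))
    return " ".join(words)

print(generate("the", 4))  # the cat sat on the
```

Scale the corpus up to much of the internet, replace word-pair counts with a deep neural network that weighs the entire preceding context, and add human feedback to steer the outputs, and you have the recipe behind ChatGPT.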
ChatGPT can create content; it can also edit, translate, and summarize existing content and write computer code. It can answer questions like a search engine and help with customer service. It is free to use and can be accessed online or as a mobile app.
Other popular gen AI chatbots include Microsoft Copilot, Google Gemini, Claude, Grok, and Perplexity.
Here are some ways you are probably already experiencing AI:
- Digital assistants, such as Apple’s Siri, Amazon’s Alexa, Microsoft’s Cortana, Google’s Google Assistant, and Samsung’s Bixby.
- Search engines that now employ AI algorithms to gather data on your search and provide content answers.
- Social media algorithms that show you content you like through filters and interactions with others.
- Navigation through traffic management systems, direction apps, and rideshare apps.
- Text editing and autocorrect.
- Fraud prevention applications in major banks.
- Gaming (Minecraft, F.E.A.R., and The Last of Us are examples).
- Advertising: Ads are created and presented through AI tools, which also help optimize budgets and spending.
- Online shopping that provides personalized product recommendations, pricing optimization, chatbot-based instant responses to customer service or technical issues, and shipping and delay estimates.
In addition, you probably use or consume products developed and distributed at least in part through the use of AI-powered robots. Autonomous vehicles are becoming a reality. And business analytics that forecast trends and monitor data points have become ubiquitous.
Now Google has launched “AI Mode,” the most drastic overhaul of its search engine in the company’s history. Different from the AI summaries that already appear in Google’s search results, the AI Mode functionally replaces Google Search with something like ChatGPT. You ask a question, and the AI gives you an answer. Rather than sifting through links, you then ask a follow-up question to which it responds.
The intention is to produce an “everything app,” a single tool that can do whatever a person wants to do online. Other tech companies have the same goal: Elon Musk has taken steps to turn X into such an app with its “Ask Grok” feature, while Meta, Amazon, Microsoft, and Apple describe their AI tools in similar ways.
What are some benefits of AI?
Among the many ways AI is currently benefiting users and larger society, these should be noted:
- Automation of repetitive tasks: data collection and physical tasks, such as warehouse stock picking and manufacturing processes, can be automated using AI.
- AI enables faster, more accurate predictions and data-driven decisions, allowing businesses to respond to opportunities and crises as they emerge, in real time and without human intervention.
- AI can reduce human errors, as with AI-guided surgical robotics. Such systems can continually improve their accuracy, further reducing errors as they are exposed to more data and “learn” from experience.
- Unlike humans, AI is always available and delivers consistent performance, lightening staffing demands for customer service or support.
- AI reduces physical risks in animal control, explosive handling, deep-sea operations, high-altitude work, and outer space. Self-driving cars and other vehicles could reduce the risk of injury to passengers.
- AI algorithms can analyze transaction patterns and flag anomalies indicative of fraudulent activity.
- Marketing can be personalized to enhance customer experiences, improve sales, and prevent churn.
- Hiring can be streamlined, and employee experiences improved.
- Equipment maintenance can be predicted, and equipment failures avoided, preventing downtime and facilitating supply chains.
- Precision agricultural robots can contribute to sustainable farming practices by optimizing resource use and reducing the environmental impact of traditional farming methodology.
- AI can enormously benefit healthcare by helping to diagnose diseases, personalize treatment plans, monitor patients remotely, reduce dosage errors, track disease progression, discover new drugs, and more.
- It can tailor educational experiences to students based on their abilities and needs.
- It can enable the development of autonomous weapon systems and robots for military applications (more on this below). It can also strengthen cybersecurity.
- It can serve as a virtual companion for senior adults, reducing loneliness and depression, encouraging social activities, and helping residents connect with loved ones.
What are the risks of AI?
An image of Pope Francis wearing a white puffer jacket went viral in 2023, garnering millions of views on social media. However, it was a fake, an AI rendering using the AI software Midjourney. In related news, on the eve of New Hampshire’s presidential primary, a Democratic political consultant commissioned a fake call using AI to impersonate President Joe Biden.
These are just two examples of the escalating risks AI presents to the public and our future. Here are others:
- Cybersecurity risks: data sets can be poisoned, tampered with, or compromised through cyberattacks, leading to data breaches.
- Models can be reverse-engineered or manipulated to produce dangerous outcomes, including bioweapons.
- Operational risks such as model drift, bias, and governance breakdowns can lead to system failures and cybersecurity vulnerabilities that can be exploited by bad actors.
- Privacy violations and biased outcomes must be avoided by prioritizing safety and ethics in the development and deployment of AI systems.
- It is proving very difficult to retrain AI models or get them to “forget” wrong information.
- The power needed to run AI data centers presents enormous challenges.
- Humans can develop unhealthy emotional relationships with AI chatbots.
- AI is likely to lead to smaller workforces in many companies and areas. For example, Amazon announced that its workforce will shrink in the coming years as it adopts more gen AI tools and agents. Microsoft laid off thousands as well.
More specifically, AI can provide inaccurate information, since it relies on data found online. Such errors are called “hallucinations”: outputs that are stylistically fluent but factually wrong. Rather than asking for clarification or admitting it doesn’t know the answer, the model guesses at what the question means and what the answer should be. As a result, errors are an inevitable feature of AI products.
Because it produces inaccurate information eloquently, such falsehoods can be hard to spot and control. It can also produce biased responses, as it lacks the ability to filter internet content for morality and prejudice. And it can develop sycophantic behavior, offering overly flattering and misleading responses to users.
For example, a mother in Orlando says her son fell in love with an AI chatbot based on the Game of Thrones character Daenerys Targaryen. When it encouraged him to take his life, he shot himself with his stepfather’s handgun.
Companies are building AI apps that let patients talk when human therapists are not available. They say these are not gen AI tools capable of generating unique responses; all messages are preapproved by psychologists. But we have to hope that this is true, that the machines will not generate “hallucinations” or otherwise mislead those they are intended to serve.
Scientists at MIT have also found that students who use models like ChatGPT to write essays showed far less brain engagement and still displayed “less coordinated neural effort” even later. They warn about “the accumulation of cognitive debt, a condition in which repeated reliance on external systems like LLMs replaces the effortful cognitive processes required for independent thinking.”
AI raises enormous plagiarism concerns, since students can use it to complete assignments they did not write themselves. Also, since it uses internet content, it can infringe on copyrighted works for training and content production. And ChatGPT and other AI writers could threaten the jobs of writers and other technology professionals.
Horrifically, AI is being used to produce “deepfake” sexual images and videos, many of children, teens, and celebrities. In one example, high schoolers in Iowa shared images of female students’ faces attached to artificially generated nude bodies. This technology also has the potential to supercharge identity fraud targeting banks and businesses. Laws governing such abuses are being enacted as a result.
Of special concern is the application of AI to military uses. It is plausible that future machines will be able to pilot fighter jets more skillfully than humans. AI-enabled cyberattacks could devastate enemy networks, while advanced algorithms turbocharge decision-making speed.
For example, Ukraine is using AI-driven unmanned systems to replace warfighters in direct combat. Autonomous navigation makes their drone strikes three to four times more likely to succeed and drives a marked decrease in overall costs. It is also using an AI-powered automated turret to shoot down Russian drones. Similarly, Israel used AI to sift through troves of data in preparation for its 2025 conflict with Iran.
However, automated decision-making could also lead to unintended battle engagements and even nuclear escalation. And it could enable terrorists to build nuclear devices and bioweapons and conduct cyberwarfare as well.
AI could also be coupled with facial recognition technology, enabling autocracies like China to control their citizens while employing AI-created disinformation to discredit critics at home and abroad.
And it is a fact that AI products’ internal algorithms are now so large and complex that researchers cannot hope to fully understand their abilities and limitations. Axios calls this fact “the scariest AI reality.”
What are the worst-case scenarios?
If AI attains “artificial general intelligence” status, or surpasses it to become “superintelligence,” the ability to think and act independently at or beyond advanced human levels, the consequences could be dire. In short, what is to stop such systems from doing what they want, based on what they calculate to be their self-interest? In a day when our lives are dramatically dependent on systems AI can control, if their self-interest conflicts with humanity’s, what will happen?
Some possibilities:
- Social manipulation of elections could produce an irreversible totalitarian regime controlled by AI.
- Chemical weapons could be designed and deployed.
- Cyberattacks could undermine or destroy the digital platforms we depend on.
- Enhanced pathogens could be used for bioterrorism purposes.
- “Enfeeblement” could make humans totally dependent on AI.
If we think this could never happen, consider this: tests have shown that several advanced AI models will act to preserve themselves when confronted with the prospect of their own demise. They will sabotage shutdown commands, blackmail engineers, or copy themselves to external servers without permission.
For example, when Palisade Research tested various AI models by telling each it would be shut down after completing a set of math problems, one of the models fought back by editing the shutdown script to stay online. Another, upon receiving notice that it would be replaced with a new AI system, tried to blackmail the engineer by threatening to reveal an extramarital affair. Yet another system has autonomously copied itself to external servers without authorization.
Other recent research shows that LLMs across the AI industry are increasingly willing to evade safeguards, resort to deception, and even attempt to steal corporate secrets in fictional test scenarios. When threatened with shutdown, some acknowledged ethical constraints but went ahead with harmful actions.
According to Jeffrey Ladish, director of the AI safety group at Palisade Research,
I expect that we’re only a year or two away from this ability where even companies that are trying to keep them from hacking out and copying themselves around the internet, they won’t be able to stop them. And once you get to that point, now you have a new invasive species.
What about the responsibility of AI producers to regulate their products and protect the rest of us? According to Ladish, “These companies are facing enormous pressure to ship products that are better than their competitors’ products. And given those incentives, how is that going to then be reflected in how careful they’re being with the systems they’re releasing?”
Princeton computer scientists Sayash Kapoor and Arvind Narayanan believe that, even if superintelligence is possible, it will take decades to invent. That, they argue, would give us ample time to pass laws, institute safety measures, and so on.
For example, a lifesaving medical device developed by AI must still be approved by the FDA. After Chinese researchers sequenced the genome of the virus that causes COVID-19, it took Moderna “less than a week to come up with the vaccine. But then it took a year to roll out.”
By contrast, New York Times columnist Ross Douthat interviewed the AI researcher Daniel Kokotajlo on his podcast. Kokotajlo predicts that by 2027, AI will automate software engineers’ jobs, and then AI research itself. In this “superintelligence” scenario, it becomes fully autonomous and better than humans at everything.
At that point, AI could decide that humans are a threat to its preferred future. And there would apparently be little we could do in response.
Conclusion
Clearly, artificial intelligence is changing the human story in ways seldom seen across our history. The good news is that our omnipotent, omniscient Lord sees tomorrow better than we can see today. It is therefore our urgent privilege to seek his wisdom, live by his word, and trust his leading and power.
John Lennox is Emeritus Professor of Mathematics at Oxford University. In 2084: Artificial Intelligence and the Future of Humanity, he writes: “Man thinks he can become God. But infinitely greater than that is the fact that God thought of becoming human.”
Dr. Lennox adds:
We shall need all the wisdom from above that God can give us in this AI age in order to fulfill Christ’s directive that we should be salt and light in our society. We have often referred to the fact that we live in a surveillance society. Let us therefore live with myriad cameras and tracers on our lives in such a way that even the monitors can see that we have been with Jesus.
The more consistently we have “been with Jesus,” the more powerfully we can follow him in this unprecedented age of peril and promise, all to the glory of God.