As artificial intelligence gets woven into the fabric of everyday life, the human implications of AI demand thoughtful leadership. Vanderbilt is working to ensure that ethics, not just innovation, drives its future.
Artificial intelligence is no longer an abstract frontier. Generative AI is reshaping the most human parts of our lives in ways both subtle and profound.
A chatbot can mimic a comforting emotional exchange.
Creativity, once limited by talent or training, can be outsourced to systems that produce art on demand.
Everyday cognitive tasks, such as drafting messages or deciding what to eat for dinner, can be handed off to algorithms that “think” more quickly than we do.
And as AI becomes the first gatekeeper for jobs, loans and other opportunities, those who understand these systems and how they work stand to gain a significant advantage.
The implications are both vast and deeply personal, and they demand careful, ongoing reflection on the moral and ethical dimensions of using AI. At Vanderbilt, this work unfolds daily across campus, where scholars, students, researchers and staff explore AI’s emerging opportunities while grappling with the questions it raises.
To shed light on these efforts, Vanderbilt Magazine hosted a conversation between Jules White, senior advisor to the chancellor on generative AI and professor of computer science, and Yolanda Pierce, dean of the Vanderbilt Divinity School. Although their backgrounds are markedly different, each of these scholars brings essential insights to this sometimes unsettling but ultimately illuminating conversation about the human implications of AI, at Vanderbilt and beyond.
TECHNOLOGY BEYOND EXPECTATIONS
If you feel as if generative AI came out of nowhere, you’re not alone. Even one of Vanderbilt’s leading AI experts was blindsided by its arrival. In fact, White says, AI “is absolutely a technology that I did not believe would exist in my lifetime.”
What’s more, there was nothing incremental about its arrival. It was as if the first Model T came off the assembly line and immediately took off at 150 miles per hour.
“When it came out, it was totally unexpected and transformative, and I actually think generative AI and AI agents are under-hyped,” White says.
“… a technology that I did not believe would exist in my lifetime.” – Jules White
Most public discussion about AI centers on everyday uses, such as writing emails, generating images and helping with homework, but those examples barely scratch the surface of what AI can already do.
“The real shift is that computers are finally learning to understand us in our own language,” White says. “Instead of people having to learn specialized software, programming languages or rigid systems, we can now express a goal in natural language, and a computational system can interpret it, reason about it and, in many cases, carry it out.”
AWE AND ANXIETY OF AI
For Pierce, the arrival of generative AI triggered a mix of excitement and trepidation. As a self-proclaimed child of the ’80s raised on The Jetsons and Star Trek, Pierce found the new tech tool thrilling. “At the same time, my training as an ethicist and a theologian makes me ask the question, ‘Because we can do something, should we do something?'”
Something like … asking ChatGPT if there’s a god. (For what it’s worth, White is shocked by this prospect and is adamant that one should not pose such existential questions to a chatbot.)
“At the Divinity School, we are thinking deeply about how generative AI impacts our teaching and learning,” Pierce says. “We’re fielding questions and having conversations that I think are really helpful, which center around the idea that generative AI is about knowledge, but many of the religious traditions of the world talk about wisdom. And so, what do you do when you have access to this vast array of knowledge beyond comprehension, but you’re also on a quest for wisdom within a certain tradition? What tools and skills do you still need to have, even though you have so-called answers at your fingertips?”
For White, that’s an easy problem to solve: “Never ask it for one of anything,” he says. “Ask it for five, and then use your own critical thinking skills and human discernment to determine the best answer.”
But this approach runs counter to the broader cultural and corporate emphasis on efficiency and speed. Pierce worries that too few people pause to think, “Here are several possible answers; now it’s time to draw on my own creativity and judgment.”
“Because we can do something, should we do something?” – Yolanda Pierce
“We see this tension in the corporate world,” White says. “Some businesses are using generative AI to be more efficient, and the result is layoffs. Other companies are using it to dramatically expand their capacity, enabling them to do things that weren’t possible before. I am much more impressed with those efforts to be more thoughtful, more creative and more innovative with AI, rather than simply using it to do things faster and faster.”
PLAYING CATCH-UP
AI has advanced at a pace few anticipated, and the development of ethical guidelines and practical oversight is lagging behind. To understand the scale of its acceleration, consider this: In 1908, Ford Motor Company sold about 10,000 Model Ts in the car’s first year of production, at a time when there were roughly 20 million American households. By contrast, when ChatGPT arrived in 2022, it drew 1 million users in its first five days and was widely expected to approach 1 billion by the end of 2025.
With that kind of meteoric rise comes a familiar narrative, one that casts the companies building these tools as reckless drivers speeding toward humanity’s demise. But a more nuanced view, White says, recognizes the simple reality of a brand-new road.
“What we’re experiencing now, and the unease many of us feel around it, is because it’s very new and very early,” he explains. “These conversations were always going to happen, and a thoughtful framework was always on its way. It was inevitable that we were going to have to teach people how to use generative AI thoughtfully and understand the pitfalls of its use.”
And that will require a team effort, like what is already underway at Vanderbilt. Last year saw the launch of the College of Connected Computing, the university’s first new college in 40 years and the home of Vanderbilt’s computer science and data science programs. The college is enabling seamless cross-campus innovation and collaboration, including with the Divinity School.
“It was inevitable that we were going to have to teach people how to use generative AI thoughtfully and understand the pitfalls of its use.” – Jules White
“There is significant interest from our faculty and students,” Pierce says, noting the popular religious studies course on humanity and artificial intelligence.
In 2025, the Divinity School hosted a symposium, “Let the Church Say A.I.: Artificial Intelligence and the Future of Black Digital Religion,” that was cosponsored by the College of Connected Computing. The Divinity School also has a grant from the Wabash Center for Teaching and Learning in Theology and Religion for its project, “We Can Wait No Longer: Educating Theological Educators About Artificial Intelligence.”
AI IS FOR ACCESSIBILITY
“One of the most exciting changes AI brings is that innovation is no longer limited to people who can write code,” White says. “A teacher, a nurse, a musician, a social scientist: anyone can now take an idea and turn it into something functional using conversational AI tools.”
White reports that enrollment across AI-related courses, even beyond computer science, is surging. “Students and faculty are curious not just about how the technology works, but about what it means for creativity, ethics, identity, justice and society,” he says.
That curiosity is reflected across campus. AI initiatives are unfolding everywhere, from Arts and Science and Peabody College to Campus Dining and People, Culture and Belonging. Much of this momentum stems from the 2024 launch of Amplify, Vanderbilt’s custom generative AI platform that’s available to all faculty, staff and students. As one of the first universities to deploy a comprehensive AI program campus-wide, Vanderbilt is giving its community a safe, secure environment to explore the technology’s potential. (All data processed through Amplify stays within Vanderbilt’s protected technological “sandbox.”)
“Anyone can now take an idea and turn it into something functional using conversational AI tools.” – Jules White
“We’re essentially treating the university as a living lab,” White explains. “We have all these functions that exist in the real world and will be transformed by generative AI. So we’re solving our own internal challenges while also learning from the solutions and understanding their ramifications.”
Some of those solutions are delightfully practical. One of White’s favorite examples is Amplify’s uncanny ability to predict T-shirt orders for student groups. “That was totally unexpected, but these groups say it does a great job, and it reduces waste,” he says.
AI THROUGH THE JUSTICE LENS
It may be difficult to imagine the moral ramifications of generative AI successfully predicting the right T-shirt sizes, unless you consider it from the perspective of the haves and have-nots. Who has access to this technology? Everyone with a computer or phone, right? Technically, that is correct, but the greatest advantage will go to those who know how to use it well, and that’s where education comes in.
“The disparities we see today in education in terms of quality and access are at risk of being multiplied and amplified by the arrival of generative AI,” White says. “Because it’s all built on language, the quality of what you get out of it depends entirely on the quality of what you put in. People who already have strong literacy, knowledge about a topic and critical thinking will thrive, while those without them risk falling even further behind.”
The widening gap in who can effectively use AI leads Pierce to a deeper question, one not just about education but about justice. If access to skills determines who benefits, what does that mean for communities already facing inequities?
“I think we first have to recognize the injustices already embedded in the development and use of AI,” Pierce says. For her, the key questions aren’t just about who gets to access these systems, but also who decides what data goes into them, how they’re trained, and how they frame problems and solutions-things that look radically different around the world. She says her colleagues are also concerned about which communities will bear the environmental costs. And they worry about the ways in which AI, in areas like predictive policing and the criminal justice system, might end up deepening existing inequities.
“Like any tool, I believe that AI has the capacity to help solve some of the biggest issues of injustice-but not without guardrails and safeguards that don’t seem to currently exist,” Pierce says.
SHARED RESPONSIBILITY
Safeguards for AI, much like those for cars, won’t come from a single source. Car manufacturers are responsible for building safer vehicles, but they’re not the ones who enforce speed limits. That takes a web of other forces: regulators who set the rules, law enforcement that upholds them, insurance companies that create incentives and drivers who rely on their own judgment every time they’re on the road.
“Those fundamental human needs for community and real connection are still going to matter just as much.” – Yolanda Pierce
With AI, though, many people do exactly what White warns against: They ask it just one question and treat the first answer as unquestionably correct. That overreliance on the technology, Pierce argues, creates another ethical problem, because it concentrates enormous power in the hands of the companies that build these systems while quietly shaping, even reshaping, what people accept as truth.
And many see it as increasingly risky to place that power in the hands of the tech companies. For starters, modern AI systems generate responses by predicting likely patterns from vast amounts of data, not by discerning truth. As a result, they sometimes “hallucinate,” or produce incorrect information. Creating the necessary guardrails, White says, will require thoughtfulness from across government, industry, education and the public.
FAITH IN THE AGE OF AI
As people turn to AI for guidance on everything from daily tasks to life’s deepest mysteries, faith communities are grappling with a dilemma. On one hand, technology can broaden access to spiritual resources-much as the printing press once carried sacred texts into the hands of everyday people. On the other, it challenges long-held assumptions about the spiritual journey and the irreplaceable role of human connection.
“I’m fascinated by how people are beginning to use these tools for spiritual counseling and advice, because we’re all longing for answers to the big questions about the meaning of life,” Pierce says. “I certainly haven’t figured that out yet. But as a person of faith, I know the journey matters more than the answer. So what happens to our religious communities and traditions when someone can bypass the journey altogether?”
Or what happens when the voice from the pulpit is no longer human, but an AI pastor writing sermons for congregations that don’t have enough spiritual leaders? Pierce points to a 2023 church service in northern Germany that was covered by the AP because the sermon was written and delivered by AI.
That one example aside, she says, AI bots aren’t actually poised to take over the ministry. However, they might serve as useful tools for faith leaders: offering ideas for sermons or streamlining the writing process so they can devote more time to individual spiritual care.
“Examples like this, along with what we’re seeing in our courses and research, make one thing very clear: The human in the room still matters,” Pierce says. “Even when a tool can connect us across continents, spiritual life is grounded in presence, intention and relationship. Religion will survive this moment, as it always has, but it will absolutely be changed. And as AI becomes more sophisticated, those fundamental human needs for community and real connection are still going to matter just as much, if not more.”
Community will matter, connection will matter, and so will creativity, according to White. “We’re entering a future where computing adapts to us,” he says. “That means creativity becomes more important than technical ability. Curiosity can become more valuable than memorization of archaic commands. And the ability to express a problem clearly may matter more than knowing how to solve it.”
-Lena Anthony, BS’03
