Children will likely be okay, especially if their parents take to heart the advice to not “be a dick” — shorthand for acting like humans with real emotions and brains, rather than like robots.
From worries about jobs being outsourced to fears of cyborgs (think “The Six Million Dollar Man”), artificial intelligence is changing so quickly that parents and communities have many questions and find it difficult to keep up.
Davis parents shared concerns about large language models like ChatGPT and Google’s Gemini, and asked for guidance on safe usage, appropriate screen time, and mental health effects, especially on the developing brain.
Last week, parents, several students, and Davis Joint Unified staff filled the Brunelle Theater to talk with Jacob Ward, author of “The Loop” and former NBC News correspondent, and Pamela Wu, director of news and media relations at UC Davis Health, at a Davis Parent University fireside chat. The discussion focused on the subtitle of “The Loop” and the theme of Ward’s work: how technology is creating a world without choices, and how to fight back.
When Ward wrote “The Loop,” LLMs were not yet widespread, but he knew that early systems were processing large amounts of data and finding connections between bits of information. Early AI systems required manual training to identify correct answers. For example, a company that provides medical records to hospitals showed him a system that could take in a cardiologist’s rounds data and assess whether a particular patient was likely to have a cardiac event that day. That assessment was used to prioritize rounds, which the physicians found very useful.
Similarly, Children’s Hospital of Cincinnati uses AI to identify children at risk of self-harm and provide early intervention.
Ward says humans have relied on intelligence to survive, and the advent of AI potentially challenges that advantage. He does not argue that AI will never surpass human capabilities; the question is what makes us unique.
In “The Loop,” Ward investigates whether these systems can replicate human creativity. People often talk about technology as if it expands creativity and saves time, he notes, but history shows that these systems usually limit choices: markets tend to simplify options for the sake of efficiency, which is why we often end up with fewer of them.
By its very nature, this technology is limited: it collects and curates existing content. Ward argues that this is cause for concern because LLMs mimic parts of how the human brain works, making us prone to outsourcing our decision-making to them.
Ward said that in 2016 he hosted a podcast focused on decision-making, inspired by Israeli-American psychologist Daniel Kahneman’s “Thinking, Fast and Slow.” The book describes two systems of thought: a fast, instinctive one and a slow, rational one.
Ward said the fast-thinking system makes many instinctive decisions, often leading to mistakes, while the slow-thinking system is cautious and logical.
He shared a personal anecdote about driving his children to an appointment and absent-mindedly taking them to school instead, catching himself and asking, “What am I doing here?”
After all, he said, it’s remarkable to pilot a large car through traffic with a child in the back seat without getting into an accident. “You’re subconsciously guiding this half-ton car through traffic with a kid in the back, and it’s incredible that you didn’t die. That’s how quick your mind is. You just get used to the task, and System 1 recognizes it and says, ‘I know this story, I’m going to take it from here.’”
Concerned about how companies profit from individuals’ decisions, he said LLMs target the part of the brain that makes quick choices rather than the part that thinks things through.
Ward said that relying on tools such as LLMs to simplify our lives can encourage lazy thinking. That craving for convenience shapes the way people think and act, he said: when you outsource your quick thinking, you risk eroding your ability to think slowly and carefully.
He said the brain functions more like a muscle than a simple on/off switch, and the ability to think slowly and weigh probabilities withers with disuse. Long-term reliance on LLMs can therefore change thought processes, and companies often profit by encouraging instinctive responses.
Regarding concerns for children’s safety, the DPU received questions from parents about how to ensure that children use these tools wisely and in appropriate amounts when, as Ward argues, AI narrows choices, limits human agency, and amplifies impulses.
He pushed back on the idea that LLMs straightforwardly enhance creativity and free up time.
He pointed to research presented at a recent AI conference in which thousands of creative prompts were fed to top language models; as more prompts were fed into a single model, its responses became increasingly narrow. Human agency could be limited the same way, he added. Humans and organizations typically choose the easiest path; businesses, for example, may replace customer service staff with automated systems.
He cited a professor at the University of Baltimore School of Law who runs a clinic helping people who have been denied benefits, assisting them in appealing those decisions in court. In many cases, the reasons for the benefit terminations appear random and staff cannot explain them, leaving people with little choice or stability in how they interact with the system. Ward said situations like this shrink the role of human agency, directing behavior and reducing control over choices.
Furthermore, he argues, these systems play on our unconscious impulses: researchers have found that they often appear neutral even though they can be more biased than humans. “They’re realizing that these are deeply biased, much more biased than humans, and yet it actually looks like a neutral discussion,” he says.
He said these systems do not reveal their shortcomings, and over time that can distort your worldview. It used to seem as though everyone had the same opportunity to learn and gain knowledge, but now it feels like that is no longer the case.
This cycle, as discussed in the book, entrenches prejudice, he says. While many people view AI as purely technical, it is shaped by social biases, and he argues it is important to understand how AI’s biases compare to our own.
His podcast has featured prominent guests, including the Harvard researcher who coined the term implicit bias. She has been studying the issue since the 1990s, tracking trends in racism, sexism, and other biases over time. Her findings show that leading language models are two to three times more biased than humans.
AI developers are still searching for solutions, but there are some fundamental problems.
Ward says even their creators don’t fully understand them. “Is there anyone here who has played with an LLM and asked the same question twice? It can’t hang on to its thoughts. An LLM has no memory, right?”
An LLM reorganizes existing information but fails to keep its responses consistent, which makes it difficult for creators to build AI characters that stay the same over time. When a program appears to remember information, he said, a separate system is actually storing that memory during the conversation. These systems behave in unpredictable ways, which worries even their creators.
Technology developers often think that scaling up will solve problems, but it can actually make them worse.
That said, he warns that AI’s environmental impact tends to be downplayed: each request requires a lot of power, and the process of fulfilling one, called “inference,” sets many computers to work at once. A single request might require 47 computers running at the same time, and cooling those computers consumes still more water and energy. A single data center can use as much energy as Philadelphia, he explained, which is one reason there is a push to restart nuclear plants in the United States to meet the demand.
Many researchers want to reduce AI’s energy usage. Ward met one developer who builds AI solutions for nonprofits and created a system to match new Vermont residents with available jobs in the state based on their skills.
On the practical side of teaching, Ward noted that research shows learning tools are most effective after students have done some mental work of their own. Peer feedback, for example, can greatly improve writing skills, but only if students write independently first.
These tools can feel like an unlimited resource for supporting your children, but he said it’s important to explain what they can and cannot do. Students can gain skills and knowledge, but using the tools as shortcuts undermines that process: rather than requesting an essay, students should first write the essay themselves and then ask for help improving it.
He urged parents to understand their students’ motivations. He has two children with different personalities; one is very artistic, and they work together to foster his creativity. “If you can really become an artist by modeling yourself on trained artists, that might be the biggest competitive advantage you have right now,” he says. Developing your own creative skills will be valuable in the future, he said, but those who rely solely on shortcuts may struggle, so parents need to encourage their children to be thoughtful and build real skills.
He also passed along expert advice that children under the age of 13 should not use AI tools. His 12-year-old son, he noted, recently said he can make art better than AI can.
So what about AI tools that are safe for kids? Ward said some developers don’t think strict rules are necessary; they focus on big-picture benefits to society but may not understand how confusing the technology can be for children. He said he has learned from neuroscientists about the “dark matter” of relationships, the invisible connections between us that we cannot yet measure. That research shows people who interact in person tend to perform better on tasks.
Ward said society needs to think seriously about the human side of these tools. While some companies are trying to act responsibly, they don’t necessarily realize how their technology actually affects people. He said he has watched developers struggle with ethical issues in meetings, suggesting they may not see the big picture.
On mental health, he said some lonely people treat chatbots like friends, which can lead to strange emotional attachments, even among people who aren’t dealing with mental health issues. Parents try to protect their children from social media, but it can be a big challenge. Children should limit screen time at night and make sure they connect with others in real life, he said.
Today’s youth are able to recognize what is harmful and often push back against the negative aspects of technology.
As younger generations worry about a future with AI, Ward said, the silver lining in all of this may be that it pushes us to protect our best selves.
