Artificial intelligence (AI) is often described as a “black box” whose logic is hidden from human understanding, but how much does the average user actually want to know about how AI works? According to a new study by a team that includes researchers at Penn State University, it depends on how well the system meets users’ expectations. Using a fabricated, algorithm-driven dating website, the researchers found that whether the system met, exceeded, or fell short of users’ expectations directly corresponded to how much users trusted the AI and wanted to know how it worked.
The study is available online ahead of its publication in the April 2026 issue of the journal Computers in Human Behavior. The findings have implications for companies in a variety of industries, including health care and finance, that are developing AI systems, helping them better understand what users want to know and provide useful information in an easy-to-understand way, said co-author S. Shyam Sundar, Evan Pugh University Professor and James P. Jimirro Professor of Media Effects in Penn State’s Donald P. Bellisario College of Communications.
“AI can create all kinds of soul-searching in people, especially in sensitive, personal areas like online dating,” said Sundar, director of the Penn State Center for Socially Responsible Artificial Intelligence and co-director of the Media Effects Research Laboratory. “There is uncertainty in how algorithms produce results. If a dating algorithm suggests fewer matches than expected, users may think there is something wrong, but if it suggests more matches than expected, users may think their dating criteria are too broad and indiscriminate.”
In the study, 227 participants in the United States who reported being single answered questions on smartmatch.com, a fictitious dating site the researchers created for the study. Each participant was assigned to one of nine test conditions and instructed to answer typical dating-site questions about their interests and the characteristics they find desirable in other people. The site then presented possible matches on its “Discover Page,” which it said “typically generates five ‘top picks’ per user.” Depending on the test condition, participants saw either the expected five “top picks” alongside a message confirming that five is standard, or a variation in which the message still stated that five is standard but the system had found two or 10.
“If someone was expecting five matches and got two or 10, they might think they did something wrong, or that something is wrong with the system,” said lead author Yuan Sun, an assistant professor in the University of Florida’s College of Journalism and Communications, who completed her doctorate at Penn State in 2023 under Sundar’s advisement. “If the system works well, you just follow it. There’s no need for long explanations. But what do you need when expectations aren’t met? The broader issue here is transparency.”
That may differ from how people react when other people violate their expectations, said co-author Joseph B. Walther, Bertelsen Presidential Chair in Technology and Society and distinguished professor of communication at the University of California, Santa Barbara, who has researched expectancy violations in interpersonal relationships for many years. When humans violate expectations, the surprised party tends to make judgments about the violator, liking them more or less, and then approaching or avoiding them accordingly.
“Being able to ask, ‘Why did you surprise me?’ is a luxury and a source of satisfaction,” he said, explaining that asking other people why they acted the way they did can be intrusive and awkward. “But we don’t seem to be afraid to hold intelligent machines accountable.”
Study participants had the opportunity to request detailed information about their results and to rate their trust in the system. The researchers found that when the system met expectations and delivered the promised five top picks, participants reported trusting the system without needing an explanation of the AI’s inner workings. When the system overperformed, a simple explanation clarifying the mismatch in expectations increased users’ trust in the algorithm. When the system underperformed, however, users requested more detailed explanations.
“Many developers are talking about making AI more transparent and easier to understand by providing specific information,” Sun said. “There’s much less discussion about when those explanations are needed and how much detail they need to provide. That’s the gap we want to fill.”
The researchers noted that while many social media apps already offer options for users to learn more about how their systems work, those explanations tend to be standardized, laden with jargon, and buried in the fine print of lengthy user agreements.
“A lot of research shows that these explanations don’t work. They don’t achieve the goal of transparency, which is to improve user experience and trust,” Sundar said, noting that many current explanations are treated like disclaimers. “No one actually benefits. It’s more about due diligence than social responsibility.”
Sun pointed out that a large body of scientific literature reports that the better a site performs, the more people will trust it. These findings, however, suggest that is not always the case: even when participants were given far more top picks than promised, they still wanted to understand why.
“I thought people would accept it at face value, because good is good, but they didn’t. They were curious,” Sun said. “It’s not just performance. It’s transparency. More transparency means people understand the system better, which leads to more trust.”
But as more industries adopt AI, simple transparency is not enough, the researchers said.
“You can’t claim you’re off the hook just because the information is somewhere in the terms of service,” Sun said. “We need more user-centered, customized explanations that help people better understand AI systems when they need it, and in a way that meets their needs. This study opens the door to further research that can help achieve that.”
Mengqi “Maggie” Liao of the University of Georgia also contributed to this project.
