Nathanael Fast, PhD: One of the big issues that I see with artificial intelligence is that we’re building these powerful AI systems that are shaping the world. And they’re influencing the future for everyone who lives in the world, but also everyone who will live in the future. But they’re being created by a tiny minority of the world’s population. So a very small number of people in a room are building these AI systems. And one of the problems that emerges with that kind of relationship is that sometimes the benefits and the harms of the AI systems that we’re building are unevenly distributed.
Kim Mills: Welcome to Speaking of Psychology. I’m Kim Mills. We’re doing something a little bit different this week. In early January, the American Psychological Association joined the Consumer Technology Association at CES, the world’s largest technology trade show, for a series of discussions about how technology is shaping human behavior and about how psychological science can help the development of more ethical and effective technology.
Amid the robots, gaming systems, smart tech, and AI-powered everything on display at CES, we talked about some of the field’s biggest questions: How can we harness the power of artificial intelligence ethically? Can digital health interventions help solve the mental health crisis? How should companies approach online privacy? Can video games promote mental health? After the discussion on AI, we caught up with panelist Nathanael Fast, director of the Neely Center for Ethical Leadership and Decision Making at the USC Marshall School of Business.
Dr. Fast is an expert on technology adoption and studies how power and status influence decision-making, how technology interacts with human behavior, and how AI will shape our future. We talked about why people might make less ethical decisions when they’re acting through AI agents, whether most of us trust AI, and why it’s important to make sure that the potential benefits of AI flow to everyone and not just the most privileged. Here’s our discussion. If you want to hear more from Dr. Fast and the other psychologists who spoke at CES, you can find all the talks at ces.apa.org.
So, Dr. Fast, I want to say thanks for joining us here at CES. We’re in Las Vegas at the most incredible technology show in the world, and you were part of a panel presentation this morning on artificial intelligence and ethics, and I just wanted to ask you a few questions coming from that and from the work that you do. So I’m wondering, as we talk about AI technologies, they’re being developed at an amazingly fast pace right now, and there are doom and gloom predictions out there about what AI is going to do to our lives, but there are also people who are saying that our lives will be changed for the better. Where do you fall on that spectrum?
Fast: Thanks. It’s great to be here. I’ll preface it by saying I’m pretty excited about AI and where technology is going in today’s world. It’s a very exciting time to be alive, but I definitely get both sides. I understand the people who are very concerned, and I share a lot of the concerns of those you might say are in the doomer camp. However, I also share some of the excitement of those who might be optimists. I think it’s pretty naive to think that it’s going to be one way or the other. I mean, I don’t think we’ve ever developed a powerful technology or tool that had only positive uses or only harms for society.
And so for me, I see myself as kind of a measured optimist. I’m optimistic because, as psychologists, we know that we often get what we are looking for, the self-fulfilling prophecy. And so I think it really makes sense for us to be optimistic about what we can achieve as humans in response to these new technologies. But I’m very measured about it because I think there are tremendous harms and downsides to these technologies if we don’t deploy, adopt, use, and govern them effectively.
Mills: In your research, you look at how interacting with AI may change human behavior and the way we relate to each other. For example, in one study you looked at how using an AI assistant might actually make people behave less ethically, and that’s what you were talking about this morning, the ethics of AI. Can you talk about that study—what you did and what you found?
Fast: Well, I mean, we’re actually still in the early phases of this, and I described this in a paper with John Gratch, and John and his team are the ones who did that particular study. But what we’re finding overall is that we can often hide behind the AI that we’re using to kind of mediate our relationships. And so when we are face-to-face as we are now, there’s a greater sense of kind of evolutionary pressure that I think is a positive one to kind of treat each other fairly and so on, and not manipulate each other or do harm. But when our interactions are mediated by an AI assistant or an AI model that’s negotiating on our behalf, then we can kind of hide behind that. And you can also think about it in terms of algorithms. If you’re hailing a ride from Lyft or Uber and the price looks a little low, it’s a little easier to just kind of let that algorithm do its thing and it’s a little bit harder for us to step in and do something about it.
And so I think what we were trying to say in that article was just that there’s some potential for moving in an unethical direction, maybe more so than we would naturally, the more we adopt AI. In another study, with Roshni Raveendhran and Peter Carnevale, we put managers in situations where they had to engage in socially unacceptable behaviors like micromanagement, and they much preferred to manage virtually rather than in person when they were doing those types of things. And so I think that also speaks to the psychology that we kind of hide behind these technologies in some cases.
Mills: Speaking of hiding behind technologies, you’ve studied how much people trust AI, for instance, whether, if they’re going to be monitored at work, they’d rather be monitored by a human or by an AI system. What have you found?
Fast: Well, it’s interesting. I found kind of opposite results across a couple of different projects, but they actually make a lot of sense. In one project with Roshni, again, we found that people are more willing to be tracked when the tracking is done by technology only. So if people are at work and they’re going to be tracked by their computer or a smartwatch or something like that, monitoring their performance, they’re much more willing to say yes to those situations than when those same tools are being used to track them but there’s a human who’s analyzing and looking at the data. And we found that held even when we told them what the data were going to be used for. What we argued and found in our paper was that people feel less autonomy when they’re being watched by another person; that creates a kind of social pressure to perform and to not be judged negatively, whereas the technology doesn’t judge us, it just measures our performance.
And so people tend to have a greater preference for that. But in another paper, with David Newman and Derek Harmon, we found that once we have all these data that we’re collecting in the organizational context, we can actually use them to make HR decisions like hiring, firing, and promotions, and perhaps remove some of the human bias that enters into these decisions. And one of the biggest things that employees complain about is human bias in these decisions.
But what we found is that in that case people actually prefer humans to make the decisions. We would give them a whole set of different HR-related decisions, and when we told them those same decisions were made by an algorithm or AI, they viewed them as less fair than when they were made by humans. And so we have this kind of juxtaposition: we’re more willing to be tracked and provide our data, which might be problematic for privacy, and once we have those data, we could actually use them to make good decisions. But in those cases, that’s when we want to step in and say we don’t trust it. So it’s kind of interesting.
Mills: But with some of these HR applications of AI, because AI is basically scraping what’s out there in the world, pulling it together, and then using the data to interpret, it becomes a tool for us. What I’m trying to get at is that AI can be as biased as we are—the people who create the initial data that go into the AI. And so we’ve seen some HR instances where AI makes the same biased decisions that human beings make. How can we counteract that?
Fast: So Amazon famously tried to create a hiring algorithm, and they had to scrap it in the end. They tried to overcome the bias; it kept suggesting that they hire men and not women. So they took names out and tried to strip everything out of the resumes. But the AI is very good at assessing whether you played women’s sports in college, or whether you used certain adjectives to describe your performance that men and women differ on. And so they had to scrap it. So in some cases, I think we actually just need to not rely on AI when we can’t remove that bias from the system.
In other cases, Sendhil Mullainathan and his team of researchers have this great study where they found racial bias embedded in a decision-making algorithm that was used by hospitals in a medical context. They did an audit of the algorithm and were able to find that it was making decisions based on monetary factors that ended up being racially biased, and they were able to go in and fix those. And as Sendhil talks about, once you fix a decision-making algorithm, it doesn’t make that mistake anymore. But you can tell people that they’re making biased decisions and they continue to make them year after year. And so it’s not as easy as just saying, let’s throw it out altogether, or let’s always use it. I think we have to be really smart about how we’re doing this.
Mills: Your panel this morning was about ethics and AI, and I got to thinking about a story I had read about somebody who really wanted to be on the podcast that Esther Perel does, the psychologist who talks about relationships and actually works with people on her podcast. And because he couldn’t quite get on the show, he created an AI Esther Perel. And I think we’re seeing more of these kinds of things happening. I read about an AI Marty Seligman and other psychologists, because they have a big body of work out there that can be synthesized. Is that an ethical thing to do? And if somebody wanted to make an AI of you, how would you feel about that?
Fast: Yeah, well, I definitely would feel concerned, and that’s one of the things that I’m concerned about—a lot of people are concerned about, especially as we head into the election. With 20 to 30 seconds of a person’s voice, you can actually create a deepfake that sounds just like that person. We can do that with videos now too. This is new territory for us. We have to figure these things out. I certainly can’t say that I’m comfortable with that idea. I think we will find our way as a society and try to figure out how to handle those situations, but it’s probably going to be messy, and this upcoming election is actually going to be quite messy as well.
Mills: You have also talked about the need to democratize AI to make sure that the benefits get to everyone in all parts of the world. What does that mean? What do you mean when you say democratize AI?
Fast: A lot of people are talking about democratizing AI, and they mean different things. So I think that is exactly where we should start with that question. The reason why it’s a big priority for me is that we live in a world where right now we’re developing powerful AI systems, and these AI systems are affecting the entire world. And not only are they affecting the entire world, but all future humans who are lucky enough to walk the face of the earth are going to be affected by these systems too. And they’re made by such a tiny minority of the existing population in the world today. So that’s a problem from my perspective. I’ve studied power for most of my career, and that’s a power imbalance if you’ve ever seen one. And so we do need to democratize AI, meaning we need to infuse the AI systems that we’re building with more input from around the world.
And there are different things you can democratize. I want to make this point here because I think big tech often talks about democratizing AI, and I don’t really like the way they’re talking about it. They mean creating cheap products that lots of people can use, and there’s nothing inherently wrong with products that a lot of people can use. But in the case of something like social media, it’s a free product that everybody gets to use, and in many ways humans are kind of the product there. I don’t know that democratizing access to that is really the positive force for good that democratizing implies. And so when I talk about democratizing AI, I’m really talking about democratizing the design of the systems, democratizing the use of the systems, and democratizing the governance of the systems, and really finding ways for more people’s voices to be infused into each of those three areas.
Mills: A lot of educators worry that artificial intelligence is going to change teaching and learning for the worse by letting a lot of students offload their writing and other work to AI chatbots. You’re a professor as well as a researcher. Is this something you worry about? How do you approach it in your classroom?
Fast: Yeah, no, I mean, I’m not worried about it. Maybe I should be, but I’m not worried about it because as long as we change how we teach, I think we’re going to be okay. And I actually believe we need to change how we teach. I think we’ve needed to change how we teach for a long, long time. When we make our classrooms more experiential, more exploratory, more team-based, I think people learn a lot more through working on projects together. And so I think AI actually lends itself really well to students exploring new tools and new ways of using AI. But it does require that we change, and it requires that we find good ways of using AI. Especially when we’re trying to teach writing, we’re probably going to have to have people work on that in the classroom and flip the classroom. If they do it at home, they’re going to use ChatGPT in many cases. But I think we can handle this. I think we can find good ways to continue to educate people.
Mills: Despite these concerns, AI does have the potential to transform our lives in beneficial ways as well. What are some of your biggest hopes for AI at this point?
Fast: There are a lot of benefits, and that’s another reason why it’s important to democratize the use of AI, which I think comes from educating and getting the word out to populations that might not otherwise hear about it. I was recently in Kenya touring the Kibera slum, and my tour guide lives there in Kibera. I asked him if he had ever heard of ChatGPT. I was across the world, so I thought I would take advantage of the opportunity to ask someone. And he surprised me: he had heard of ChatGPT. He had heard about it from somebody from the UK who was on a tour, and he actually uses ChatGPT to increase bookings. He takes its advice about how to take pictures, how to arrange his website, and so on.
And so I think there are a lot of benefits for people around the world who don’t have access to great education or to the personal tutors that the wealthy often have, and that’s something I’m really excited about. I’m really excited about people getting access to good tutors, and Khan Academy has created a tool on that front that’s pretty exciting as well. I’m also excited about the possibility that AI will bring people together to address these challenges. Maybe I’m a little bit blindly optimistic about this, but I think that there’s potential here.
And so at the Neely Center at the USC Marshall School of Business, which I direct, we have something called the Neely Indices, where we’re tracking user experiences. How are social media platforms affecting users? How are AI models affecting users? How are mixed reality technologies affecting users? One of the things that we found with AI is that both Republicans and Democrats are concerned equally across the board. They’re excited and concerned about AI in equal amounts. We haven’t politicized this issue yet. Of course, we tend to politicize every big issue, so that’s a concern, but I think it’s also an opportunity for us to work together. And so that’s one thing I’m also hopeful about.
Mills: One of the things that struck me that you said on the panel here at CES was that we need to slow down the development of AI. Why do you think that? And is it even possible to make business slow down?
Fast: Well, that’s another good question. To clarify, there’s a gap, the speed-capacity gap that I mentioned, where we’re developing and deploying new AI and new technologies faster than we’re able to handle. We don’t have the capacity to make decisions about these new technologies, and we don’t know how they’re affecting us. And so it’s really hard to govern, to set policy, to design technologies more effectively and with greater health benefits, when we don’t really know how they’re affecting us. Of course, you can close that gap by either slowing things down or speeding up the capacity, and I’m actually not a big proponent of slowing things down. I do think that one way to slow things down effectively is to hold companies more accountable for the harms that their technologies are creating.
When we do that, they’re going to slow themselves down by choice because they don’t want to put technologies out there too quickly. And so I think that kind of slowing down is good, but I don’t like the idea of slowing down simply to slow down. The reason is that we learn from each iteration. If you think about something like large language models, with each iteration, as we deploy it, we learn a whole bunch of stuff, and that gets embedded into the next model. And so if we’re trying to learn as much as we possibly can by, say, the year 2030 or 2040, the more iterations we can have between now and then, the more we’re going to learn and be able to create safer models.
The caveat is that there are times when we’re deploying the technology so quickly that we’re not actually able to learn and give adequate feedback between the iterations. And so that’s where we’re really working hard to try to elevate the capacity of society. For me, as an academic, one of the best ways that we can improve society’s capacity to handle the speed is to collect data more quickly and share it broadly. With the Neely Indices, for example, we’re collecting data about all the different platforms, not just one. We’re making it public so the companies feel some pressure, and also some incentives: when they do good things, they get credit for that. And then we’re also sharing the data with researchers so that they can get research out there quicker. So I’m more bullish on the idea of speeding up our capacity than I am on slowing down the tech.
Mills: But when it comes to punishing a developer that is doing harm, how would that happen? I mean, we have watched Congress try to wrestle with social media, which I think 80% of them don’t understand or use. And then we have organizations or regulators like the FTC, but I mean they’re also slow on the draw as well. So does the punishment just come from the marketplace?
Fast: I think the marketplace is the best place for punishment to come for the companies—by not buying products and abandoning products. You see that with companies like X, or Twitter, and you see some of that market pressure happen as a result of decisions that the companies make. That’s another one of the benefits of the indices that we’re working on through the Neely Center: making those data public. For example, Twitter ranked very high on our initial set of indices for people reporting content that was bad for the world or bad for them personally, and lower on connecting with others or learning new things, whereas LinkedIn, for example, scored decently high on learning new things and connecting with others, but really low on the harms. And so that’s data out there that’s relevant to users, and they can make decisions about where to spend their time.
One thing I do want to note, and this is really messy and a lot of people have a lot of arguments about it, is that maybe the accelerationists are naive, or maybe the people who are saying let’s take a pause and slow down are naive. I actually think it’s a messy debate, and that’s what a healthy democracy looks like. The Future of Life Institute, for example, had the big pause letter, let’s pause for six months, things like that. You could critique those things, but I actually think they got policymakers’ attention. And when you look at how little policymakers understood about how social media was working back when they were dealing with that, and compare that to how much they understand about AI, there’s a big difference. They’re actually a lot more skillful with regard to AI, and they have room to grow. But I think a lot of that is because of the concern that’s been generated. So I think everything comes with its pros and cons, but I do want to acknowledge that some of those calls had the effect of getting everybody’s attention. And I think that’s a good thing.
Mills: Is there enough transparency in AI as it’s developing today?
Fast: No, that’s an easy one. We need more transparency, and I think the companies that are building it have to have a measure of mission, almost a missional quality, to what they’re doing. We’re building AI; we’re doing something that’s never been done before. And so part of that mission is to be transparent about what we’re building, how we’re building it, and what’s going into the training data, and maybe weighing in on some of the research findings that are out there. There’s just a huge stream of research; it is not fun to try to stay on top of this field. It is unbelievable. And so I think if companies were more transparent about what they’re doing and how they’re doing it, as well as weighing in on some of the research and doing research of their own, we’re going to be better off the more that happens.
Mills: And what are the next big questions for you? What are you working on?
Fast: I’m putting a lot of effort this year into democratizing AI in the sense that we’ve been talking about, getting more input from around the world. I’ll be doing a lot of international travel to talk to people in different areas. We’re expanding our indices to Poland, to Kenya, to Somalia, and to other countries to collect more data from people whose data don’t typically make it into these conversations.
And then the second thing is really working on purpose-driven technology and trying to shift the paradigm away from maximizing engagement, or maximizing profit through engagement, and toward having tech designers, but also consumers and policymakers, think about purpose-driven technology: What is the purpose of using this particular large language model or this VR headset? What am I trying to achieve with it? Is it achieving that purpose, and can we measure that? And what are some of the side effects or harms that come from it? We do that with medicine, with new drugs, and I think we need to do that with a lot of these new technologies because they’re getting to be quite powerful. And so I’ll be focusing on how to shift the paradigm to focus more on purpose-driven tech.
Mills: And just to illuminate for our listeners what you mean by that. Are there some examples of purpose-driven AI technologies right now?
Fast: Sure. I mean, think about all the conversations about the metaverse. You could think about virtual reality as an opportunity to create a virtual space that we push or incentivize people into, and they spend a bunch of time in virtual reality. It’s this container, and we make money because we collect lots of data from them while they’re in there, almost like a glorified social media. It’s unclear what the purpose of that is. So that’s a profit-driven or engagement-driven model, and it’s not actually working; people are not rushing to Meta’s vision of what the metaverse could be. Instead, there are so many examples, and we heard many of them today in the APA sessions and others, where people are using virtual reality to treat pain, or to treat Parkinson’s, or to improve learning, improve optimism about the world, and make a difference in the world. Those are very purpose-driven experiences that we can create for people. And I think the more we do that, the better off we’ll be.
Mills: Well, Dr. Fast, I want to thank you for joining me today and for being here at CES, participating in the panels that APA did today.
Fast: Well, thank you. It was my pleasure.
Mills: You can find previous episodes of Speaking of Psychology on our website at www.speakingofpsychology.org or on Apple, Spotify, YouTube, or wherever you get your podcasts. And if you like what you’ve heard, please subscribe and leave us a review. If you have comments or ideas for future podcasts, you can email us at speakingofpsychology@apa.org. Speaking of Psychology is produced by Lea Winerman. Our sound editor is Chris Condayan.
Thank you for listening. For the American Psychological Association, I’m Kim Mills.