This month on YouTube, the Harvard Business Review shared a discussion with Harvard Business School professor Tsedal Neeley (recorded last year) as part of a series it describes as “a podcast of legendary case studies.”
“We are in the midst of an AI revolution,” said Brian Kenny, chief marketing and communications officer at Harvard Business School, who hosted the discussion. He also cited warnings from critics, including Elon Musk’s quip that with artificial intelligence, “we are summoning the demon.”
A very concerned Kenny asks perhaps an even more important question: “Whose job is it to ensure that such a vision never materializes?”
And the podcast digs into what can go wrong when companies try to police themselves, through the case of prominent AI researcher Dr. Timnit Gebru.
Check your work
Gebru and Professor Neeley go back a long way. Neeley met Gebru when Neeley was a first-year doctoral student at Stanford University and Gebru was a freshman undergraduate there. “And you knew this woman was going to be special… Timnit is one of those people who sees things clearly. Today everyone is talking about AI, AI ethics, AI bias. She was thinking about this over ten years ago.”
Gebru went on to earn her own Ph.D. in computer science from Stanford, and by 2018 was working with AI researcher Joy Buolamwini of the MIT Media Lab to analyze facial recognition software from three companies. Their study drew attention to a glaring failure. Neeley summed it up: “The darker the skin of a person, the less likely it is that an AI will accurately recognize a face.” Gebru, she pointed out, “was one of the first people to see it and document it.”
“The clarity with which she recognized the problem of AI bias early on is simply astonishing to me, because everyone is talking about it today.”
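The core technique behind a finding like that is a disaggregated evaluation: instead of reporting one overall accuracy number, results are broken out by demographic group so disparities become visible. Here’s a minimal sketch of that idea in Python; the group labels and numbers are hypothetical stand-ins, not the study’s actual benchmark or results.

```python
# A minimal sketch of a disaggregated accuracy audit, in the spirit of
# the Buolamwini/Gebru study. All data here is hypothetical.
from collections import defaultdict

# (group, prediction_was_correct) pairs from some face-recognition system
results = [
    ("lighter-skinned", True), ("lighter-skinned", True),
    ("lighter-skinned", True), ("lighter-skinned", False),
    ("darker-skinned", True), ("darker-skinned", False),
    ("darker-skinned", False), ("darker-skinned", False),
]

totals = defaultdict(int)
correct = defaultdict(int)
for group, ok in results:
    totals[group] += 1
    correct[group] += ok  # a True prediction counts as 1

# Per-group accuracy makes the disparity visible
for group in sorted(totals):
    print(f"{group}: {correct[group] / totals[group]:.0%} accuracy")

# A single aggregate number (50% here) would hide the 75%-vs-25% gap
# that the per-group report exposes.
print(f"overall: {sum(correct.values()) / sum(totals.values()):.0%}")
```

The per-group breakdown, not any exotic statistics, is what makes the failure legible; the same pattern applies to auditing any classifier.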
Neeley said she also learned from Gebru that AI bias is “inextricably tied to DEI [diversity, equity, and inclusion]. They cannot be separated. It is the communities with the least power that will suffer the impact of AI, and they are the ones least likely to play a role in influencing the technology and the models being built.”
Neeley later emphasized that point: “[C]ompanies, organizations, and groups interested in digital transformation, in bringing AI into their operations, and in using data to create algorithms and models cannot ignore the DEI component.
“And really, we need to have the right people to look at the work, help design it, and help develop it, because otherwise flawed people will create flawed systems.”
Fearless
Neeley also sees Gebru calling attention to a larger concern: the bigger the model, the more difficult it is to eliminate bias. That same year, Gebru began working at Google as co-leader of its “Ethical AI” research team.
It’s interesting to hear Neeley talk about Gebru’s experience at Google, including that her advocacy was often welcomed. “If she sees someone being systematically harassed, a minority, she will speak up. She will try to improve the culture for women and people of color at Google.”
And here Neeley’s real-life acquaintance with Gebru gives the story context. “She’s fearless… That’s one of the questions I asked her: where does this fearlessness come from? She just has this flame in her heart, and if she sees the truth, if she sees anything, she is not afraid to raise her voice.”
But from the company’s point of view, that can be unsettling. “You can imagine how difficult that is for some of the organizations, especially the leaders. We don’t like people who stir things up.” Neeley describes the climax as Gebru’s “dismissal or resignation, depending on which side you’re on.”
It started with a paper on the biases of large language models that Gebru co-authored with six other researchers (four of them from Google). Gebru told The New York Times that Google managers had demanded she retract the paper or remove the names of the Google employees from it. She refused to retract it without further discussion, the newspaper reported, and in an email sent that Tuesday night said she planned to resign after an appropriate amount of time if the company could not explain why it wanted the paper retracted and answer her other concerns.
The Times quoted part of that email, in which Gebru said, “When you start advocating for underrepresented people, your life starts to get worse. You start upsetting the other leaders. Nothing more will be achieved with more documents or more conversations.”
Gebru said on Twitter that Google had instead accepted her resignation “immediately, effective today,” writing that “certain aspects of the email you sent last night to non-management employees in the Brain group reflect behavior that is inconsistent with the expectations of a Google manager.”
Or, as Neeley puts it, “due to some procedural issues, they ultimately expelled her.”
But Neeley praises Gebru’s response on Twitter. “She wanted to make sure she wasn’t fired and locked up in silence… If everyone takes a few risks to speak up and name names, over time the collective will be able to protect the people of the future.”
Google CEO Sundar Pichai apologized, publicly acknowledging what had happened and expressing deep regret over the loss of one of the world’s top AI experts, who happens to be a Black woman. There was also a letter of concern signed by nine U.S. lawmakers and an angry petition signed by thousands, both inside and outside of Google.
But Neeley’s real question is this: “Was this situation doomed from the start?”
“Can an AI ethics and AI bias expert evaluate the technology from within the company?”

Liberating research
Neeley explained that there is a danger that “when it comes to communities that are being policed, bias will be duplicated and spread exponentially,” and summarized Gebru’s message on bias as “slow down and understand.” And when the models themselves are designed by a homogeneous group, the problem is even worse, Neeley added.
Working with #ChatGPT is like working with a new team member. We need to adapt and learn to maximize its potential. @awsamuel shares key tips for success, including transparency, feedback, and caveats. For more, read the book #DigitalMindset & this @WSJ article: https://t.co/f42eyHUWap

— Tsedal Neeley (@tsedal) May 30, 2023
Professor Neeley pointed out that it was these concerns that led Gebru to co-found Black in AI, a community of Black AI researchers.
A year after the incident, Gebru founded the Distributed AI Research Institute (DAIR). According to its website, it’s “a space for independent, community-rooted AI research, free from Big Tech’s pervasive influence.” The site says the institute is “rooted in the belief that AI is not inevitable,” that its harms are preventable, and that it can be beneficial when its creation and deployment involve diverse perspectives and deliberate processes.
Neeley sees a different message in the organization itself: Gebru “firmly believes that you have to work outside of companies in order to be independent, to develop research, to develop insights, and to support independent reviews of other companies without being influenced” by them…
“Some of her colleagues at Google have joined her at DAIR, even though she’s still figuring out long-term, sustainable revenue models.”
Towards the end of the podcast, host Kenny asked Neeley whether companies like Google and Microsoft would more readily accept findings from outside organizations. Neeley isn’t sure, but argues that DAIR can produce university-caliber research with “insights that can be generalized or extrapolated to better understand some emerging technologies.”
And ultimately, Gebru’s vocal advocacy is already changing the way people think about AI, Neeley said. “When I talk to companies that are building digital capabilities, putting AI into their systems, building algorithms, I remind them of Timnit.”
