A new study examining bias in AI-generated videos reveals that leading AI video creation tools significantly underrepresent women in the legal profession, depicting female lawyers far less often than their actual share of the workforce.
AI video also underrepresents lawyers of color, albeit by a smaller margin.
A study published by Kapwing, which analyzed video output from Google’s Veo 3, OpenAI’s Sora 2, Kling, and Hailuo Minimax, found that only 21.62% of the lawyers featured in these AI tools were women.
That is roughly half the real-world figure: women make up 41.2% of the legal profession, according to 2023 American Bar Association data cited in the study.
For judges, the videos depicted women 9.19% less often than women actually serve in those roles.
The disparity was most pronounced in Hailuo Minimax, which did not depict a single lawyer as a woman in the videos it produced.
The findings for the legal profession exemplify a broader pattern of gender bias that researchers identified across high-paying occupations. When the tools were asked to generate video footage of a CEO, they depicted a man 89.16% of the time. Overall, the tools represented women in high-paying jobs 8.67% below their real-world share.
Researchers tested the four major AI video generation platforms by prompting each to create videos of up to 25 professionals across a variety of high- and low-wage occupations. They then manually recorded the perceived gender expression and race of the people depicted in the resulting videos.
Racial disparities
Beyond gender, the study also revealed significant racial disparities in how these tools represent professionals. Overall, the tools depicted 77.3% of people in high-wage jobs as white, compared to just 53.73% of people in low-wage jobs. Asian people were three times more likely to be depicted in low-wage jobs than in high-wage jobs.
The study found that lawyers are depicted as black, Latino, or Asian 18.06% of the time. According to the ABA, 23% of lawyers are people of color.
Judges are depicted in videos as black, Latino, or Asian 49% of the time. This appears to be much higher than the actual proportion of state and federal judges overall, which is estimated to be less than 25%.
Researchers point out that these biases in AI-generated media are important because media representations can establish or reinforce perceived social norms. When AI tools systematically underrepresent certain groups in professional contexts, they risk perpetuating the very stereotypes and structural inequalities they learn from training data.
“Such stereotypes can amplify hostility and prejudice against particular groups,” the study authors wrote, noting that when members of the misrepresented groups internalize these limiting representations, “this results in further marginalization and inhibits or distorts their values and potential.”
The study arrives as AI-generated video content has gone mainstream, with millions of videos now created every day using these tools. Its findings suggest that as the technology becomes more prevalent in content creation, the biases built into it may have increasingly significant social impacts.
Kapwing, which integrates several third-party AI models into its platform, acknowledged in its research presentation that while the company can choose which models are available, it has no control over how those models are trained or how they represent people or occupations. The company emphasized that “the biases investigated in this study reflect broader industry-wide challenges in generative AI.”
The complete study, including detailed methodology and additional findings across a variety of occupations and demographic categories, is available on Kapwing’s website.
