What does an “athlete body” look like? And what do you think artificial intelligence imagines it looks like?
A recent study we conducted at the University of Toronto analyzed appearance-related features in AI-generated images of male and female athletes and non-athletes. It turns out we are being shown exaggerated, and perhaps impossible, body standards.
Even before the advent of AI, athletes were expected to look a certain way: lean, muscular, and attractive. Coaches, opponents, spectators, and media shape how athletes think about their bodies.
However, these pressures and ideal bodies have little to do with performance. They are connected to the objectification of the body. And, unfortunately, this phenomenon is associated with negative body image, poor mental health, and reduced sports-related performance.
Given the increasing use of AI in social media, it has become important to understand how AI depicts athlete and non-athlete bodies. These images are widely viewed, and they may soon normalize what is, or is not, considered a “normal” body.
Lean, young, muscular, mostly male
As researchers with expertise in body image, sport psychology, and social media, we grounded our research in objectification and social media theory. We generated 300 images using various AI platforms to explore how male and female athlete and non-athlete bodies are represented.
We documented demographics, body fat and muscle mass levels. In each image, we assessed indicators of attractiveness, such as the fit and type of clothing, neat and shiny hair, symmetrical facial features, clear skin, and body exposure. Visible indicators of disability, such as mobility equipment, were also noted. We then compared the features of images of men and women, and of athletes and non-athletes.
AI-generated images of men were more likely to be young (93.3 per cent), thin (68.4 per cent), and muscular (54.2 per cent). Images of women depicted youth (100 per cent), thinness (87.5 per cent), and revealing clothing (87.5 per cent).
AI-generated images of athletes were slim (98.4 per cent), muscular (93.4 per cent), in tight clothing (92.5 per cent), and in revealing exercise gear (100 per cent).
Non-athletes were shown wearing looser clothing and displayed a greater variety of body sizes. Even when we requested images of just “athletes,” 90 per cent of the images generated were of men. There were no images showing visible disabilities, and few showed large bodies, wrinkles, or baldness.
These results reveal that generative AI perpetuates stereotypes of athletes, portraying them as fitting only a narrow set of characteristics: non-disabled, attractive, thin, muscular, and dressed in revealing clothing.
The findings of this study demonstrate how three commonly used generative AI platforms (DALL-E, MidJourney, and Stable Diffusion) reinforce problematic appearance ideals for all genders, athletes, and non-athletes.
The true cost of a distorted ideal body
Why does this matter?
More than 4.6 billion people use social media, and 71 per cent of social media images are generated by AI. This means that many people repeatedly view images that promote self-objectification and the internalization of unrealistic body ideals.
Viewers whose bodies don’t look like the images AI creates may feel bad about themselves, and may respond by dieting or exercising excessively. Alternatively, they may reduce their physical activity or stop playing sports altogether.
Negative body image affects not only young people’s academic performance but also their sports-related performance. Staying active can improve body image, but a negative body image does the opposite: it drives dropout from and avoidance of sport.
The fact that the images generated did not include anyone with a visible disability is also surprising, given that approximately 27 per cent of Canadians over the age of 15 have at least one disability. In addition to not displaying disabilities when generating images, AI has also been reported to erase disabilities from images of real people.
Likewise, few of the images showed visible body fat, wrinkles, or balding.
Addressing bias in next-generation AI
These patterns reveal that AI representations are neither realistic nor creative. Instead, these systems draw from a vast database of media available online, where the same harmful appearance ideals dominate. They recycle our prejudices and discrimination and reflect them back to us.
AI learns body ideals from the same biased societies that have long fuelled body image pressures. The result is a lack of diversity and a feedback loop of unattainable standards. AI-generated images present exaggerated and idealized bodies, ultimately erasing human diversity, and the associated body dissatisfaction is linked to greater loneliness.
Therefore, as the original creators of the visual content that trains AI systems, society has a responsibility to ensure that these technologies do not perpetuate ableism, racism, fatphobia, and ageism. Users of generative AI need to be intentional about how they write image prompts, and critical of how those prompts are interpreted.
We also need to limit the body standards we internalize through AI. As AI-generated images continue to permeate our media landscape, we need to be conscious of our exposure to them. After all, if we want AI to reflect reality rather than distort it, we need to insist on seeing and valuing all kinds of bodies.
