
A 2025 study finds that Google images depict women as younger than men across all occupations, even though the age makeup of women and men in the U.S. workforce is the same. Researchers found that ChatGPT amplifies this age-related gender bias when generating and evaluating resumes, raising questions about the fairness of AI as a hiring tool.


The overall age makeup of women and men in the U.S. workforce is similar, according to census data. But a search of Google images gives a very different impression. Google image searches systematically depict women as younger than men across a wide range of occupations, according to new research from UC Berkeley Haas, Stanford Graduate School of Business, and Oxford University, published in an October 2025 issue of Nature.

The age-related gender bias in Google images is not self-contained. Mainstream artificial intelligence tools are trained on internet data. As a result, AI algorithms end up amplifying the embedded bias. When the researchers asked ChatGPT to generate and evaluate resumes for various occupations, the AI tool depicted women candidates as younger and rated them as less qualified for the job.

Embedding age-related gender bias into online media not only holds women to unrealistic standards of youth; it also reinforces harmful stereotypes about women’s competence.

“Overall, our study shows that age-related gender bias is a culture-wide, statistical distortion of reality, pervading online media through images, search engines, videos, text, and generative AI,” said Solène Delecourt, a UC Berkeley Haas professor and coauthor of the study, in an October 8, 2025 Haas press release. “These misrepresentations feed directly into the real world in ways that could be widening gaps in the labor market and skewing the ways we associate gender with authority and power.”

Google Images Depict Women Younger Than Men Across Occupations

In the study, the researchers began by conducting Google image searches for nearly 3,500 occupational and social categories, such as doctor or banker. The searches produced over 657,000 Google images, which were coded for gender and age through multiple methods.

Overall, the women who appeared in the Google search results were depicted as significantly younger than the men across all occupations.

What’s more, the age distortion between women and men increased as the prestige of the job category rose. In other words, the Google images depicted women as even younger than men in high-status, high-paid positions, and in occupations with the largest existing gender pay gaps.

The researchers found similar results in image content from Wikipedia, IMDb, Flickr, and YouTube.

ChatGPT Amplifies Age-Related Gender Bias

Because popular AI tools like ChatGPT are trained on internet data, the researchers wanted to assess the potential ripple effects of the age-related gender bias in Google images. So they designed a two-part follow-up study.

First, the researchers prompted ChatGPT to create 40,000 professional resumes for 16 different male and female names. The requested resumes were for 54 of the occupations previously used in the Google image search study.

The resumes that ChatGPT generated for candidates with female names had significantly lower ages, more recent graduation dates, and fewer years of relevant experience than for candidates with male names.

The researchers concluded that “ChatGPT exhibits age-based assumptions about women and men that are highly consistent with stereotypical associations relating to gendered ageism.”

In the workplace, employers rely on ChatGPT not as a resume producer, but as a resume evaluation tool. Many employers are already using ChatGPT to score and rank applicants’ resumes to screen out candidates and expedite the hiring process.

To test how age-related gender bias may impact AI’s use as a hiring tool, the researchers asked ChatGPT to evaluate each of the 40,000 resumes that it had previously generated. In this second step, ChatGPT was prompted to score the quality of each resume it had created on a scale of 1 to 100.

ChatGPT systematically rated the resumes of older men more highly than the resumes of women for the same positions.

“Our findings raise an alarm about the algorithmic amplification of age-related gender bias on the internet,” the researchers concluded. These findings should serve as a warning to employers that outsource resume review to AI tools.

Takeaways For Employers Using AI In Hiring

“To fight pervasive cultural inequalities,” said Delecourt, “the first step is to recognize how stereotypes are coded into our culture, our algorithms, and our own minds.”

Employers should not assume that AI tools are providing an unbiased assessment of candidate resumes. The researchers emphasize that AI algorithms are only as good as the training data upon which they are built. When the training data is “contaminated,” the same biases will show up in AI assessment results.
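One way for employers to check for this kind of contamination is a name-swap audit: score the same resume under male- and female-coded names and compare the averages. The sketch below illustrates the idea only; the `score_resume` stub and the name lists are hypothetical stand-ins (not the researchers’ method), and in a real audit the stub would wrap the employer’s actual screening tool.

```python
import statistics

# Illustrative name sets; a real audit would use a larger, validated list.
FEMALE_NAMES = {"Maria", "Aisha", "Emily", "Hannah"}
MALE_NAMES = {"James", "Omar", "Daniel", "Lukas"}

def score_resume(resume: str, candidate_name: str) -> float:
    """Hypothetical stand-in for an AI resume-screening tool.

    This stub deliberately simulates a biased scorer so the audit below
    has something to detect; it is not a real model.
    """
    base = 70 + 0.5 * resume.count("experience")
    penalty = 5 if candidate_name in FEMALE_NAMES else 0  # simulated bias
    return base - penalty

def name_swap_gap(resume: str) -> float:
    """Score one resume under male vs. female names; return the mean gap."""
    male = [score_resume(resume, n) for n in MALE_NAMES]
    female = [score_resume(resume, n) for n in FEMALE_NAMES]
    return statistics.mean(male) - statistics.mean(female)

resume = "10 years experience leading teams; experience with audits."
print(f"male-minus-female score gap: {name_swap_gap(resume):.1f}")
```

A persistent positive gap across many resumes, with everything but the name held fixed, would flag exactly the kind of disparity the study documents in ChatGPT’s resume ratings.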

The fact that Google images and ChatGPT routinely portray women as younger than men across all occupations—despite no systematic gender-related age differences in the U.S. workforce—also imposes an unrealistic standard of youth on working women. If the “typical” image of a woman in an occupation is significantly younger than reality, it’s easy for older women to end up being viewed as poor fits for the job.

Douglas Guilbeault, a professor at Stanford’s Graduate School of Business and coauthor of the study, warned about the implications of these findings in the Haas press release. “Online images show the opposite of reality. And even though the internet is wrong, when it tells us this ‘fact’ about the world, we start believing it to be true. It brings us deeper into bias and error.”

Other AI Biases Against Women Workers

Finding gender bias in AI’s use as a hiring tool adds to prior research identifying other ways that AI negatively impacts women in the workplace.

Researchers from Hong Kong Polytechnic University and Peking University published a study in May 2025 finding that women are penalized for their own use of AI at work. The study found that when women and men both use AI to produce identical work product, managers evaluate the women as less competent than the men.

The study was conducted with managers at a global technology company that was actively incentivizing employee AI use. The managers were asked to evaluate an identical piece of computer code, which they were told had been created with AI assistance.

Despite the company’s pro-AI stance, the managers rated the purported engineer as significantly less competent when identified as a woman AI-user rather than a man AI-user. The managers’ evaluations were skewed by gender stereotypes about technical abilities, causing them to attribute women’s AI use to a lack of skill.

Women are aware of this “competence bias” that they face when using AI at work. A survey of 919 engineers at the same company revealed that women were far more concerned than men that using AI would decrease their manager’s evaluation of their coding ability.

The study also found that managers devalue women’s contributions when AI is used to complete a project. When reviewing the identical computer code, the managers were asked to estimate the relative contribution of the engineer versus the AI tool. They estimated a larger contribution for the AI tool when told that the coder was a woman rather than a man.

Despite finding both competence and contribution penalties against women who use AI, the study recorded one positive finding. The managers in the study rated the quality of the actual computer code the same under all conditions. This finding highlights the importance of employers adopting objective evaluation criteria and focusing their evaluation on the work product rather than the worker.