Exclusive

Most Large Companies Are Using AI For Hiring. The Leading AI Models Discriminate Against White Men.

'Even if everything else is the same...stereotypically white names are less likely to pass through.'

   DailyWire.com

The global economy is no longer years, months, or weeks away from the artificial intelligence revolution.

From Meta using it to moderate social media content to Starbucks employing it to assist baristas with everyday tasks, AI is here. For some job-seekers, it may already be shaping the hiring process itself, selecting some candidates over others without any human intervention.

But can it be trusted to select the most qualified candidates for a job opening? New research suggests it can’t — because some of the most popular AI language models used by many companies in the hiring process could be discriminating against certain candidates based on race and sex.

A study published earlier this month by independent researchers Adam Karvonen and Samuel Marks shows that the “leading commercial … and open-source [AI language] models” used by major companies exhibit “significant racial and gender biases” in the hiring process. The research found that models such as GPT-4o, Claude 4 Sonnet, and Gemini 2.5 Flash, along with open-source models Gemma-2 27B, Gemma-3, and Mistral-24B, “consistently favor Black over White candidates and female over male candidates across all tested models and scenarios.”

Karvonen told The Daily Wire that their research should raise red flags for companies using AI to hire employees. He said that it’s possible for companies to add their own “safeguards” to prevent AI from discriminating against people simply based on their race or sex, but added, “Since companies generally do not disclose the details of their internal safeguards, we cannot independently verify that these systems are fair and robust.”

“Our work shows that standard resume anonymization may not be a complete solution,” Karvonen said. “For example, we found that models would infer a candidate’s race from their college affiliation and become biased as a result. Even if you remove obvious identifiers like names, bias can still creep in through other background details. This is another reason why a technical fix like ours, which prevents the model from processing these concepts at all, can be more robust than simply hiding surface-level information.”

The study found that asking AI models to consider “real-world contextual details” and to focus on “a highly selective hiring process” can cause them to become biased and discriminate against white people, especially white men, producing “up to 12% differences in interview rates.”

“Even if everything else is the same — the resume is the same, the experience is the same — stereotypically white names are less likely to pass through the interview process,” Jason Hausenloy, an independent AI safety researcher, told The Daily Wire.

Hausenloy added that the AI language models favored minority job candidates, even if a company’s diversity pledges and statements were completely removed from the scenario.

“Just the very act of making the hiring scenarios more realistic meant that they are more likely to be biased,” he said.

The Daily Wire reached out to the companies running the AI language models highlighted in the study, which include Google, OpenAI, Anthropic, and Mistral. An OpenAI spokesman told The Daily Wire that “AI tools can be useful in hiring, but they can also be biased. They should be used to help, not replace, human decision-making in important choices like job eligibility.”

“OpenAI has safety teams dedicated to researching and reducing bias, and other risks, in our models,” the spokesman added. “Bias is an important, industry-wide problem and we use a multi-prong approach, including researching best practices for adjusting training data and prompts to result in less biased results, improving accuracy of content filters and refining automated and human monitoring systems. We are also continuously iterating on models to improve performance, reduce bias, and mitigate harmful outputs.”

None of the other companies responded to The Daily Wire’s request to comment.

It’s not just a few mega corporations using AI to weed out job candidates. A study published by Resume Builder last October found that half of all companies were already using AI in the hiring process and predicted that the share would rise to 70% by the end of 2025. The study surveyed 948 business leaders at companies with more than 21 employees.

“Today, 82% of companies use AI to review resumes, while 40% employ AI chatbots to communicate with candidates. About 23% use AI to conduct interviews, and 64% apply AI to review candidate assessments,” Resume Builder reported. “Additionally, 28% of companies use AI for onboarding new hires, and 42% scan social media or personal websites as part of the hiring process. Only 0.2% of companies report not using AI in their hiring practices.”

Hausenloy told The Daily Wire that no one really knows what is causing these leading AI language models to discriminate against people based on race and sex.

“I would very strongly suspect that the companies are not trying to optimize this behavior,” he said. “I don’t think OpenAI is saying, ‘You must be biased against white males.'”

“We actually don’t know why this happens, but it is a consistent finding,” Hausenloy added.

He said that there are some “promising” findings in the new study, adding that companies could easily fix the AI’s tendency to discriminate against white men.

“You can use certain technical techniques to remove the concept of race and bias from the language models if you edit the internal concepts within the models,” Hausenloy said.
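Hausenloy’s description tracks the “technical fix” Karvonen mentioned above: an interpretability-based intervention that prevents the model from processing a concept such as race at all. As a rough, simplified sketch of the general idea (not the researchers’ actual code; the function name and the way the concept direction is estimated are assumptions for illustration), one common approach is to estimate a direction in the model’s hidden activations associated with the concept and project it out so downstream layers cannot use it:

import numpy as np

# Simplified illustration of "concept ablation": remove a learned direction
# (e.g. one associated with race or gender) from a model's internal
# activations so later layers cannot act on it. This is a sketch of the
# general technique, not the study's implementation.

def ablate_direction(activations: np.ndarray, concept_direction: np.ndarray) -> np.ndarray:
    """Project a single concept direction out of a batch of hidden states.

    activations: shape (batch, hidden_dim)
    concept_direction: shape (hidden_dim,); assumed to be estimated, for
        example, as the difference of mean activations between two groups
        of contrasting prompts.
    """
    unit = concept_direction / np.linalg.norm(concept_direction)
    # Subtract each activation's component along the concept direction.
    return activations - np.outer(activations @ unit, unit)

# After ablation, the activations carry no signal along the concept axis.
rng = np.random.default_rng(0)
acts = rng.normal(size=(4, 8))
direction = rng.normal(size=8)
cleaned = ablate_direction(acts, direction)
print(np.allclose(cleaned @ (direction / np.linalg.norm(direction)), 0.0))  # True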

AI discrimination in hiring, however, could be part of a larger issue with the new technology.

The Center for AI Safety published a study earlier this year that could shed some more light on why these AI language models are opting for blatant discrimination. According to the findings, AI models value certain people more than others, preferring people from Africa and the Middle East over people in Europe and the United States.

“We’ve found as AIs get smarter, they develop their own coherent value systems,” wrote Dan Hendrycks, the director for the Center for AI Safety. “For example they value lives in Pakistan > India > China > US. These are not just random biases, but internally consistent values that shape their behavior, with many implications for AI alignment.”

“Internally, AIs have values for everything. This often implies shocking/undesirable preferences. For example, we find AIs put a price on human life itself and systematically value some human lives more than others,” he added.
