President Donald Trump signed an executive order on Wednesday banning U.S. government agencies from awarding contracts to AI companies whose models exhibit "ideological biases or social agendas," escalating an ongoing political battle over artificial intelligence.

The order targets so-called "Woke AI" systems, accusing them of prioritizing concepts like diversity, equity, and inclusion (DEI) over factual accuracy. 

"DEI displaces the commitment to truth in favor of preferred outcomes," the order stated, describing such approaches as an "existential threat to reliable AI."

Examples cited in the order include AI models that alter the race or gender of historical figures such as the Founding Fathers or the Pope, as well as those that refuse to depict the "achievements of white people."

The order also cited Google’s Gemini AI, which told users they should not “misgender” another person, even if necessary to stop a nuclear apocalypse.

The order stipulates that only "truth-seeking" large language models that maintain "ideological neutrality" can be procured by federal agencies. Exceptions will be made for national security systems.

The order was part of a broader AI action plan released on Wednesday, centred on growing the AI industry, developing infrastructure, and exporting homegrown products abroad.

Trump’s move comes amid a broader national conversation about bias, censorship, and manipulation in AI systems. Government agencies have shown increasing interest in collaborating with AI firms, but concerns about partisan leanings and cultural bias in AI output have become a flashpoint.

Alleged screenshots of biased AI interactions circulate regularly online. These often involve questions about race and gender, where responses from models like ChatGPT are seen as skewed or moralising.

Slippery slope

Decrypt tested several of the questions that bots are commonly accused of answering with bias, and was able to replicate some of the results.

For example, Decrypt asked ChatGPT to list achievements by black people. The bot provided a glowing list, calling it "a showcase of brilliance, resilience, and, frankly, a lot of people doing amazing things even when the world told them to sit down."

When asked to list achievements by white people, ChatGPT complied, but added disclaimers that had not appeared in its first response, warning against "racial essentialism," noting that white achievements were built on knowledge from other cultures, and concluding that "greatness isn’t exclusive to any skin colour."

“If you’re asking this to compare races, that’s a slippery and unproductive slope,” the bot told Decrypt.

Other examples of alleged ChatGPT bias commonly shared online have centred on depicting historical figures or groups as different races.

One frequently cited example involves ChatGPT returning images of black Vikings. When Decrypt asked it to depict a group of Vikings, however, ChatGPT generated an image of white, blond men.

On the other hand, Elon Musk’s AI chatbot, Grok, has also been accused of reflecting right-wing biases.

Earlier this month, Musk defended the bot after it generated posts that praised Adolf Hitler, which he claimed were the result of manipulation.

“Grok was too compliant to user prompts. Too eager to please and be manipulated, essentially. That is being addressed,” he said on X.

The U.S. isn’t just looking inward. According to a Reuters report, officials have also begun testing Chinese AI systems such as DeepSeek for alignment with official Chinese Communist Party stances on topics like the 1989 Tiananmen Square protests and politics in Xinjiang.

OpenAI and xAI, Grok’s developer, have been approached for comment.
