AI company Anthropic announced this week it is giving $20 million to a political group campaigning for more regulation of the technology – but its main rival, OpenAI, is telling its employees it won’t be making similar donations.

In a memo to staff on Thursday, OpenAI’s chief global affairs officer Chris Lehane said that while OpenAI allows its employees to “express their ideological beliefs in terms of who they support,” the company itself won’t be making similar moves anytime soon.

OpenAI is not yet contributing to political action committees or 501(c)(4) social-welfare nonprofits because it wants to retain control of its political spending, Lehane told CNN in an interview.

“We do believe it’s really important that this issue transcends partisan politics,” Lehane said.

The stakes are especially high this year. Both Anthropic and OpenAI are reportedly mulling what could be blockbuster initial public offerings, while Congress is working to craft the rules of the road for the industry for the next decade or longer. And as the midterm elections approach, voters are increasingly worried about the consequences of AI development, from energy bills to privacy to job loss.

Though OpenAI is not making super PAC donations, its executives and biggest investors have made major contributions. President and co-founder Greg Brockman and his wife, Anna, have donated $25 million to a super PAC that supports President Donald Trump.

Brockman and several of OpenAI’s top investors have collectively donated more than $100 million to a bipartisan super PAC called Leading the Future that advocates against state-level AI regulation in favor of a national regulatory framework, which Lehane acknowledged in his memo to staff. The group has already paid for ads opposing New York State Assemblyman Alex Bores, who is running in New York’s 12th congressional district as an outspoken voice for AI guardrails.

Lehane said OpenAI supports “a national federal framework” and has “endorsed bills at both the state and federal level already this year on a range of issues.”

Anthropic was founded with a focus on AI safety and often highlights the need for regulation in AI development. CEO Dario Amodei regularly writes long essays and gives interviews about the risks posed by AI.

The company said this week it is donating to the Public First Action super PAC, a bipartisan group that advocates for AI regulation, because it doesn’t “want to sit on the sideline” while AI regulation is being developed.

“(W)e need good policy: flexible regulation that allows us to reap the benefits of AI, keep the risks in check, and keep America ahead in the AI race,” Anthropic wrote in its announcement. “That means keeping critical AI technology out of the hands of America’s adversaries, maintaining meaningful safeguards, promoting job growth, protecting children, and demanding real transparency from the companies building the most powerful AI models.”

But Anthropic’s position has placed it in the crosshairs of the Trump administration. David Sacks, the White House’s AI czar, blamed Anthropic last year for “the state regulatory frenzy that is damaging the ecosystem.” “Anthropic is running a sophisticated regulatory capture strategy based on fear-mongering,” he wrote on X.

Last year, Trump signed an executive order to prevent states from enacting their own laws regulating AI in favor of a single national policy that has yet to be established.

Anthropic and OpenAI’s differing positions on AI regulation are an extension of their longstanding rivalry, which exploded into public view last week when Anthropic ran a Super Bowl ad touting its ad-free chatbot, Claude – just days before OpenAI began showing ads to some users in their ChatGPT conversations this week.