As OpenAI marks its tenth birthday in December 2025, it can celebrate becoming one of the world’s leading companies, worth perhaps as much as US$1 trillion (£750 billion). But it started as a non-profit with a serious moral mission – and its story demonstrates the difficulty of combining morality with capitalism.
The firm recently became a “public benefit corporation”, meaning that – in addition to performing some sort of public good – it now has a duty to make money for its shareholders, such as Microsoft.
That’s quite a change from the original set-up.
Influenced by a movement known as “effective altruism”, a project which tries to find the most effective ways of helping others, OpenAI’s initial mission was to “ensure that artificial general intelligence […] benefits all of humanity” – including preventing rogue AI systems from enslaving or extinguishing the human race.
Being a non-profit was central to that mission. If pushing AI in dangerous directions was the best way to make money, a profit-seeking company would do it, but a non-profit wouldn’t. As CEO Sam Altman said in 2017: “We don’t ever want to be making decisions to benefit shareholders. The only people we want to be accountable to is humanity as a whole.”
So what changed?
Some argue that the company simply sold out – that Altman and his colleagues faced a choice between making a fortune or sticking to their principles, and took the money. (Many of OpenAI’s founders and early employees chose to leave the company instead.)
But there is another explanation. Perhaps OpenAI realised that to fulfil its moral mission, it needed to make money. After all, AI is a very expensive business, and OpenAI’s rivals – the likes of Google, Amazon and Meta – are vast corporations with deep pockets.
To have a chance of influencing AI development in a positive direction, OpenAI had to compete with them. To compete, it needed investment. And it’s hard to attract investment with no prospect of profit.
As Altman said of a previous adjustment towards profit-making: “We had tried and failed enough to raise the money as a non-profit. We didn’t see a path forward there. So we needed some of the benefits of capitalism.”
Capitalist competition
But along with the benefits of capitalism come constraints. What Karl Marx called the “coercive laws of competition” mean that in a competitive market, businesses have little choice but to put profit first, whatever their moral principles.
Indeed, if they choose not to do something profitable out of moral concerns, they know they’ll be replaced by a less scrupulous firm which will. This means not only that they fail as a business, but that they fail in their moral mission too.
The philosopher Iris Marion Young illustrated this paradox with the example of a sweatshop owner who claims that they would love to treat their workers better. But the cost of improved pay and conditions would make them less competitive, meaning they lose out to rivals who treat their workers even worse. So being kinder to their workers would not do any good.
Similarly, had OpenAI held back from releasing ChatGPT due to worries about energy usage or self-harm or misinformation, it would probably have lost market share to another company. This in turn would have made it harder to raise the investment it needed to fulfil its mission of shaping AI development for good.
So in effect, even when its moral mission was supposedly paramount (before it became a public benefit corporation), OpenAI was already acting like a for-profit firm. It needed to, to stay competitive.
The recent legal transition just makes this official. The fact that a non-profit board dedicated to the moral mission retains some control over the company in principle is unlikely to stop the drive to profit in practice. Marx’s coercive laws of competition squeeze morality out of business.
Marx and Milton
If Marx is capitalism’s most famous critic, perhaps its most famous cheerleader was the economist Milton Friedman.
But Friedman actually agreed with Marx that business and morals are difficult to mix. In 1970, he wrote that business executives have only one social responsibility: to make profit for shareholders.
Pursuing any other goal would be spending other people’s money on their own private principles. And in a competitive market, Friedman argued, businesspeople will find that customers and investors can quickly switch to other companies “less scrupulous in exercising their social responsibilities”.
All of this suggests that we cannot expect businesses to do as OpenAI originally promised, and put humanity before shareholder value. Even if a business tries, the coercive laws of competition will force it to seek profit.
Friedman and Marx would have further agreed that we need other types of institutions to look after humanity. Though Friedman was mostly sceptical about the state, the AI arms race is precisely the kind of case that even he recognised required government regulation.
For Marx, the solution is more radical: replacing the coercive laws of competition with a more co-operative economic system. And my own research suggests that safeguarding the future of humanity may indeed require some restraining of capitalism, to allow tech workers time to develop safe and ethical technologies together, free from the pressures of the market.