OpenAI is to make London its largest research hub outside the US as it battles with Google DeepMind to secure the best human talent in the field of artificial intelligence.
The developer of ChatGPT committed to significantly increasing the size of the site, which currently employs 30 researchers, but did not specify headcount or funding.
The company, whose European headquarters are in Dublin, cited a “unique concentration of world-class talent across machine learning and the sciences as well as its strong culture of cross-disciplinary collaboration” as reasons for its decision to expand its London research centre.
Liz Kendall, the technology secretary, said it was “a huge vote of confidence in the UK’s world-leading position at the cutting edge of AI research” which “also reaffirms the UK’s global leadership as the place to pursue AI innovation that is both safe and transformative”.
Sir Sadiq Khan, the mayor of London, said he was “delighted that OpenAI is anchoring its major new research hub here, as we help shape the capital for the next technological wave. London is home to world-class talent and renowned institutions and I am committed to ensuring our capital benefits from the huge potential of AI.”
OpenAI will set its sights on luring researchers from Google DeepMind, which employs 2,000 people in the UK.
Mark Chen, OpenAI’s chief research officer, said: “We definitely have hired some people from Google DeepMind in the past and I do think the draw of OpenAI comes down to its culture. We are famously a bottom-up lab. We let researchers come in, pursue their lines of research and turn those into … company-level bets.” Chen said Google’s culture “tends to be slightly more top-down”.

OpenAI would be offering pay packets that were “very competitive with what Google DeepMind is offering”, Chen said.
“AI talent is very valuable and we need to be competitive everywhere.”
The industry’s hiring war in the US has involved luring engineers with life-changing pay packets. Mark Zuckerberg reportedly offered researchers up to $1 billion to join Meta’s AI unit.
A mid-level AI research scientist at Google DeepMind receives a package worth about £330,000, made up of £115,000 in salary, £185,000 in equity and £28,000 in bonuses, according to the AI Paygrades website.
As a private company, OpenAI is able to offer staff equity that could increase significantly in value should the company succeed and go public.
It has also enabled current staff to sell their shares in the private market, making many of them wealthy.
Sam Altman, the chief executive of OpenAI, declared a “code red” for the company as its competitors Google and Anthropic caught up with its technology.
As AI's capabilities advance, concern about its impact has grown. Two essays warning of that impact, from Matt Shumer, a US entrepreneur, and Citrini Research, have recently gone viral, with the latter causing shares in some companies to fall.
Chen said “something is happening in AI that feels like a step change”, which he put down to the success of AI agents, software programs that can act autonomously. The software industry has begun to use them widely as coding agents have become more capable.
“It really does feel like we’ve reached a level where, you know, we can rely on them and use them in the real world workforce,” said Chen, who uses up to eight agents in his work.
“I think we’ve really gone from a researcher coming up with an idea, implementing and executing it, to one where it’s more of a handoff.
“The researcher scopes out the experiments and they let the model implement and run some of these experiments. Then the human comes back and interprets them and iterates so it actually allows the human to offload some mental capacity on implementation.”
Chen said he believed other industries would soon start to use agents this way, especially those that do “analyst-style work”, although he cautioned that agents “cannot ideate and come up with the experimental design itself”.
Chen admitted “the external perception of AI has shifted in a more negative direction. I think it’s still early in terms of the population’s adoption of AI. When it comes to agents, there’s this fear of what is undefined and amorphous. But I think there are many positive uses of agents and I think that’s the kind of thing we as an industry need to underscore.”