Rishi Sunak was British Prime Minister when the UK held the first AI Summit at Bletchley Park in 2023 and is one of the key guests at the third edition in New Delhi. He spoke to Sruthijith KK in an interview about what’s changed since the inaugural event, India’s positive attitude toward AI and how the “best antidote to anxiety, to fear, is action and preparation.” Edited excerpts:
During your time as Prime Minister, you positioned the UK as a global convener on AI safety. What has changed in terms of standards, enforcement and coordination?
When I conceived of the first AI summit, which we held at Bletchley Park a few years ago, it occurred to me when I was in Downing Street that there wasn’t a dedicated forum for leaders to come together and discuss what I believe is the most transformative technology of our lifetimes.
It’s not often that you get to be in office as a policymaker at the time a general-purpose technology comes along, one with the ability to transform every aspect of our societies, our economies, our countries. And for that reason, I felt it was important that we did have a forum to do this. But I wanted to do it differently.
You know, typically these G7s, G20s are just leaders. I thought, given the nature of how AI is being developed, it was important to bring those innovators, entrepreneurs and frontier labs into the same conversation, so that leaders could talk directly to them about what was happening and then formulate the best policy in response, and alongside them. So this forum, bringing both sides together, is I think unique.
And I’m really pleased that it’s gone from the UK to South Korea to Paris and now India, which feels like a very natural home for the summit, given India’s digital capabilities and ambitions. Now, to directly answer your question on how we’ve moved on the safety agenda, I think the biggest legacy coming out of Bletchley was the creation of AI security institutes, the first of which I created in the UK. And now obviously other countries have done the same.
And what they have is the technical capability to evaluate the risks that these models pose. Those risks were well articulated by the creators of artificial intelligence themselves, who came to see me in Downing Street when I was there.
That really made me realize that we had to focus on this and we had to develop that capability, which at that moment, I don’t think existed anywhere. And I thought it was right that we didn’t just say that it was acceptable for these companies to mark their own homework. Democratic institutions had to be able to do that themselves.
The other thing coming out of Bletchley was a commitment by the leading labs to provide their models to these institutes for pre-deployment testing. And actually, just in the past month, you’ve seen the releases put out by the UK institute and some of the frontier labs, where they tested the latest models from Anthropic and OpenAI. What they’ve tended to do, where they’ve identified risks or vulnerabilities, is work with the companies to remediate them.
That was the main legacy coming out of that, which I think has been sustained. I’m glad to see there are more AI security institutes. I would like it if they worked more with each other. Good international collaboration among these institutes, so they can share expertise and best practice and take a little bit more of a joined-up, coordinated approach, would be a very welcome development. I think conversations are happening.
But crucially, I think we need to always recognize that AI progress and AI safety go hand in hand, because unless there is public acceptance and confidence in the technology, I don’t think you will get the widespread adoption and deployment that we all want to see for the good that we know it can bring.
What’s been your impression of the summit in Delhi?
It’s been a fantastic few days so far. There’s enormous energy, I think it’s probably the word I would use to describe the first couple of days. I’ve been particularly struck by the number of young entrepreneurs that I’ve met who express such optimism about the future and their ability to help shape the future. And that I think is an enormous asset for India, to have this deep talent pool of optimistic, innovative people who are keen to play their part in making sure that technology benefits people in meaningful ways.
So that has been something that struck me. The sheer scale of it is, I think, another thing that you can’t fail to notice. This is the biggest AI summit since we first started at Bletchley, which now feels quite small in comparison.
But Prime Minister Modi’s ability, under his leadership, to bring together so many countries and so many leaders here in India, at the same time as showcasing the development of the technology here, has I think been very powerful. I feel the AI debate is shifting from technology to strategy, from what the tools can do to what countries choose to do with them.
For political leaders, AI has to move from being a specialist subject to being a central responsibility of government. That conversation has been happening at this summit, particularly… how this technology can benefit humanity.
Maybe the last takeaway is India’s place and role in all of this and the leadership role it’s playing.
Are we underestimating the near-term economic and social disruption that this technology can cause? And should governments be looking at AI regulation from an existential standpoint?
When you’re in government, this is always a tricky balance to get right. You’ve got a new technology… it’s clear the good that it can do, whether that’s driving economic growth, more effective delivery of public services, huge breakthroughs in science and research, but also, just really lifting the floor for humanity.
We know that is all available because of this technology. But it comes with risks, which we can’t be complacent about. Striking that balance between being supportive of the innovation and not strangling it at birth with overly burdensome regulation, that is the tricky judgment call that policymakers everywhere have to make.
I don’t think this is the moment for oppressive, top-down regulation, which will stifle the innovation at birth, not least because the technology is changing so quickly. I am a little sceptical of the ability of governments to keep pace with it and get that right. That said, I don’t think the answer is to do nothing. The approach that I took is, for now, the right one: for governments to be open and honest with their citizens about the issues, develop the capabilities to evaluate them and fund those properly, as we did with the Security Institute, and work with the model companies to make sure they are being transparent about what they are doing.
What that will do is make sure there’s an independent democratic check on what’s happening, which I think is important. As I said, I don’t think you can leave it to companies to mark their own homework. And it will put us in a place to take more action down the line, at the moment we think that becomes necessary.
We are seeing AI systems getting into defence architecture and cybersecurity. With all of this, can companies be left to mark their own homework, through voluntary self-regulation?
So far, it seems to be working in that the companies are being transparent about their responsible scaling practices, about how they address the issues of alignment.
They are red-teaming issues as they arise and being public about them. For example… when their tools have been used for cyberattacks. They have been proactive and constructive in working with, for example, the UK AI Security Institute, providing the models for testing before they’re deployed, so that any extra risks and vulnerabilities can be identified.
You’re right to highlight some of the national security (issues), another reason governments have to be involved. Because really, it’s governments that have that extra capability amongst their defence and security services to really think about those risks.
So far, it has worked. If we got to a point where the companies were not being transparent, or were not cooperating with AI security institutes, then clearly there would be cause for more concern. Many people at that point would feel it was appropriate to move to something more mandatory.
I’m encouraged that so far that hasn’t been necessary and that there’s constructive cooperation. But obviously, that’s something that we need to make sure we keep an eye on.
There appears to be a broad consensus that AI and the pace of progress it is making are bound to disrupt white-collar employment at some scale. How do you prepare people for what is to come without triggering a backlash?
I think AI is going to change the labour market. We have to be upfront and honest about that with our citizens. Some jobs are going to be lost. Some new jobs are going to be created. And I think many more jobs are going to be redesigned.
History tells us that societies flourish when these transitions are handled well. And my view is that the responsibility of government is not to stop this innovation, which it can’t do, but it’s to help prepare people to take on these new roles, new jobs, with confidence and security. And I think Prime Minister Modi said something similar and I’ve said something similar in the past, which is that the best antidote to anxiety, to fear, is action and preparation.
So what should political leaders be doing? They should be talking to their citizens with candour about this. But as I said, their responsibility is to help prepare people for the transition, to take on these new roles, give them the skills and the confidence they need to succeed in this new world.
What sectors specifically, say in Britain as well as India, do you think are likely to be impacted the most in terms of economic disruption?
I think that there’s good research from lots of different bodies at this point, whether it’s Goldman Sachs, McKinsey… Cognizant actually just put out an interesting report that looks at this.
In one sense, there’s nothing surprising in all of it… knowledge work, broadly defined… professional services are the areas where you can see the most exposure. But what I’d say is, at the moment at least, we’re not seeing a macroeconomic impact on employment as a result of AI. You’re seeing specific impact in quite narrow verticals, like software engineering, for example, for entry-level people.
There’s actually a Stanford report called Canaries in the Coal Mine, which is an apt name for it, saying, look, it’s not happening at a macro level yet, but there are some signs that this is going to be a bigger issue down the road. Today, I think it’s fair to say you’re less likely to lose your job to AI than to someone who is using AI. So what’s the right response to this? It’s making sure that your population is AI literate.
If you look at what employers are demanding across all different sectors, across knowledge work, this notion of AI literacy is incredibly important. In almost whatever job you are doing, figuring out how you can use AI tools to make yourself and your organization more productive, more efficient, able to grow faster, is what employers, policymakers and individuals should all be thinking about. That is something you can do, and governments can do to support you, which will give you more agency and confidence in this transition.
There is a big call among countries for sovereign cloud. In fact, Prime Minister Modi said during his keynote that people should have authority and control over their own data. Is the movement towards sovereign AI, data centres and cloud compatible with open global collaboration, which would serve to greatly accelerate this technology?
Just before I go into that, you mentioned earlier, this word ‘trust.’ I think it is interesting that you are seeing a range of attitudes towards AI around the world. In India, there’s, I think, almost overwhelming optimism and trust in AI, whereas in the West, I think the dominant attitude is one of anxiety. Closing that confidence gap is as much a policy issue as it is a technical issue.
It goes to a point about who should be doing what. For me, I think the public sector is where this battle for trust in AI can be won or lost. Because if citizens are getting better healthcare, more efficient interaction with the state, quicker services delivered, then I think this AI debate goes from being abstract to being real and can be resolved in a positive way.
It’s worth reflecting on that, because you’re talking about, what should governments be doing? What should policymakers be doing? I think addressing that trust deficit, but using the public sector as a way and the place to do that is an obvious avenue for them to prioritise. So on sovereignty, I don’t think sovereignty is just about building the largest frontier model in your country. I think it’s more about having the capabilities, the skills, and the trusted access that allow you to deploy this technology with confidence in your country and in your public services.
India has enormous strength in this regard. But for me, as someone who’s been in that seat and thinking about what leaders are grappling with, I think from a very practical perspective, sovereignty is about making sure that your institutions are not beholden to any one provider of services, and that you can deploy this technology in a way that is in accordance with your laws, your values, and your priorities.
There’s a concentration of power when it comes to AI in the US, and to an extent, China. How should countries like India navigate this era?
This is actually a good moment to double-click on how well India is positioned. India has recognised that technological leadership doesn’t depend only on inventing the technology; it depends on how effectively you deploy it. By focusing on mass adoption, supported by a very deep talent pool, strong digital public infrastructure and broad public support for the technology, I think India is very well positioned to be a leading nation in this era of AI.
That progress is reflected in the indices. Now, I’m a graduate of Stanford and a visiting fellow back there, and they do the most authoritative ranking of AI superpowers around the world. India has moved into third place, which is reflective of its very strong position. If you unpack that a little bit, this deep talent pool is a really important competitive advantage.
Just having the volume of highly skilled AI engineers here is a huge advantage for India. Indeed, I think Indians are the second largest contributors to AI projects on GitHub. The digital public infrastructure, I wouldn’t underestimate. Many countries look at that enviously. Between Aadhaar, UPI and now the Ayushman Bharat health accounts, to have interoperable, digitally verifiable rails where you can safely deploy applications that reach over a billion people is truly an extraordinary advantage. And India has been able to leapfrog many other countries by developing those and now exporting them to the world.
The last thing I’d say is this… I touched on it when I talked about the summit and my takeaways: the vibrancy of the AI ecosystem here. You saw the announcements from Sarvam AI yesterday, which I thought were very powerful. And it just demonstrates that in all these different ways, India is building up what I talked about: the capabilities, the skills and the trusted access to technology that will allow it to deploy AI in a way that benefits its citizens, does so responsibly, and does so in accordance with its values and priorities. That is India leveraging its strengths to deliver practical AI sovereignty for its citizens. And it’s actually a good model for countries everywhere to emulate.
Since your time in office, India has negotiated major trade deals with the US, Europe and of course the UK. What does it mean for the Indian economy going forward, to have such major trade linkages?
I think it’s very positive that India has concluded meaningful FTAs with both the UK and the EU. Obviously, I started those negotiations with Prime Minister Modi, and I’m pleased that they reached a conclusion. I think that’s positive for both countries. And indeed, India’s deal with the EU is positive for a couple of reasons. One, I believe closer economic cooperation like that is good for jobs, opportunity and economic growth in both economies. So that’s positive.
But I also think it’s important geopolitically for the signal it sends. One, in a world where there is more fragmentation and polarisation, it demonstrates that countries can come together and forge ties that are win-win for their citizens. But it also demonstrates India’s position in this new world order, one characterised less by American unipolarity and more by multipolarity, an environment where states like India, or indeed the Gulf states, which I put in the same category, are increasingly significant global players able to forge their own path and do things in the way that they want. And what you’ve seen is other countries all wanting to find ways to strengthen their relationships with India.
We’re also moving over time… trading goods is not the most important thing; it is one element. Increasingly, we should also focus on the exchange, the sharing, of technology and intelligence between nations. And that is something we should consider as we ask: how do we build on these trade deals in the future? How do we strengthen that cooperation and collaboration on technology and intelligence sharing, in the AI sense, not the other sense? I think it’s something we should all have in our minds for where these things can go in the future.
Five years from now, what would success in global AI governance look like?
The ultimate test is, are our citizens confident that we are deploying this technology responsibly around the world? If we hold ourselves to that standard of gaining people’s trust and confidence that we’re able to deploy this technology and are deploying this technology responsibly, then that’s the North Star. How exactly we do that, we will have to iterate given the technology itself is developing so quickly.
So rather than give you a hard and fast policy prescription, which I think is difficult to do in such a fast-moving environment, I think having a North Star of what we’re aiming to solve is probably the right way to think about it. And alongside that, probably the other thing I’d really like to see delivered is what Prime Minister Modi talked about today and what much of the focus of this summit has been. And that’s not just how AI can raise the ceiling, but how it can lift the floor for humanity. AI represents, I think, the most democratising and uplifting force that we are likely to experience.
Its ability to expand equitable access to healthcare and education in particular, I think, is going to be transformative for the world and particularly for people in poorer parts of the world. India has always wanted to play a leadership role in making that happen. That was very much the theme of the G20 that Modi-ji held. And again, you’ve seen that same emphasis here. I’d like to make sure that in the years to come, we deliver on that promise, because I think it is entirely within our grasp. It’s something that, as I said, would be tremendously positive for all of humanity if we can do it.