By Soumoshree Mukherjee
Editor’s note: This article is based on insights from a podcast series. The views expressed in the podcast reflect the speakers’ perspectives and do not necessarily represent those of this publication. Readers are encouraged to explore the full podcast for additional context.
In an era where artificial intelligence is transforming everything from healthcare to how we search the web, Nicholas Thompson, CEO of The Atlantic, offers a rare and urgent perspective, one that bridges journalism, tech ethics, and policy reform. In a recent podcast, Thompson unpacks the delicate balance between innovation and regulation, urging a future where AI amplifies human potential without trampling on ethical boundaries.
He said that the structure of the internet is shifting. “Once you change search, you’ve changed the whole architecture of the internet,” he said, pointing to the way AI is reshaping search functions, one of the internet’s core mechanics. This shift, while exciting, could upend the financial and editorial ecosystems of journalism, a sector already struggling to survive the digital age.
READ: ‘The future is now’: Congressman Gabe Amo on AI policy, equity and education (April 2, 2025)
At the heart of his concern lies a battle over fair compensation. Many AI models have been trained on massive datasets scraped from the web, including content from journalists, without compensation or credit. Thompson highlights recent legal skirmishes around this, noting the urgent need for clear guidelines that respect creators while still allowing transformative use. He argued that if we cannot protect content creators, we risk hollowing out the very industries that make democracy function.
But Thompson’s not just playing defence. He’s optimistic about the potential of AI to enhance journalism. Tools like PRAA, which track how publishers appear in AI-generated search results, could pave the way for new revenue models and smarter content creation.
Regulation, however, remains a thorny issue. Federal proposals that could block states from crafting their own AI laws for a decade are, in Thompson’s view, dangerously inflexible. He advocates instead for a dynamic, competitive ecosystem, one that avoids monopolistic dominance while encouraging local innovation.
Beyond journalism, AI’s influence ripples through society. Thompson highlights its dual nature, much like social media’s impact on teens’ self-esteem and communication. Platforms like Instagram have shown technology’s power to connect and harm, and AI could amplify these effects if left unchecked.
Responsible governance, he argues, is crucial to ensure AI serves humanity rather than destabilizes it. The EU AI Act, for instance, strikes a balance by fostering innovation while prioritizing safety, while open-source AI gains traction as a collaborative counterweight to corporate dominance.
This conversation isn’t just for policymakers. It touches on deeply human stakes: how AI affects our jobs, our identities, even our mental health. From the threat of automation in white-collar professions to the risks of AI-powered companionship replacing real human connection, Thompson urges listeners to keep ethics at the centre of the AI dialogue.
His reflections on AI in medicine are hard-hitting. He said, “… Doctors [being] able to move faster, be more effective, more efficient, I think would just be a huge plus.” He affirmed that AI can help diagnose, treat, and even comfort, but it cannot replace empathy. In journalism, the challenge is to augment human intelligence, not override it.
As AI races forward, Thompson’s insights remind us that AI’s future is not just about code; it’s about culture, conscience, and cooperation. And if we get the balance right, we might just shape a digital age that works for everyone.