The stakes are only ratcheting higher. Contests over who controls AI will intensify as the technology grows more powerful. Claude’s use in Venezuela and Iran indicates that advanced AI is now an integral tool for the most powerful military in the world. Meanwhile, an array of new pressures—state power, domestic politics, national-security imperatives—has been piled atop those already weighing on a for-profit company in a race to deploy a volatile new technology. Like biologists conjuring deadly pathogens in the lab in order to find a cure, Anthropic took it upon itself to chart AI’s hazards, pushing the frontiers of development rather than leaving it to others more willing to take reckless shortcuts. Yet even as it preaches caution, Anthropic is using Claude to accelerate the development of future, more powerful versions of itself. Staff believe the next few years will be a pivotal test, for the company and the world. “We should operate as if 2026 to 2030 is where all the most important things happen—models becoming faster, better, possibly faster than humans can handle them,” says Graham. As Dave Orr, Anthropic’s head of safeguards, puts it, “We’re driving down a cliff road. A mistake will kill you. Now we’re driving at 75 instead of 25.”