OpenAI chief executive Sam Altman has voiced his growing concerns over the misuse of artificial intelligence, warning that the real danger lies not in autonomous machines going rogue, but in people using AI tools to cause deliberate harm.
Speaking on a recent episode of Theo Von’s podcast, Altman addressed the long-debated question of AI risks. Rather than echoing dystopian fears of machines turning against humanity, he shifted the spotlight to human intent. “I worry more about people using AI to do bad things than the AI deciding to do bad things on its own,” Altman said.
His remarks mark a departure from the typical science-fiction narrative of killer robots and self-aware systems, instead highlighting a more immediate and realistic challenge: the potential for malicious actors to exploit advanced AI models.
“The risk is if someone really wants to cause harm and they have a very powerful tool to do it,” he noted, pointing to the ease with which powerful AI systems could be weaponised if left unchecked.
Altman acknowledged the difficulty of designing AI systems that remain safe and beneficial while in the hands of millions of users. “We’re trying to build guardrails as we go. That’s hard, but necessary,” he admitted, underlining the ongoing efforts at OpenAI to embed ethical guidelines and technical safeguards into its models.
His comments come at a time when OpenAI is facing increased scrutiny from policymakers and civil society, particularly as speculation mounts around the development of GPT-5. With generative AI becoming more accessible and influential in everyday life, questions around governance, accountability, and control are more pressing than ever.
Meanwhile, OpenAI has officially begun rolling out its new artificial intelligence agent, ChatGPT Agent, after a week-long delay. Originally announced on 18 July, the feature is now being made available to all ChatGPT Plus, Pro, and Team subscribers, according to a statement posted by the company on social media platform X.
The delayed rollout left many users puzzled, with some still reporting the feature's absence despite OpenAI's claims of a complete deployment. The company has not disclosed the cause of the delay, and questions raised in the post's comment section remain unanswered.