Mohsin says he has found the flaws only in Orchids, and not yet in other vibe-coding platforms such as Claude Code, Cursor, Windsurf and Lovable.
Nonetheless, experts say it should serve as a warning.
“The main security implications of vibe-coding are that without discipline, documentation, and review, such code often fails under attack,” says Kevin Curran, professor of cybersecurity at Ulster University.
AI tools that carry out complex tasks for us – known as agentic AI – are increasingly hitting the headlines.
One recent example is the viral Clawbot agent, also known as Moltbot or Open Claw.
The AI bot can run tasks on your own device, such as sending WhatsApp messages or managing your calendar, with little human input.
The free AI agent is estimated to have been downloaded by hundreds of thousands of people, and it has deep access to their computers – access that also brings many potential security risks and flaws.
Karolis Arbaciauskas, head of product at the cyber-security company NordPass, says people should be cautious.
“While it’s exciting and curious to see what an AI agent can do without any security guardrails, this level of access is also extremely insecure,” he says.
His advice is to run these tools on separate, dedicated machines and use disposable accounts for any experimentation.