The BBC has been shown a significant – and unfixed – cyber-security risk in a popular AI coding platform.
Orchids is a so-called “vibe-coding” tool, meaning people without technical skills can use it to build apps and games by typing a text prompt into a chatbot.
Such platforms have exploded in popularity in recent months, and are often heralded as an early example of how various professional services could be done quickly and cheaply by AI.
But experts say the ease with which Orchids can be hacked demonstrates the risks of allowing AI bots deep access to our computers in exchange for the convenience of allowing them to carry out tasks autonomously.
The BBC has repeatedly asked the company for comment but it has not replied.
‘You are hacked’
Orchids claims to have a million users, and says it is used by top companies including Google, Uber, and Amazon.
It is rated the best tool for some elements of vibe coding by App Bench and other analysts.
Its security flaws were demonstrated to the BBC by cyber-security researcher Etizaz Mohsin.
I downloaded the Orchids desktop app to my spare laptop, which I use for experiments, and started a vibe-coding project as a test.

Orchids is one of many AI agent platforms that writes code for users who have no experience [BBC]
I asked Orchids to help me build the code for a computer game based on the BBC News website.
Automatically, the AI assistant began generating code on the screen that, with no coding experience, I couldn’t understand.
Exploiting a cyber-security weakness (which we are not disclosing), Mohsin was able to gain access to my project, and view and edit any of the code.
He then added a small line of code somewhere in the thousands of lines of letters, numbers and symbols in my project, unbeknown to me.
It appears this allowed him to gain access to my computer – because shortly afterwards, a notepad file called “Joe is hacked” appeared on the desktop, and the wallpaper was changed to an image of an AI hacker.
The implications of the hack on the platform’s tens of thousands of projects were obvious.
A nefarious hacker could have easily installed a virus on to my machine without me having to do anything.
My or my company’s private or financial data could have been stolen.
An attacker could have accessed my internet history or even spied on me through the cameras and microphones.
Most hacks involve a victim downloading a piece of malicious software or being tricked into handing over login details.
This attack could be carried out without any involvement from the victim – a zero-click attack, as it’s known.
“The vibe-coding revolution has introduced a fundamental shift in how developers interact with their tools, and this shift has created an entirely new class of security vulnerability that didn’t exist before,” Mohsin told me.
“The whole proposition of having the AI handle things for you comes with big risks.”

Etizaz Mohsin speaking about cyber-security at the prestigious BlackHat conference [BBC]
Mohsin, 32, is from Pakistan and now lives in the UK. He has a track record of finding dangerous flaws in software that allow hackers to break in, including work on the infamous Pegasus spyware.
He said he found the flaw while experimenting with vibe-coding in December 2025 and has spent the weeks since trying to get Orchids to respond via email, LinkedIn, and Discord, sending around a dozen messages.
The Orchids team finally responded to him this week, saying they “possibly missed” his warnings as the team is “overwhelmed with inbound” messages.
The San Francisco-based company’s LinkedIn page says it was founded in 2025 and has fewer than 10 employees.
AI Agent risks
Mohsin says he has only found the flaws in Orchids, and not yet in other vibe-coding platforms such as Claude Code, Cursor, Windsurf and Lovable.
Nonetheless, experts say it should serve as a warning.
“The main security implications of vibe-coding are that without discipline, documentation, and review, such code often fails under attack,” says Kevin Curran, professor of cybersecurity at Ulster University.
AI tools that carry out complex tasks for us – known as agentic AI – are increasingly hitting the headlines.
One recent example is the viral Clawbot agent, also known as Moltbot or Open Claw.
The AI bot can run tasks on your own device, such as sending WhatsApp messages or managing your calendar, with little human input.
The free AI agent is estimated to have been downloaded by hundreds of thousands of people, and it has deep access to their computers – which also brings many potential security risks and flaws.
Karolis Arbaciauskas, head of product at the cyber-security company NordPass, says people should be cautious.
“While it’s exciting and curious to see what an AI agent can do without any security guardrails, this level of access is also extremely insecure,” he said.
His advice is to run these tools on separate, dedicated machines and use disposable accounts for any experimentation.