<h1>Sandboxing AI agents, 100x faster</h1>
<p>Last September we introduced <a href="https://blog.cloudflare.com/code-mode/" rel="nofollow noopener" target="_blank">Code Mode</a>: the idea that agents should perform tasks not by making tool calls, but by writing code that calls APIs. We've shown that simply converting an MCP server into a TypeScript API can <a href="https://www.youtube.com/watch?v=L2j3tYTtJwk" rel="nofollow noopener" target="_blank">cut token usage by 81%</a>. We've also demonstrated that Code Mode can operate behind an MCP server instead of in front of it, creating the new <a href="https://blog.cloudflare.com/code-mode-mcp/" rel="nofollow noopener" target="_blank">Cloudflare MCP server that exposes the entire Cloudflare API with just two tools and under 1,000 tokens</a>.</p>
<p>But if an agent (or an MCP server) is going to execute code generated on the fly by AI, that code needs to run somewhere, and that somewhere needs to be secure. You can't just eval() AI-generated code directly in your app: a malicious user could trivially prompt the AI to inject vulnerabilities.</p>
<p>You need a sandbox: a place to execute code that is isolated from your application and from the rest of the world, except for the specific capabilities the code is meant to access.</p>
<p>Sandboxing is a hot topic in the AI industry, and for this task most people reach for containers. Using a Linux-based container, you can start up any sort of code execution environment you want.
Cloudflare even offers <a href="https://developers.cloudflare.com/containers/" rel="nofollow noopener" target="_blank">our container runtime</a> and <a href="https://developers.cloudflare.com/sandbox/" rel="nofollow noopener" target="_blank">our Sandbox SDK</a> for this purpose.</p>
<p>But containers are expensive and slow to start, taking hundreds of milliseconds to boot and hundreds of megabytes of memory to run. You probably need to keep them warm to avoid delays, and you may be tempted to reuse existing containers for multiple tasks, compromising security.</p>
<p>If we want to support consumer-scale agents, where every end user has an agent (or many!) and every agent writes code, containers are not enough. We need something lighter.</p>
<p>And we have it.</p>
<h2 id="dynamic-worker-loader-a-lean-sandbox">Dynamic Worker Loader: a lean sandbox</h2>
<p>Tucked into our Code Mode post in September was the announcement of a new, experimental feature: the Dynamic Worker Loader API. It allows a Cloudflare Worker to instantiate a new Worker, in its own sandbox, with code specified at runtime, all on the fly.</p>
<p>Dynamic Worker Loader is now in open beta, available to all paid Workers users.</p>
<p><a href="https://developers.cloudflare.com/workers/runtime-apis/bindings/worker-loader/" rel="nofollow noopener" target="_blank">Read the docs for full details</a>, but here's what it looks like:</p>
<pre><code>// Have your LLM generate code like this.
let agentCode: string = `
  export default {
    async myAgent(param, env, ctx) {
      // ...
    }
  }
`;

// Get RPC stubs representing APIs the agent should be able
// to access. (This can be any Workers RPC API you define.)
let chatRoomRpcStub = ...;

// Load a worker to run the code, using the worker loader
// binding.
let worker = env.LOADER.load({
  // Specify the code.
  compatibilityDate: "2026-03-01",
  mainModule: "agent.js",
  modules: { "agent.js": agentCode },

  // Give the agent access to the chat room API.
  env: { CHAT_ROOM: chatRoomRpcStub },

  // Block internet access. (You can also intercept it.)
  globalOutbound: null,
});

// Call RPC methods exported by the agent code.
await worker.getEntrypoint().myAgent(param);
</code></pre>
<p>That's it.</p>
<p>Dynamic Workers use the same underlying sandboxing mechanism that the entire Cloudflare Workers platform has been built on since its launch eight years ago: isolates. An isolate is an instance of the V8 JavaScript execution engine, the same engine used by Google Chrome. Isolates are <a href="https://developers.cloudflare.com/workers/reference/how-workers-works/" rel="nofollow noopener" target="_blank">how Workers work</a>.</p>
<p>An isolate takes a few milliseconds to start and uses a few megabytes of memory. That's around 100x faster and 10x-100x more memory-efficient than a typical container.</p>
<p>That means that if you want to start a new isolate for every user request, on demand, to run one snippet of code, then throw it away, you can.</p>
<p>Many container-based sandbox providers impose limits on global concurrent sandboxes and on the rate of sandbox creation. Dynamic Worker Loader has no such limits.
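</p>
<p>As a sketch of what "one sandbox per task" looks like in practice, the helper below loads a fresh, fully isolated Worker for each snippet and discards it afterward. The WorkerLoader and LoadedWorker interfaces here are simplified stand-ins for the real env.LOADER binding types, so the shapes are illustrative rather than the actual Workers type declarations.</p>

```typescript
// Simplified stand-ins for the Worker Loader binding types (illustrative,
// not the real Workers type declarations).
interface LoadedWorker {
  getEntrypoint(): { [method: string]: (...args: unknown[]) => Promise<unknown> };
}

interface WorkerLoader {
  load(options: {
    compatibilityDate: string;
    mainModule: string;
    modules: Record<string, string>;
    env?: Record<string, unknown>;
    globalOutbound?: null;
  }): LoadedWorker;
}

// One sandbox per invocation: load, run, throw away. Nothing is reused,
// so one task's code can never observe another's state.
async function runSnippet(
  loader: WorkerLoader,
  agentCode: string,
  param: unknown
): Promise<unknown> {
  const worker = loader.load({
    compatibilityDate: "2026-03-01",
    mainModule: "agent.js",
    modules: { "agent.js": agentCode },
    globalOutbound: null, // no internet access
  });
  return worker.getEntrypoint().myAgent(param);
}
```

<p>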
It doesn't need to, because it is simply an API over the same technology that has powered our platform all along, and that has always allowed Workers to scale seamlessly to millions of requests per second.</p>
<p>Want to handle a million requests per second, where every single request loads a separate Dynamic Worker sandbox, all running concurrently? No problem!</p>
<p>One-off Dynamic Workers usually run on the same machine (the same thread, even) as the Worker that created them. There is no need to communicate around the world to find a warm sandbox: isolates are so lightweight that we can just run them wherever the request landed. Dynamic Workers are supported in every one of Cloudflare's hundreds of locations around the world.</p>
<p>The only catch, compared to containers, is that your agent needs to write JavaScript.</p>
<p>Technically, Workers (including dynamic ones) can use Python and WebAssembly, but for small snippets of code, like those written on demand by an agent, JavaScript will load and run much faster.</p>
<p>We humans tend to have strong preferences about programming languages, and while many love JavaScript, others prefer Python, Rust, or countless others.</p>
<p>But we aren't talking about humans here. We're talking about AI. AI will write any language you want it to. LLMs are experts in every major language, and their training data in JavaScript is immense.</p>
<p>JavaScript, by its nature on the web, is designed to be sandboxed. It is the correct language for the job.</p>
<h2 id="tools-defined-in-typescript">Tools defined in TypeScript</h2>
<p>If we want our agent to do anything useful, it needs to talk to external APIs. How do we tell it about the APIs it has access to?</p>
<p>MCP defines schemas for flat tool calls, but not programming APIs.
OpenAPI offers a way to express REST APIs, but it is verbose, both in the schema itself and in the code you'd have to write to call it.</p>
<p>For APIs exposed to JavaScript, there is a single, obvious answer: TypeScript.</p>
<p>Agents know TypeScript. TypeScript is designed to be concise. With very few tokens, you can give your agent a precise understanding of your API.</p>
<pre><code>// Interface to interact with a chat room.
interface ChatRoom {
  // Get the last `limit` messages of the chat log.
  getHistory(limit: number): Promise&lt;Message[]&gt;;

  // Subscribe to new messages. Dispose the returned object
  // to unsubscribe.
  subscribe(callback: (msg: Message) => void): Promise&lt;Disposable&gt;;

  // Post a message to chat.
  post(text: string): Promise&lt;void&gt;;
}

type Message = {
  author: string;
  time: Date;
  text: string;
}
</code></pre>
<p>Compare this with the equivalent OpenAPI spec (which is so long you have to scroll to see it all):</p>
<pre><code>openapi: 3.1.0
info:
  title: ChatRoom API
  description: >
    Interface to interact with a chat room.
  version: 1.0.0

paths:
  /messages:
    get:
      operationId: getHistory
      summary: Get recent chat history
      description: Returns the last `limit` messages from the chat log, newest first.
      parameters:
        - name: limit
          in: query
          required: true
          schema:
            type: integer
            minimum: 1
      responses:
        "200":
          description: A list of messages.
          content:
            application/json:
              schema:
                type: array
                items:
                  $ref: "#/components/schemas/Message"

    post:
      operationId: postMessage
      summary: Post a message to the chat room
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              required:
                - text
              properties:
                text:
                  type: string
      responses:
        "204":
          description: Message posted successfully.

  /messages/stream:
    get:
      operationId: subscribeMessages
      summary: Subscribe to new messages via SSE
      description: >
        Opens a Server-Sent Events stream. Each event carries a JSON-encoded
        Message object. The client unsubscribes by closing the connection.
      responses:
        "200":
          description: An SSE stream of new messages.
          content:
            text/event-stream:
              schema:
                description: >
                  Each SSE `data` field contains a JSON-encoded Message object.
                $ref: "#/components/schemas/Message"

components:
  schemas:
    Message:
      type: object
      required:
        - author
        - time
        - text
      properties:
        author:
          type: string
        time:
          type: string
          format: date-time
        text:
          type: string
</code></pre>
<p>We think the TypeScript API is better. It's fewer tokens and much easier to understand (for both agents and humans).
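</p>
<p>To make the difference concrete, here is a back-of-the-envelope comparison using the common "roughly four characters per token" heuristic (a real tokenizer will give different absolute numbers, but the same ordering). Both snippets are trimmed down to just the post operation:</p>

```typescript
// Rough token estimate: ~4 characters per token (a heuristic, not a real
// tokenizer). Both snippets describe only the "post a message" operation.
const tsSpec = `
interface ChatRoom {
  // Post a message to chat.
  post(text: string): Promise<void>;
}`;

const openApiSpec = `
paths:
  /messages:
    post:
      operationId: postMessage
      summary: Post a message to the chat room
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              required: [text]
              properties:
                text: { type: string }
      responses:
        "204":
          description: Message posted successfully.`;

const estimateTokens = (s: string) => Math.ceil(s.length / 4);

console.log("TypeScript:", estimateTokens(tsSpec), "tokens (approx.)");
console.log("OpenAPI:   ", estimateTokens(openApiSpec), "tokens (approx.)");
```

<p>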
</p>
<p>Dynamic Worker Loader makes it easy to implement a TypeScript API like this in your own Worker and then pass it to the Dynamic Worker, either as a method parameter or in the env object. The Workers Runtime automatically sets up a <a href="https://blog.cloudflare.com/capnweb-javascript-rpc-library/" rel="nofollow noopener" target="_blank">Cap'n Web RPC</a> bridge between the sandbox and your harness code, so the agent can invoke your API across the security boundary without ever realizing it isn't using a local library.</p>
<p>That means your agent can write code like this:</p>
<pre><code>// Thinking: The user asked me to summarize recent chat messages from Alice.
// I will filter the recent message history in code so that I only have to
// read the relevant messages.
let history = await env.CHAT_ROOM.getHistory(1000);
return history.filter(msg => msg.author == "alice");
</code></pre>
<h2 id="http-filtering-and-credential-injection">HTTP filtering and credential injection</h2>
<p>If you prefer to give your agents HTTP APIs, that's fully supported. Using the globalOutbound option of the worker loader API, you can register a callback to be invoked on every HTTP request, in which you can inspect the request, rewrite it, inject auth keys, respond to it directly, block it, or anything else you might like.</p>
<p>For example, you can use this to implement credential injection (token injection): when the agent makes an HTTP request to a service that requires authorization, you add credentials to the request on the way out.
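</p>
<p>A minimal sketch of such an outbound handler, written as a plain fetch-shaped function (the host name and token here are hypothetical, and the real globalOutbound callback is a Fetcher binding, but it receives the sandbox's outgoing Request in the same way):</p>

```typescript
// A fetch-shaped outbound filter: inject credentials for one known host,
// block everything else. `forward` is whatever fetcher actually sends the
// request (injected as a parameter so the logic is testable).
type Fetcher = (req: Request) => Promise<Response>;

function withInjectedAuth(apiToken: string, forward: Fetcher): Fetcher {
  return async (req: Request) => {
    const url = new URL(req.url);
    // Only the allow-listed API host gets the secret; the sandbox never sees it.
    if (url.hostname !== "api.example.com") {
      return new Response("outbound request blocked", { status: 403 });
    }
    const headers = new Headers(req.headers);
    headers.set("Authorization", `Bearer ${apiToken}`);
    return forward(new Request(req, { headers }));
  };
}
```

<p>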
This way, the agent itself never knows the secret credentials, and therefore cannot leak them.</p>
<p>A plain HTTP interface may be desirable when an agent is talking to a well-known API that is in its training set, or when you want your agent to use a library that is built on a REST API (the library can run inside the agent's sandbox).</p>
<p>That said, in the absence of a compatibility requirement, TypeScript RPC interfaces are better than HTTP:</p>
<ul>
<li>As shown above, a TypeScript interface requires far fewer tokens to describe than an HTTP interface.</li>
<li>The agent can write code to call TypeScript interfaces using far fewer tokens than the equivalent HTTP.</li>
<li>With TypeScript interfaces, since you are defining your own wrapper interface anyway, it is easier to narrow the interface to expose exactly the capabilities you want to provide to your agent, for both simplicity and security. With HTTP, you are more likely filtering requests made against some existing API. This is hard, because your proxy must fully interpret the meaning of every API call to decide whether to allow it, and HTTP requests are complicated, with many headers and other parameters that could all be meaningful. It ends up being easier to write a TypeScript wrapper that implements only the functions you want to allow.</li>
</ul>
<p>Hardening an isolate-based sandbox is tricky, as it presents a more complicated attack surface than a hardware virtual machine. All sandboxing mechanisms have bugs, but security bugs in V8 are more common than security bugs in typical hypervisors. When using isolates to sandbox possibly-malicious code, it's important to have additional layers of defense in depth. Google Chrome, for example, implemented strict process isolation for this reason, but that is not the only possible solution.</p>
<p>We have nearly a decade of experience securing our isolate-based platform.
Our systems automatically deploy V8 security patches to production within hours, faster than Chrome itself. Our <a href="https://blog.cloudflare.com/mitigating-spectre-and-other-security-threats-the-cloudflare-workers-security-model/" rel="nofollow noopener" target="_blank">security architecture</a> features a custom second-layer sandbox with dynamic cordoning of tenants based on risk assessments. <a href="https://blog.cloudflare.com/safe-in-the-sandbox-security-hardening-for-cloudflare-workers/" rel="nofollow noopener" target="_blank">We've extended the V8 sandbox itself</a> to leverage hardware features like MPK. We've teamed up with (and hired) leading researchers to develop <a href="https://blog.cloudflare.com/spectre-research-with-tu-graz/" rel="nofollow noopener" target="_blank">novel defenses against Spectre</a>. We also have systems that scan code for malicious patterns and automatically block them or apply additional layers of sandboxing. And much more.</p>
<p>When you use Dynamic Workers on Cloudflare, you get all of this automatically.</p>
<p>We've built a number of libraries that you might find useful when working with Dynamic Workers:</p>
<p><a href="https://www.npmjs.com/package/@cloudflare/codemode" rel="nofollow noopener" target="_blank">@cloudflare/codemode</a> simplifies running model-generated code against AI tools using Dynamic Workers.
At its core is DynamicWorkerExecutor(), which constructs a purpose-built sandbox with code normalisation to handle common formatting errors, and direct access to a globalOutbound fetcher for controlling fetch() behaviour inside the sandbox: set it to null for full isolation, or pass a Fetcher binding to route, intercept, or enrich outbound requests from the sandbox.</p>
<pre><code>const executor = new DynamicWorkerExecutor({
  loader: env.LOADER,
  globalOutbound: null, // fully isolated
});

const codemode = createCodeTool({
  tools: myTools,
  executor,
});

return generateText({
  model,
  messages,
  tools: { codemode },
});
</code></pre>
<p>The Code Mode SDK also provides two server-side utility functions. codeMcpServer({ server, executor }) wraps an existing MCP Server, replacing its tool surface with a single code() tool. openApiMcpServer({ spec, executor, request }) goes further: given an OpenAPI spec and an executor, it builds a complete MCP Server with search() and execute() tools, as used by the Cloudflare MCP Server, and better suited to larger APIs.</p>
<p>In both cases, the code generated by the model runs inside Dynamic Workers, with calls to external services made over RPC bindings passed to the executor.</p>
<p><a href="https://www.npmjs.com/package/@cloudflare/codemode" rel="nofollow noopener" target="_blank">Learn more about the library and how to use it.</a></p>
<p>Dynamic Workers expect pre-bundled modules.
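</p>
<p>For a dependency-free snippet, you can assemble that module map by hand; the object below is the shape the Worker Loader consumes, with mainModule naming the entry in modules (the module code itself is illustrative). Anything that imports from npm needs real bundling first, which is where the bundler comes in:</p>

```typescript
// A hand-built module map for a dependency-free Worker: `mainModule` names
// the entry point, and `modules` maps file names to source text. (The
// module code here is illustrative.)
const workerSource = {
  compatibilityDate: "2026-01-01",
  mainModule: "main.js",
  modules: {
    "main.js": `
      import { greet } from "./util.js";
      export default { greet };
    `,
    "util.js": `export const greet = (name) => "Hello, " + name + "!";`,
  },
};

// In a Worker, `workerSource` could be passed straight to env.LOADER.load().
console.log(Object.keys(workerSource.modules));
```

<p>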
<a href="https://www.npmjs.com/package/@cloudflare/worker-bundler" rel="nofollow noopener" target="_blank">@cloudflare/worker-bundler</a> handles that for you: give it source files and a package.json, and it resolves npm dependencies from the registry, bundles everything with esbuild, and returns the module map the Worker Loader expects.</p>
<pre><code>import { createWorker } from "@cloudflare/worker-bundler";

const worker = env.LOADER.get("my-worker", async () => {
  const { mainModule, modules } = await createWorker({
    files: {
      "src/index.ts": `
        import { Hono } from 'hono';
        import { cors } from 'hono/cors';

        const app = new Hono();
        app.use('*', cors());
        app.get('/', (c) => c.text('Hello from Hono!'));
        app.get('/json', (c) => c.json({ message: 'It works!' }));

        export default app;
      `,
      "package.json": JSON.stringify({
        dependencies: { hono: "^4.0.0" }
      })
    }
  });

  return { mainModule, modules, compatibilityDate: "2026-01-01" };
});

await worker.getEntrypoint().fetch(request);
</code></pre>
<p>It also supports full-stack apps via createApp: bundle a server Worker, client-side JavaScript, and static assets together, with built-in asset serving that handles content types, ETags, and SPA routing.</p>
<p><a href="https://www.npmjs.com/package/@cloudflare/worker-bundler" rel="nofollow noopener" target="_blank">Learn more about the library and how to use it.</a></p>
<p><a href="https://www.npmjs.com/package/@cloudflare/shell" rel="nofollow noopener" target="_blank">@cloudflare/shell</a> gives your agent a
virtual filesystem inside a Dynamic Worker. Agent code calls typed methods on a state object (read, write, search, replace, diff, glob, JSON query/update, archive) with structured inputs and outputs instead of string parsing.</p>
<p>Storage is backed by a durable Workspace (SQLite + R2), so files persist across executions. Coarse operations like searchFiles, replaceInFiles, and planEdits minimize RPC round-trips: the agent issues one call instead of looping over individual files. Batch writes are transactional by default; if any write fails, earlier writes roll back automatically.</p>
<pre><code>import { Workspace } from "@cloudflare/shell";
import { stateTools } from "@cloudflare/shell/workers";
import { DynamicWorkerExecutor, resolveProvider } from "@cloudflare/codemode";

const workspace = new Workspace({
  sql: this.ctx.storage.sql, // works with any DO's SqlStorage, D1, or custom SQL backend
  r2: this.env.MY_BUCKET,    // large files spill to R2 automatically
  name: () => this.name      // lazy: resolved when needed, not at construction
});

// Code runs in an isolated Worker sandbox with no network access
const executor = new DynamicWorkerExecutor({ loader: env.LOADER });

// The LLM writes this code; `state.*` calls dispatch back to the host via RPC
const result = await executor.execute(
  `async () => {
    // Search across all TypeScript files for a pattern
    const hits = await state.searchFiles("src/**/*.ts", "answer");
    // Plan multiple edits as a single transaction
    const plan = await state.planEdits([
      { kind: "replace", path: "/src/app.ts",
        search: "42", replacement: "43" },
      { kind: "writeJson", path: "/src/config.json",
        value: { version: 2 } }
    ]);
    // Apply atomically; rolls back on failure
    return await state.applyEditPlan(plan);
  }`,
  [resolveProvider(stateTools(workspace))]
);
</code></pre>
<p>The package also ships prebuilt TypeScript type declarations and a system prompt template, so you can drop the full state API into your LLM context in a handful of tokens.</p>
<p><a href="https://www.npmjs.com/package/@cloudflare/shell" rel="nofollow noopener" target="_blank">Learn more about the library and how to use it.</a></p>
<p>Developers want their agents to write and execute code against tool APIs, rather than making sequential tool calls one at a time. With Dynamic Workers, the LLM generates a single TypeScript function that chains multiple API calls together, runs it in a Dynamic Worker, and returns the final result to the agent. As a result, only the output, and not every intermediate step, ends up in the context window. This cuts both latency and token usage, and produces better results, especially when the tool surface is large.</p>
<p>Our own <a href="https://github.com/cloudflare/mcp-server-cloudflare" rel="nofollow noopener" target="_blank">Cloudflare MCP server</a> is built exactly this way: it exposes the entire Cloudflare API through just two tools, search and execute, in under 1,000 tokens, because the agent writes code against a typed API instead of navigating hundreds of individual tool definitions.</p>
<h2 id="building-custom-automations">Building custom automations</h2>
<p>Developers are using Dynamic Workers to let agents build custom automations on the fly.
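</p>
<p>Concretely, the chained-call pattern might look like the single function below, reusing the ChatRoom shape from earlier. The in-memory chat room and its messages are stand-ins for the real RPC stub, and the "summary" is a placeholder computation; the point is that only the final return value reaches the agent's context:</p>

```typescript
// Stand-in for the ChatRoom RPC stub from earlier (in-memory, with
// hypothetical sample data).
type Message = { author: string; time: Date; text: string };

const chatRoom = {
  history: [
    { author: "alice", time: new Date(), text: "ship it" },
    { author: "bob", time: new Date(), text: "lgtm" },
    { author: "alice", time: new Date(), text: "done" },
  ] as Message[],
  async getHistory(limit: number): Promise<Message[]> {
    return this.history.slice(-limit);
  },
  async post(text: string): Promise<void> {
    this.history.push({ author: "agent", time: new Date(), text });
  },
};

// The single function the model writes: fetch, filter, summarize, post.
// Intermediate data (the full history) never enters the context window.
async function summarizeAlice(env: { CHAT_ROOM: typeof chatRoom }): Promise<string> {
  const history = await env.CHAT_ROOM.getHistory(1000);
  const fromAlice = history.filter((m) => m.author === "alice");
  const summary = `alice sent ${fromAlice.length} messages`;
  await env.CHAT_ROOM.post(summary);
  return summary; // only this reaches the agent
}
```

<p>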
<a href="https://www.zite.com/" rel="nofollow noopener" target="_blank">Zite</a>, for example, is building an app platform where users interact through a chat interface: the LLM writes TypeScript behind the scenes to build CRUD apps, connect to services like Stripe, Airtable, and Google Calendar, and run backend logic, all without the user ever seeing a line of code. Every automation runs in its own Dynamic Worker, with access to only the specific services and libraries that the endpoint needs.</p>
<blockquote>
<p>"To enable server-side code for Zite's LLM-generated apps, we needed an execution layer that was instant, isolated, and secure. Cloudflare's Dynamic Workers hit the mark on all three, and out-performed all of the other platforms we benchmarked for speed and library support. The NodeJS compatible runtime supported all of Zite's workflows, allowing hundreds of third party integrations, without sacrificing on startup time. Zite now services millions of execution requests daily thanks to Dynamic Workers."</p>
<p><cite>Antony Toron, CTO and Co-Founder, Zite</cite></p>
</blockquote>
<h2 id="running-ai-generated-applications">Running AI-generated applications</h2>
<p>Developers are building platforms that generate full applications from AI, either for their customers or for internal teams building prototypes. With Dynamic Workers, each app can be spun up on demand, then put back into cold storage until it's invoked again. Fast startup times make it easy to preview changes during active development.
Platforms can also block or intercept any network requests the generated code makes, keeping AI-generated apps safe to run.</p>
<p>Dynamically-loaded Workers are priced at $0.002 per unique Worker loaded per day (as of this post's publication), in addition to the usual CPU time and invocation pricing of regular Workers.</p>
<p>For AI-generated "code mode" use cases, where every Worker is a unique one-off, this means the price is $0.002 per Worker loaded (plus CPU and invocations). This cost is typically negligible compared to the inference cost of generating the code.</p>
<p>During the beta period, the $0.002 charge is waived. As pricing is subject to change, please check the Dynamic Workers <a href="https://developers.cloudflare.com/dynamic-workers/pricing/" rel="nofollow noopener" target="_blank">pricing page</a> for the most current information.</p>
<p>If you're on the Workers Paid plan, you can start using <a href="https://developers.cloudflare.com/dynamic-workers/" rel="nofollow noopener" target="_blank">Dynamic Workers</a> today.</p>
<p><a href="https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/agents/tree/main/examples/dynamic-workers" rel="nofollow noopener" target="_blank"><img decoding="async" src="https://deploy.workers.cloudflare.com/button" alt="Deploy to Cloudflare"/></a></p>
<p>Use this "hello world" <a href="https://github.com/cloudflare/agents/tree/main/examples/dynamic-workers-starter" rel="nofollow noopener" target="_blank">starter</a> to deploy a Worker that can load and execute Dynamic Workers.</p>
<h2 id="dynamic-workers-playground">Dynamic Workers Playground</h2>
<p><a href="https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/agents/tree/main/examples/dynamic-workers-playground" rel="nofollow noopener" target="_blank"><img decoding="async" src="https://deploy.workers.cloudflare.com/button" alt="Deploy to Cloudflare"/></a></p>
<p>You can also deploy the <a href="https://github.com/cloudflare/agents/tree/main/examples/dynamic-workers-playground" rel="nofollow noopener" target="_blank">Dynamic Workers Playground</a>, where you can write or import code, bundle it at runtime with @cloudflare/worker-bundler, execute it through a Dynamic Worker, and see real-time responses and execution logs.</p>
<p>Dynamic Workers are fast, scalable, and lightweight. <a href="https://discord.com/channels/595317990191398933/1460655307255578695" rel="nofollow noopener" target="_blank">Find us on Discord</a> if you have any questions. We'd love to see what you build!</p>