<h1>Partnering with Black Forest Labs to bring FLUX.2 [dev] to Workers AI</h1>
<p><em>Published 2025-11-25</em></p>
<p>In recent months, we've seen a leap forward for closed-source image generation models with the rise of <a href="https://gemini.google/overview/image-generation/" rel="nofollow noopener" target="_blank">Google's Nano Banana</a> and <a href="https://openai.com/index/image-generation-api/" rel="nofollow noopener" target="_blank">OpenAI's image generation models</a>. Today, we're happy to share that a new open-weight contender has arrived: Black Forest Labs' FLUX.2 [dev] is now available to run on Cloudflare's inference platform, Workers AI. You can read more about the model in detail in <a href="https://bfl.ai/blog/flux-2" rel="nofollow noopener" target="_blank">BFL's launch blog post</a>.</p>
<p>We have been huge fans of Black Forest Labs' FLUX image models since their earliest versions. Our hosted version of FLUX.1 [schnell] is one of the most popular models in our catalog, known for its photorealistic outputs and high-fidelity generations. When the time came to host the licensed version of their new model, we jumped at the opportunity.</p>
<p>FLUX.2 takes the best features of FLUX.1 and amps them up, generating even more realistic, grounded images with added customization support like JSON prompting.</p>
<p>Our Workers AI hosted version of FLUX.2 has some specific patterns: it uses multipart form data to support input images (up to four 512×512 images) and can produce output images up to 4 megapixels. The multipart form data format allows users to send multiple image inputs alongside the typical model parameters. Check out our <a href="https://developers.cloudflare.com/changelog/2025-11-25-flux-2-dev-workers-ai/" rel="nofollow noopener" target="_blank">developer docs changelog announcement</a> to learn how to use the FLUX.2 model.</p>
<h2 id="what-makes-flux-2-special-physical-world-grounding-digital-world-assets-and-multi-language-support">What makes FLUX.2 special? Physical-world grounding, digital-world assets, and multi-language support</h2>
<p>FLUX.2 has a more robust understanding of the physical world, allowing you to turn abstract concepts into photorealistic reality. It excels at generating realistic image details and consistently delivers accurate hands, faces, fabrics, logos, and small objects that other models often miss. Its knowledge of the physical world also yields life-like lighting, angles, and depth perception.</p>
<p>Figure 1. Image generated with FLUX.2 featuring accurate lighting, shadows, reflections, and depth perception at a café in Paris.</p>
<p>This high-fidelity output makes it ideal for applications requiring superior image quality, such as creative photography, e-commerce product shots, marketing visuals, and interior design.</p>
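<p>The upload limits described above (up to four 512×512 reference images, outputs up to 4 megapixels) can be checked client-side before a request ever hits the API. A minimal sketch; the function name and the reading of "4 megapixels" as 4,000,000 pixels are our assumptions, not part of the Workers AI API:</p>

```javascript
// Client-side pre-flight check for the limits described above:
// up to 4 reference images, each at most 512x512, and output sizes
// up to 4 megapixels (interpreted here as 4,000,000 pixels).
const MAX_REFERENCE_IMAGES = 4;
const MAX_REFERENCE_DIM = 512;
const MAX_OUTPUT_PIXELS = 4_000_000;

function validateFluxRequest({ images = [], width, height }) {
  if (images.length > MAX_REFERENCE_IMAGES) {
    throw new Error(`at most ${MAX_REFERENCE_IMAGES} reference images are supported`);
  }
  for (const img of images) {
    if (img.width > MAX_REFERENCE_DIM || img.height > MAX_REFERENCE_DIM) {
      throw new Error(`reference images must be at most ${MAX_REFERENCE_DIM}x${MAX_REFERENCE_DIM}`);
    }
  }
  if (width * height > MAX_OUTPUT_PIXELS) {
    throw new Error("requested output exceeds 4 megapixels");
  }
  return true;
}
```

<p>Failing fast on the client avoids a round trip to the API for requests that would be rejected anyway.</p>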
<p>Because it can understand context, tone, and trends, the model lets you create engaging, editorial-quality digital assets from short prompts.</p>
<p>Beyond the physical world, the model can also generate high-quality digital assets, such as landing page designs or detailed infographics (see below for an example). It also understands multiple languages natively, so combining these two features, we can get a beautiful landing page in French from a French prompt:</p>
<p>Générer une page web visuellement immersive pour un service de promenade de chiens. L'image principale doit dominer l'écran, montrant un chien exubérant courant dans un parc ensoleillé, avec des touches de vert vif (#2ECC71) intégrées subtilement dans le feuillage ou les accessoires du chien. Minimiser le texte pour un impact visuel maximal.</p>
<p><em>(In English: "Generate a visually immersive web page for a dog-walking service. The main image should dominate the screen, showing an exuberant dog running through a sunny park, with touches of bright green (#2ECC71) subtly integrated into the foliage or the dog's accessories. Minimize text for maximum visual impact.")</em></p>
<h2 id="character-consistency-solving-for-stochastic-drift">Character consistency: solving for stochastic drift</h2>
<p>FLUX.2 offers multi-reference editing with state-of-the-art character consistency, ensuring identities, products, and styles remain consistent across tasks. In the world of generative AI, getting a high-quality image is easy; getting the exact same character or product twice has always been the hard part. This phenomenon is known as "stochastic drift," where generated images drift away from the original source material.</p>
<p>Figure 2. Stochastic drift infographic (generated with FLUX.2)</p>
<p>One of FLUX.2's breakthroughs is multi-reference image input, designed to solve this consistency challenge. You can change the background, lighting, or pose of an image without accidentally changing the face of your model or the design of your product.</p>
<p>You can also reference other images or combine multiple images to create something new.</p>
<p>In code, Workers AI supports multi-reference images (up to four) via a multipart form-data upload. The image inputs are binary files and the output is a base64-encoded image:</p>
<pre><code>curl --request POST \
  --url 'https://api.cloudflare.com/client/v4/accounts/{ACCOUNT}/ai/run/@cf/black-forest-labs/flux-2-dev' \
  --header 'Authorization: Bearer {TOKEN}' \
  --header 'Content-Type: multipart/form-data' \
  --form 'prompt=take the subject of image 2 and style it like image 1' \
  --form input_image_0=@/Users/johndoe/Desktop/icedoutkeanu.png \
  --form input_image_1=@/Users/johndoe/Desktop/me.png \
  --form steps=25 \
  --form width=1024 \
  --form height=1024
</code></pre>
<p>We also support this through the Workers AI binding:</p>
<pre><code>// Fetch a reference image and attach it to the form as binary data.
const image = await fetch("http://image-url");
const form = new FormData();

const image_blob = await image.blob();
form.append('input_image_0', image_blob);
form.append('prompt', 'a sunset with the dog in the original image');

const resp = await env.AI.run("@cf/black-forest-labs/flux-2-dev", {
    multipart: {
        body: form,
        contentType: "multipart/form-data"
    }
});
</code></pre>
<h2 id="built-for-real-world-use-cases">Built for real-world use cases</h2>
<p>The newest image model signifies a shift toward functional business use cases, moving beyond simple image-quality improvements.</p>
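<p>Whichever route you use, the response carries the generated image as a base64 string, as noted above. Here is a small sketch for decoding it and sanity-checking that the bytes look like a PNG; the helper name and the PNG assumption are ours, since the actual output format depends on your request:</p>

```javascript
// Decode the base64 image returned by the API and compare the first
// bytes against the PNG signature. Uses the Node.js Buffer global;
// in a browser you would decode with atob() instead.
function decodeGeneratedImage(base64) {
  const bytes = Buffer.from(base64, "base64");
  const pngSignature = [0x89, 0x50, 0x4e, 0x47];
  const looksLikePng = pngSignature.every((b, i) => bytes[i] === b);
  return { bytes, looksLikePng };
}
```

<p>From there, the raw bytes can be written to disk, stored in R2, or streamed back to a client with an <code>image/png</code> content type.</p>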
<p>FLUX.2 enables you to:</p>
<ul>
<li><strong>Create ad variations:</strong> Generate 50 different advertisements using the exact same actor, without their face morphing between frames.</li>
<li><strong>Trust your product shots:</strong> Drop your product onto a model, or into a beach scene, a city street, or a studio table. The environment changes, but your product stays accurate.</li>
<li><strong>Build dynamic editorials:</strong> Produce a full fashion spread where the model looks identical in every single shot, regardless of the angle.</li>
</ul>
<p>Figure 3. Combining the oversized hoodie and sweatpants ad photo (generated with FLUX.2) with Cloudflare's logo to create product renderings with consistent faces, fabrics, and scenery. Note: we also prompted for white Cloudflare font instead of the original black font.</p>
<h2 id="granular-controls-json-prompting-hex-codes-and-more">Granular controls: JSON prompting, hex codes, and more!</h2>
<p>FLUX.2 makes another advancement by letting users control small details in images through tools like JSON prompting and specific hex codes.</p>
<p>For example, you could send this JSON as a prompt (as part of the multipart form input) and the resulting image follows it exactly:</p>
<pre><code>{
  "scene": "A bustling, neon-lit futuristic street market on an alien planet, rain slicking the metal ground",
  "subjects": [
    {
      "type": "Cyberpunk bounty hunter",
      "description": "Female, wearing black matte armor with glowing blue trim, holding a deactivated energy rifle, helmet under her arm, rain dripping off her synthetic hair",
      "pose": "Standing with a casual but watchful stance, leaning slightly against a glowing vendor stall",
      "position": "foreground"
    },
    {
      "type": "Merchant bot",
      "description": "Small, rusted, three-legged drone with multiple blinking red optical sensors, selling glowing synthetic fruit from a tray attached to its chassis",
      "pose": "Hovering slightly, offering an item to the viewer",
      "position": "midground"
    }
  ],
  "style": "noir sci-fi digital painting",
  "color_palette": [
    "deep indigo",
    "electric blue",
    "acid green"
  ],
  "lighting": "Low-key, dramatic, with primary light sources coming from neon signs and street lamps reflecting off wet surfaces",
  "mood": "Gritty, tense, and atmospheric",
  "background": "Towering, dark skyscrapers disappearing into the fog, with advertisements scrolling across their surfaces, flying vehicles (spinners) visible in the distance",
  "composition": "dynamic off-center",
  "camera": {
    "angle": "eye level",
    "distance": "medium close-up",
    "focus": "sharp on subject",
    "lens": "35mm",
    "f-number": "f/1.4",
    "ISO": 400
  },
  "effects": [
    "heavy rain effect",
    "subtle film grain",
    "neon light reflections",
    "mild chromatic aberration"
  ]
}
</code></pre>
<p>To take it further, we can ask the model to recolor the accent
lighting to Cloudflare orange by giving it a specific hex code like #F48120.</p>
<p>The newest FLUX.2 [dev] model is now available on Workers AI. You can get started with the model through our <a href="https://developers.cloudflare.com/workers-ai/models/flux-2-dev" rel="nofollow noopener" target="_blank">developer docs</a> or test it out on our <a href="https://multi-modal.ai.cloudflare.com/" rel="nofollow noopener" target="_blank">multimodal playground</a>.</p>
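<p>As a closing recap, the JSON-prompting and hex-code controls combine naturally: the structured prompt above is sent as an ordinary string in the multipart <code>prompt</code> field, with hex codes like #F48120 riding along inside it. A sketch of building that request body; the helper name is ours, while the field name follows the curl example earlier in the post:</p>

```javascript
// Serialize a structured JSON prompt, adding an accent color (such as
// Cloudflare orange, #F48120) to the palette, into the multipart
// "prompt" field. buildJsonPrompt is our helper, not a Workers AI API.
function buildJsonPrompt(spec, accentHex) {
  const prompt = {
    ...spec,
    color_palette: [...(spec.color_palette ?? []), accentHex],
  };
  const form = new FormData();
  form.append("prompt", JSON.stringify(prompt));
  return form;
}
```

<p>The resulting <code>FormData</code> can be passed to the binding's <code>multipart</code> option or posted with <code>fetch</code>, exactly like the image-upload examples above.</p>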