Google has launched Nano Banana 2, an image generation model rolling out across consumer and business products, including the Gemini app, Google Search and Google Ads.
The model is part of the Gemini family. It carries the product name Nano Banana 2 and the technical label Gemini 3.1 Flash Image. Google positions it as faster than Nano Banana Pro while bringing over features previously limited to the Pro version.
In the Gemini app, Nano Banana 2 is now the default image model across Google’s Fast, Thinking and Pro modes. Google AI Pro and Ultra subscribers still have access to Nano Banana Pro for what Google describes as specialised tasks. Users can access the Pro model by regenerating an image from an in-app menu.
Search rollout
Google is also adding Nano Banana 2 to Search through AI Mode and Lens. Availability is expanding to 141 additional countries and territories and eight more languages across the Google app, as well as mobile and desktop browsers.
The move reflects Google’s push to embed generative AI features into high-traffic products. Image creation and editing tools have become a competitive battleground, with major vendors pairing image generation with editing, search and advertising workflows.
Studio and API
Nano Banana 2 is available in preview in AI Studio and through the Gemini API. It is also available in preview in Vertex AI via the Gemini API on Google Cloud.
Developers and businesses often adopt Google’s image models through APIs before broader product integrations, since API access fits existing content pipelines and internal tools. Adding the model to Vertex AI places it alongside other Gemini services used to build applications, automations and chat experiences.
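As an illustrative sketch only: API access to Gemini image models generally means POSTing a JSON body to a `generateContent` endpoint. The endpoint path, model identifier and field names below follow the pattern of earlier Gemini image models and are assumptions, not confirmed details for Nano Banana 2.

```python
import json

# Hypothetical model identifier derived from the article's
# "Gemini 3.1 Flash Image" label; the real API name may differ.
MODEL = "gemini-3.1-flash-image"

# Endpoint pattern used by earlier Gemini API releases (assumption).
ENDPOINT = (
    "https://generativelanguage.googleapis.com/v1beta/models/"
    f"{MODEL}:generateContent"
)

# Request body asking for an image (and optional text) in the response,
# following the shape earlier Gemini image-generation models accepted.
payload = {
    "contents": [
        {"parts": [{"text": "A watercolour banana on a drafting table"}]}
    ],
    "generationConfig": {"responseModalities": ["TEXT", "IMAGE"]},
}

print(ENDPOINT)
print(json.dumps(payload, indent=2))
```

In practice the body would be sent with an API key header via any HTTP client; the same request shape is what Vertex AI exposes when the model is accessed through the Gemini API on Google Cloud.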
Creative controls
Google highlights features focused on instruction-following and consistency across multiple images. Nano Banana 2 can maintain character resemblance for up to five characters and preserve fidelity for up to 14 objects in a single workflow.
The model also adds controls for aspect ratios and output resolutions, ranging from 512 pixels to 4K. Google says this targets uses such as vertical social media formats and wide-screen backdrops.
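Controls like these typically surface as request-level configuration. The field names below (`imageConfig`, `aspectRatio`, `imageSize`) follow the convention earlier Gemini image models exposed and are assumptions for Nano Banana 2, sketched here as a helper that builds the request body:

```python
import json

def image_request(prompt: str, aspect_ratio: str = "9:16",
                  size: str = "4K") -> dict:
    """Build a generateContent-style body with image output controls.

    Field names are assumptions based on earlier Gemini image models,
    not confirmed parameters for Nano Banana 2.
    """
    return {
        "contents": [{"parts": [{"text": prompt}]}],
        "generationConfig": {
            "responseModalities": ["TEXT", "IMAGE"],
            "imageConfig": {
                "aspectRatio": aspect_ratio,  # e.g. "9:16" vertical social,
                                              # "16:9" wide-screen backdrop
                "imageSize": size,            # e.g. "1K", "2K", "4K"
            },
        },
    }

body = image_request("Wide-screen mountain backdrop", aspect_ratio="16:9")
print(json.dumps(body, indent=2))
```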
Another focus is text rendering inside images. Google says Nano Banana 2 generates legible text for marketing mock-ups and greeting cards, and can translate and localise text within an image.
Grounded outputs
Google is also promoting what it calls the model’s advanced world knowledge. Nano Banana 2 can draw on Gemini’s real-world knowledge base and use real-time information and images from web search when rendering specific subjects.
Grounding image generation in search results has become a key theme for vendors aiming to reduce hallucinations and make outputs more predictable. For Google, the integration ties image generation to Search and Lens, which many users already rely on to identify objects, places and products.
Google also says the model can produce infographics, convert notes into diagrams and generate data visualisations, areas where text accuracy and layout control matter as much as aesthetic quality.
Ads integration
Google has also brought Nano Banana 2 into Google Ads. The model is available for suggestions during campaign creation, placing image generation directly inside an advertising workflow.
Generative AI in advertising has moved quickly from experimentation to integrated features across creative tools. Embedding the model in Ads links generative imagery with campaign setup and iteration, central steps for small businesses and agencies managing multiple accounts.
Flow default
Nano Banana 2 is now the default image generation model in Flow, Google’s creative tool. Google says it is available to all Flow users at zero credits, signalling an intent to broaden usage rather than keep it as a premium-only feature.
Google also says the model is available in Google Antigravity, but did not provide details on how Antigravity users will interact with it.
Provenance tools
Alongside the rollout, Google is putting renewed emphasis on provenance and verification for AI-generated media. It is pairing SynthID with C2PA Content Credentials, which attach information about how content was made.
Google says this approach gives users context not only on whether AI was used but also on how it was used, and says it plans to bring C2PA verification to the Gemini app.
Google also shared usage figures for SynthID verification in Gemini. “Since its launch in November, our SynthID verification feature in Gemini app has been used over 20 million times across various languages, helping people identify Google AI-generated images, video and audio,” a Google spokesperson said.
With Nano Banana 2 rolling out across Gemini, Search, Ads and Google Cloud, Google is positioning it as the default option for fast image generation across much of its product portfolio.