<h1>OpenAI’s new ChatGPT image generator makes faking photos easy</h1>
<p>For most of photography’s <a href="https://en.wikipedia.org/wiki/View_from_the_Window_at_Le_Gras" rel="nofollow noopener" target="_blank">roughly 200-year</a> history, altering a photo convincingly required a darkroom, some Photoshop expertise, or, at minimum, a steady hand with scissors and glue. On Tuesday, OpenAI <a href="https://openai.com/index/new-chatgpt-images-is-here/" rel="nofollow noopener" target="_blank">released a tool</a> that reduces the process to typing a sentence.</p>
<p>It’s not the first company to do so.
While OpenAI had a conversational image-editing model in the works since <a href="https://arstechnica.com/information-technology/2024/05/chatgpt-4o-lets-you-have-real-time-audio-video-conversations-with-emotional-chatbot/" rel="nofollow noopener" target="_blank">GPT-4o</a> in 2024, Google beat OpenAI to market <a href="https://arstechnica.com/ai/2025/03/farewell-photoshop-googles-new-ai-lets-you-edit-images-by-asking/" rel="nofollow noopener" target="_blank">in March</a> with a public prototype, then refined it into the popular <a href="https://arstechnica.com/ai/2025/08/google-improves-gemini-ai-image-editing-with-nano-banana-model/" rel="nofollow noopener" target="_blank">Nano Banana</a> image model (and later <a href="https://arstechnica.com/google/2025/11/google-launches-nano-banana-pro-image-model-adds-ai-image-detection-in-gemini-app/" rel="nofollow noopener" target="_blank">Nano Banana Pro</a>). The enthusiastic response to Google’s image-editing model in the AI community <a href="https://arstechnica.com/ai/2025/12/openai-ceo-declares-code-red-as-gemini-gains-200-million-users-in-3-months/" rel="nofollow noopener" target="_blank">got OpenAI’s attention</a>.</p>
<p>OpenAI’s new <a href="https://openai.com/index/new-chatgpt-images-is-here/" rel="nofollow noopener" target="_blank">GPT Image 1.5</a> is an AI image-synthesis model that reportedly generates images up to four times faster than its predecessor and costs about 20 percent less through the API.
The model rolled out to all ChatGPT users on Tuesday and represents <a href="https://arstechnica.com/ai/2025/03/farewell-photoshop-googles-new-ai-lets-you-edit-images-by-asking/" rel="nofollow noopener" target="_blank">another step</a> toward making photorealistic image manipulation a casual process that requires no particular visual skills.</p>
<p><img width="1024" height="791" src="https://www.newsbeep.com/au/wp-content/uploads/2025/12/queen_of_the_universe_on_a_sofa-1024x791.jpg" class="center large" alt="The Galactic Queen of the Universe added to a photo of a room with a sofa using GPT Image 1.5 in ChatGPT." decoding="async" loading="lazy" /></p>
<p>The “Galactic Queen of the Universe” added to a photo of a room with a sofa using GPT Image 1.5 in ChatGPT.</p>
<p>GPT Image 1.5 is notable because it’s a “native multimodal” image model, meaning image generation happens inside the same neural network that processes language prompts. (In contrast, <a href="https://arstechnica.com/information-technology/2023/11/from-toy-to-tool-dall-e-3-is-a-wake-up-call-for-visual-artists-and-the-rest-of-us/" rel="nofollow noopener" target="_blank">DALL-E 3</a>, an earlier OpenAI image generator previously built into ChatGPT, used a different technique, called diffusion, to generate images.)</p>
<p>This newer type of model, which we <a href="https://arstechnica.com/ai/2025/03/farewell-photoshop-googles-new-ai-lets-you-edit-images-by-asking/" rel="nofollow noopener" target="_blank">covered</a> in more detail in March, treats images and text as the same kind of thing: chunks of data called “tokens” to be predicted, patterns to be completed.
If you upload a photo of your dad and type “put him in a tuxedo at a wedding,” the model processes your words and the image pixels in a unified space, then outputs new pixels the same way it would output the next word in a sentence.</p>
<p>Using this technique, GPT Image 1.5 can alter visual reality more easily than earlier AI image models, changing someone’s pose or position or rendering a scene from a slightly different angle, with varying degrees of success. It can also remove objects, change visual styles, adjust clothing, and refine specific areas while preserving facial likeness across successive edits. You can converse with the AI model about a photograph, refining and revising, the same way you might workshop a draft of an email in ChatGPT.</p>
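<p>Since the article notes the model is also available through the API, here is a minimal sketch of what a single conversational edit turn might look like as a request. This is an illustration only: the <code>/v1/images/edits</code> route exists in OpenAI’s API, but the model identifier <code>gpt-image-1.5</code> is an assumption inferred from the article, not a confirmed API name, and the helper function below is hypothetical. No network call is made.</p>

```python
# Hypothetical sketch: assembling one image-edit turn for OpenAI's
# /v1/images/edits endpoint. The model name "gpt-image-1.5" is an assumption
# based on the article, not a confirmed API identifier. Nothing is sent here;
# this only builds the URL and form fields a real multipart request would use.

def build_edit_request(prompt: str, image_path: str,
                       model: str = "gpt-image-1.5") -> dict:
    """Return the URL and form fields for one conversational edit turn."""
    return {
        "url": "https://api.openai.com/v1/images/edits",
        "data": {"model": model, "prompt": prompt},
        # In a real call the file would be opened and sent as multipart data.
        "files": {"image": image_path},
    }

req = build_edit_request("put him in a tuxedo at a wedding", "dad.jpg")
print(req["data"]["prompt"])  # → put him in a tuxedo at a wedding
```

<p>A follow-up edit in the same “conversation” would simply reuse the previous output image as the new <code>image</code> input with a revised prompt, which is the refine-and-revise loop the article describes.</p>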