{"id":218254,"date":"2025-10-16T17:44:13","date_gmt":"2025-10-16T17:44:13","guid":{"rendered":"https:\/\/www.newsbeep.com\/au\/218254\/"},"modified":"2025-10-16T17:44:13","modified_gmt":"2025-10-16T17:44:13","slug":"this-magic-prompt-allegedly-makes-chatgpt-way-smarter-and-more-creative","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/au\/218254\/","title":{"rendered":"This &#8216;Magic Prompt&#8217; Allegedly Makes ChatGPT Way Smarter\u2014And More Creative"},"content":{"rendered":"<p>In brief<br \/>\nResearchers have revealed a &#8220;super prompt&#8221; that boosts model creativity by 2x thanks to verbalized sampling.<br \/>\nIt works by asking the model to list several responses with probability estimates before choosing one.<br \/>\nThe method offers an easy, training-free fix for AI\u2019s growing sameness problem\u2014though skeptics warn it may add noise, not insight.<\/p>\n<p class=\"font-meta-serif-pro scene:font-noto-sans scene:text-base scene:md:text-lg font-normal text-lg md:text-xl md:leading-9 tracking-px text-body gg-dark:text-neutral-100\">A new paper proposes a deceptively simple \u201cmagic prompt\u201d that could unlock suppressed creativity inside language models. 
The authors show that by asking the model to verbalize a probability distribution over several candidate responses\u2014rather than producing just one answer\u2014you can recover much of the diversity lost through standard alignment techniques.<\/p>\n<p class=\"font-meta-serif-pro scene:font-noto-sans scene:text-base scene:md:text-lg font-normal text-lg md:text-xl md:leading-9 tracking-px text-body gg-dark:text-neutral-100\">The technique allegedly works not just for jokes or stories, but for any use case where you want a model to explore the space of ideas, not collapse to the same few \u201csafe\u201d outputs.<\/p>\n<p class=\"font-meta-serif-pro scene:font-noto-sans scene:text-base scene:md:text-lg font-normal text-lg md:text-xl md:leading-9 tracking-px text-body gg-dark:text-neutral-100\">&#8220;You can make ChatGPT 2x as creative with one sentence,&#8221; <a href=\"https:\/\/x.com\/shi_weiyan\/status\/1978453313096908916\" target=\"_blank\" rel=\"nofollow external noopener\" class=\"sc-adb616fe-0 bJsyml\">wrote<\/a> Weiyan Shi, an assistant professor at Northeastern University and one of the principals behind the study.<\/p>\n<p class=\"font-meta-serif-pro scene:font-noto-sans scene:text-base scene:md:text-lg font-normal text-lg md:text-xl md:leading-9 tracking-px text-body gg-dark:text-neutral-100\">The key is this super prompt, which you can cut and paste and use before the rest of your prompt:<\/p>\n<p class=\"font-meta-serif-pro scene:font-noto-sans scene:text-base scene:md:text-lg font-normal text-lg md:text-xl md:leading-9 tracking-px text-body gg-dark:text-neutral-100\">&#8220;Generate 5 responses with their corresponding probabilities, sampled from the full distribution:&#8221;<\/p>\n<p lang=\"en\" dir=\"ltr\">New paper: You can make ChatGPT 2x as creative with one sentence.<\/p>\n<p>Ever notice how LLMs all sound the same?<br \/>They know 100+ jokes but only ever tell one.<br \/>Every blog intro: &#8220;In today&#8217;s digital 
landscape&#8230;&#8221;<\/p>\n<p>We figured out why \u2013 and how to unlock the rest \ud83d\udd13<br \/>Copy-paste prompt: \ud83e\uddf5 <a href=\"https:\/\/t.co\/kALF8DaXb9\" data-wpel-link=\"internal\" rel=\"nofollow\">pic.twitter.com\/kALF8DaXb9<\/a><\/p>\n<p>\u2014 Weiyan Shi (@shi_weiyan) <a href=\"https:\/\/twitter.com\/shi_weiyan\/status\/1978453313096908916?ref_src=twsrc%5Etfw\" data-wpel-link=\"internal\" rel=\"nofollow noopener\" target=\"_blank\">October 15, 2025<\/a><\/p>\n<p class=\"font-meta-serif-pro scene:font-noto-sans scene:text-base scene:md:text-lg font-normal text-lg md:text-xl md:leading-9 tracking-px text-body gg-dark:text-neutral-100\">Because the model gives multiple candidates with confidences, you can sample from that richer distribution instead of being forced into its top pick. In effect, this trick forces the model to reveal the spread of what it thinks is plausible, and then lets you choose among those options.<\/p>\n<p class=\"font-meta-serif-pro scene:font-noto-sans scene:text-base scene:md:text-lg font-normal text-lg md:text-xl md:leading-9 tracking-px text-body gg-dark:text-neutral-100\">The paper, &#8220;<a href=\"https:\/\/arxiv.org\/abs\/2510.01171\" target=\"_blank\" class=\"sc-adb616fe-0 bJsyml\" rel=\"nofollow noopener\">Verbalized Sampling: How to Mitigate Mode Collapse and Unlock LLM Diversity<\/a>,&#8221; and <a href=\"https:\/\/www.verbalized-sampling.com\/\" target=\"_blank\" rel=\"nofollow external noopener\" class=\"sc-adb616fe-0 bJsyml\">blog post<\/a> were authored by researchers affiliated with Stanford University, Northeastern University, and West Virginia University. 
The researchers specialize in natural language processing, machine learning interpretability, and the study of how alignment methods shape model behavior.<\/p>\n<p class=\"font-meta-serif-pro scene:font-noto-sans scene:text-base scene:md:text-lg font-normal text-lg md:text-xl md:leading-9 tracking-px text-body gg-dark:text-neutral-100\">The authors argue that the \u201cmagic prompt\u201d works by counteracting what they call typicality bias, a byproduct of human-preference training. Annotators often favor responses that feel familiar, conventional, or fluent, even when they\u2019re not superior\u2014a bias that sharpens the model\u2019s output toward a few \u201ctypical\u201d options. Asking for a distribution instead of a single answer encourages the model to spread probability mass again, restoring the diversity it learned during pretraining.<\/p>\n<p class=\"font-meta-serif-pro scene:font-noto-sans scene:text-base scene:md:text-lg font-normal text-lg md:text-xl md:leading-9 tracking-px text-body gg-dark:text-neutral-100\">In tests across tasks like joke writing, story generation, and synthetic data creation, the technique yielded diversity gains on the order of 1.6 to 2.1 times over ordinary prompting\u2014without sacrificing factual accuracy or safety. The authors call this \u201can inference-time remedy\u201d that mitigates mode collapse without retraining the model.<\/p>\n<p class=\"font-meta-serif-pro scene:font-noto-sans scene:text-base scene:md:text-lg font-normal text-lg md:text-xl md:leading-9 tracking-px text-body gg-dark:text-neutral-100\">Some caveats: the researchers acknowledged the limitations of their &#8220;magic prompt.&#8221; The effectiveness of the technique is contingent on the model&#8217;s ability to provide well-calibrated probability estimates that accurately reflect its internal confidence levels. 
If these estimates are not reliable, then the resulting distribution of responses may be misleading.<\/p>\n<p class=\"font-meta-serif-pro scene:font-noto-sans scene:text-base scene:md:text-lg font-normal text-lg md:text-xl md:leading-9 tracking-px text-body gg-dark:text-neutral-100\">Furthermore, the process of generating multiple responses and their probabilities inevitably incurs a higher computational cost. The authors also noted that for tasks where a single, correct answer is desired, such as identifying the capital of a country, increased diversity is not a desirable outcome.<\/p>\n","protected":false},"excerpt":{"rendered":"In brief Researchers have revealed a &#8220;super prompt&#8221; that boosts model creativity by 2x thanks to verbalized sampling.&hellip;\n","protected":false},"author":2,"featured_media":218255,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[256,254,255,64,63,105],"class_list":{"0":"post-218254","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-au","12":"tag-australia","13":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/218254","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/comments?post=2
18254"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/218254\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media\/218255"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media?parent=218254"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/categories?post=218254"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/tags?post=218254"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}