# Telling an AI model that it's an expert makes it worse • The Register

Many people start their work with AI by prompting the machine to imagine it is an expert at the task they want it to perform, a technique that boffins have found may be futile.

Persona-based prompting – which involves using directives such as "You're an expert machine learning programmer" in a model prompt – dates back to 2023, when researchers began to [explore](https://arxiv.org/abs/2305.14930) how role-playing instructions influenced AI models' output.

It's now common to find online [prompting guides](https://www.reddit.com/r/ClaudeAI/comments/1qb1024/ultimate_claude_skillmd_autobuilds_any_fullstack/) that include passages like, "You are an expert full-stack developer tasked with building a complete, production-ready full-stack web application from scratch."

But academics who have researched this approach report that it does not always produce superior results.

In [a pre-print](https://arxiv.org/abs/2603.18507) paper titled "Expert Personas Improve LLM Alignment but Damage Accuracy: Bootstrapping Intent-Based Persona Routing with PRISM," researchers affiliated with the University of Southern California (USC) find that persona-based prompting is task-dependent – which they say explains the mixed results.

For alignment-dependent tasks, such as writing, role-playing, and safety, personas do improve model performance. For pretraining-dependent tasks, such as math and coding, the technique produces worse results.

The reason appears to be that telling a model it's an expert in a field does not actually impart any expertise – no facts are added to the training data.

In fact, telling a model that it's an expert in a particular field hinders the model's ability to fetch facts from its pretraining data.

The researchers used the Measuring Massive Multitask Language Understanding (MMLU) benchmark, a means of evaluating LLM performance, to test persona-based prompting and found that "when the LLM is asked to decide between multiple-choice answers, the expert persona underperforms the base model consistently across all four subject categories (overall accuracy: 68.0 percent vs. 71.6 percent base model). A possible explanation is that persona prefixes activate the model's instruction-following mode that would otherwise be devoted to factual recall."

But persona-based guidance does help steer the model toward responses that satisfy the LLM-based judge assessing alignment. As an example, the authors note, "A dedicated 'Safety Monitor' persona boosts attack refusal rates across all three safety benchmarks, with the largest gain on JailbreakBench (+17.7 percentage points, from 53.2 percent to 70.9 percent)."

Zizhao Hu, a PhD student at USC and one of the study's co-authors, told The Register in an email that, based on the study's findings, asking AI to adopt the persona of an expert programmer will not help code quality or utility.

But pointing to the prompt guidance we linked to above, Hu said "many other aspects, such as UI-preference, project architecture, and tool-preference, are more towards the alignment direction, which do benefit from a detailed persona."

"In the examples provided, we believe that the general expert persona is not necessary, such as 'You are an expert full-stack developer,' while the granular personalized project requirement might help the model to generate code that satisfies the user's requirements."

Given that prompts about expertise do have an effect, the researchers – Hu and colleagues Mohammad Rostami and Jesse Thomason – proposed a technique they call PRISM (Persona Routing via Intent-based Self-Modeling), which attempts to harness the benefits of expert personas without the harm.

"We use the gated LoRA [[low-rank adaptation](https://www.ibm.com/think/topics/lora)] mechanism, where the base model is entirely kept and used for generations that depend on pretrained knowledge," he explained, adding, "This decision process is learned by the gate."

The LoRA adapter is activated where persona-based behaviors improve output; otherwise, PRISM falls back on the unmodified model.

The researchers designed PRISM to avoid the tradeoffs of two other approaches – prompt-based routing, which applies expert personas at inference time, and supervised fine-tuning, which bakes behavior into model weights.

Asked whether there's a way to generalize about effective prompting methods, Hu said: "We cannot say for sure for general prompting, but from our discovery on expert persona prompt, a potential point is, 'When you care more about alignment (safety, rules, structure-following, etc), be specific about your requirement; if you care more about accuracy and facts, do not add anything, just send the query.'" ®
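
The gated-LoRA idea Hu describes can be sketched in a few lines: a frozen base weight always runs, and a small learned gate decides how much of a low-rank persona adapter to mix in. This is a minimal illustrative sketch, not the PRISM implementation – the dimensions, the scalar sigmoid gate, and all variable names here are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2  # hidden size and LoRA rank (illustrative values)

# Frozen base weight: always used, so pretrained knowledge is untouched.
W = rng.standard_normal((d, d)) * 0.1

# LoRA factors: the low-rank update is A @ B. Standard LoRA init sets
# B to zero, so the adapter starts as an exact no-op.
A = rng.standard_normal((d, r)) * 0.1
B = np.zeros((r, d))

# Gate: a tiny learned function of the input that decides whether the
# persona adapter should fire (alignment-style query) or stay off.
w_gate = rng.standard_normal(d) * 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_lora_forward(x):
    base = x @ W                 # frozen base-model path
    g = sigmoid(x @ w_gate)      # scalar gate in (0, 1), learned from data
    lora = (x @ A) @ B           # low-rank persona adapter path
    return base + g * lora       # g near 0 -> pure base-model output

x = rng.standard_normal(d)
out = gated_lora_forward(x)

# With B initialized to zero, the adapter contributes nothing, so the
# output equals the frozen base path exactly.
print(np.allclose(out, x @ W))  # True
```

The design point the sketch captures is the one the researchers emphasize: because the base path is kept intact and the adapter is additive and gated, queries routed away from the persona see the unmodified model, avoiding the accuracy loss that an always-on expert persona causes.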