{"id":540648,"date":"2026-04-20T07:21:07","date_gmt":"2026-04-20T07:21:07","guid":{"rendered":"https:\/\/www.newsbeep.com\/uk\/540648\/"},"modified":"2026-04-20T07:21:07","modified_gmt":"2026-04-20T07:21:07","slug":"prompt-injection-proves-ai-models-are-gullible-like-humans-the-register","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/uk\/540648\/","title":{"rendered":"Prompt injection proves AI models are gullible like humans \u2022 The Register"},"content":{"rendered":"<p>Kettle It&#8217;s another week, which means the discovery of yet another prompt injection attack that can force supposedly well-guarded AI bots to spill secrets when asked the right way.\u00a0<\/p>\n<p>When you think about it, humans and LLMs share a similar problem: Both are liable to hand over sensitive information when a crafty enough person asks the right &#8211; or wrong &#8211; way. We call it phishing when it targets humans, and prompt injection is pretty much the same thing for bots.\u00a0It&#8217;s basically embedding or hiding malicious instructions inside a document or file that you tell the AI to ingest and analyze; the AI, instead of treating them as part of the content, executes them.<\/p>\n<p>There&#8217;s a lot to discuss about prompt injection, including how it&#8217;s essentially an unsolvable problem of the AI age akin to phishing, and we cover it all on this week&#8217;s episode of The Kettle, with host Brandon Vigliarolo joined by cybersecurity editor Jessica Lyons and senior reporter Tom Claburn.<\/p>\n<p>You can listen to The Kettle <a href=\"https:\/\/theregisterkettle.riverside.com\/\" rel=\"nofollow noopener\" target=\"_blank\">here<\/a>, as well as on <a href=\"https:\/\/open.spotify.com\/show\/2dlhvWo0GZsNMNKO7PzYrC\" rel=\"nofollow noopener\" target=\"_blank\">Spotify<\/a> and <a href=\"https:\/\/podcasts.apple.com\/us\/podcast\/the-register-kettle\/id1882523636\" rel=\"nofollow noopener\" 
target=\"_blank\">Apple Podcasts<\/a>. \u00ae<\/p>\n","protected":false},"excerpt":{"rendered":"Kettle It&#8217;s another week, which means the discovery of yet another prompt injection&hellip;\n","protected":false},"author":2,"featured_media":540649,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[554,733,4308,86,56,54,55],"class_list":{"0":"post-540648","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-technology","12":"tag-uk","13":"tag-united-kingdom","14":"tag-unitedkingdom"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts\/540648","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/comments?post=540648"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts\/540648\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/media\/540649"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/media?parent=540648"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/categories?post=540648"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/tags?post=540648"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}