{"id":1331,"date":"2025-07-17T20:50:08","date_gmt":"2025-07-17T20:50:08","guid":{"rendered":"https:\/\/www.newsbeep.com\/au\/1331\/"},"modified":"2025-07-17T20:50:08","modified_gmt":"2025-07-17T20:50:08","slug":"to-secure-ai-start-thinking-like-an-attacker","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/au\/1331\/","title":{"rendered":"To Secure AI, Start Thinking Like an Attacker"},"content":{"rendered":"<p class=\"ContentParagraph ContentParagraph_align_left\" data-testid=\"content-paragraph\">By Peter Garraghan, Mindgard<\/p>\n<p class=\"ContentParagraph ContentParagraph_align_left\" data-testid=\"content-paragraph\">The rapid adoption of <a class=\"ContentText-BodyTextChunk ContentText-BodyTextChunk_link ContentText-BodyTextChunk_underline\" target=\"_self\" href=\"https:\/\/www.itprotoday.com\/ai-machine-learning\/ai-basics-a-quick-reference-guide-for-it-professionals\" rel=\"nofollow noopener\">AI<\/a> has unleashed a flood of innovation, but it has also exposed a glaring cybersecurity bottleneck of the industry&#8217;s own making. Organizations are racing to implement generative models, autonomous agents, and AI-enhanced services, yet too often without any real assurance that these systems are secure. According to a recent World Economic Forum <a class=\"ContentText-BodyTextChunk ContentText-BodyTextChunk_link ContentText-BodyTextChunk_underline\" target=\"_blank\" href=\"https:\/\/www.weforum.org\/publications\/global-cybersecurity-outlook-2025\/digest\/\" rel=\"nofollow noopener\">report<\/a>, 66% of businesses expect AI to impact cybersecurity profoundly in the coming year, but only 37% currently assess the security of AI tools before deployment. This is a dangerous paradox: recognition without readiness.<\/p>\n<p class=\"ContentParagraph ContentParagraph_align_left\" data-testid=\"content-paragraph\">We&#8217;ve seen this before. During the rise of cloud computing, security was often an afterthought, until the breaches began. 
With AI, however, the risks are not only faster-moving and more complex, but also fundamentally different. Treating AI like any other software is a category error. These systems are non-deterministic, probabilistic, and deeply entangled in application workflows. They require a new playbook.<\/p>\n<p>Your AI Can (and Likely Will) Be Hacked<\/p>\n<p class=\"ContentParagraph ContentParagraph_align_left\" data-testid=\"content-paragraph\">This is not to sound apocalyptic: at its core, AI still involves risks related to software, hardware, and data. These are not unfamiliar to security practitioners, who already have controls and processes in place. However, the emergence of a new technological paradigm requires adapting existing tools, training, and playbooks \u2014 and AI is no exception.<\/p>\n<p data-component=\"related-article\" class=\"RelatedArticle\">Related:<a class=\"RelatedArticle-RelatedContent\" href=\"https:\/\/www.itprotoday.com\/cloud-computing\/ai-infrastructure-inflection-point-60-cloud-costs-signal-time-to-go-private\" target=\"_self\" data-discover=\"true\" rel=\"nofollow noopener\">AI Infrastructure Inflection Point: 60% Cloud Costs Signal Time to Go Private<\/a><\/p>\n<p class=\"ContentParagraph ContentParagraph_align_left\" data-testid=\"content-paragraph\">AI models today expose entirely novel attack surfaces. Prompt injection, jailbreaks, adversarial chaining, and model extraction are not speculative threats. They are active techniques already being used in the wild. Consider &#8220;Ghost in the Shell&#8221; scenarios, where sensitive data can be resurrected from a model&#8217;s memory long after the original dataset has been deleted. 
Or data poisoning campaigns, such as the manipulation of open-source training data, designed to alter downstream model behavior and spread disinformation.<\/p>\n<p class=\"ContentParagraph ContentParagraph_align_left\" data-testid=\"content-paragraph\">These attacks can take just minutes to execute and often require minimal technical skill. In fact, journalists with no specialist hacking background <a class=\"ContentText-BodyTextChunk ContentText-BodyTextChunk_link ContentText-BodyTextChunk_underline\" target=\"_blank\" href=\"https:\/\/www.theguardian.com\/technology\/2024\/dec\/24\/chatgpt-search-tool-vulnerable-to-manipulation-and-deception-tests-show\" rel=\"nofollow noopener\">demonstrated<\/a> that OpenAI&#8217;s ChatGPT search tool was vulnerable to hidden-text prompt injection. By embedding invisible instructions into web pages, attackers were able to manipulate <a class=\"ContentText-BodyTextChunk ContentText-BodyTextChunk_link ContentText-BodyTextChunk_underline\" target=\"_self\" href=\"https:\/\/www.itprotoday.com\/ai-machine-learning\/what-is-chatgpt-how-it-works-and-best-uses-for-chatbots\" rel=\"nofollow noopener\">ChatGPT<\/a> into generating misleading outputs, such as artificially positive product reviews, and even returning malicious code. If that is what non-experts can achieve, imagine what skilled threat actors are capable of.<\/p>\n<p>OWASP Is Just the Start<\/p>\n<p data-component=\"related-article\" class=\"RelatedArticle\">Related:<a class=\"RelatedArticle-RelatedContent\" href=\"https:\/\/www.itprotoday.com\/cloud-computing\/how-to-select-the-right-cloud-gpu-instance-for-deploying-ai-models\" target=\"_self\" data-discover=\"true\" rel=\"nofollow noopener\">How to Select the Right Cloud GPU Instance for Deploying AI Models<\/a><\/p>\n<p class=\"ContentParagraph ContentParagraph_align_left\" data-testid=\"content-paragraph\">AI security is a rapidly moving target. 
As models become more capable and embedded in critical workflows, new classes of vulnerabilities will inevitably emerge. Some are due to advances in model architecture, others to complex human-AI interaction patterns. Attackers are already probing areas like model extraction, watermark removal, prompt injection chaining, and indirect prompt exploitation through third-party tools.<\/p>\n<p class=\"ContentParagraph ContentParagraph_align_left\" data-testid=\"content-paragraph\">Frameworks like OWASP and MITRE help categorize risks, but they are taxonomies, not test plans. A single line item \u2014 say, &#8220;LLM02:2025 Sensitive Information Disclosure&#8221; \u2014 masks dozens of specific attack vectors. Moreover, attack categories continue to expand beyond the model itself:<\/p>\n<p class=\"ContentParagraph ContentParagraph_align_left\" data-testid=\"content-paragraph\">Systemic integration risks occur when AI models interact with plugins, APIs, or orchestration layers like RAG pipelines. These interfaces often become entry points for serialization attacks, privilege escalation, or command injection. These risks are frequently missed by conventional security tools.<\/p>\n<p class=\"ContentParagraph ContentParagraph_align_left\" data-testid=\"content-paragraph\">Runtime-only threats emerge under live input conditions. These include context overflow, logic corruption, and behavior drift, which only manifest under operational stress or dynamic user interaction. They are not detectable through static testing.<\/p>\n<p class=\"ContentParagraph ContentParagraph_align_left\" data-testid=\"content-paragraph\">Data exposure and memory-based attacks exploit models that retain conversational context or ingest user data during inference. 
These can result in sensitive information leaking through outputs or logs, or being exposed through the misuse of fine-tuned models.<\/p>\n<p data-component=\"related-article\" class=\"RelatedArticle\">Related:<a class=\"RelatedArticle-RelatedContent\" href=\"https:\/\/www.itprotoday.com\/software-development-techniques\/why-agentic-ai-is-a-developer-s-new-ally-not-adversary\" target=\"_self\" data-discover=\"true\" rel=\"nofollow noopener\">Why Agentic AI Is a Developer&#8217;s New Ally, Not Adversary<\/a><\/p>\n<p class=\"ContentParagraph ContentParagraph_align_left\" data-testid=\"content-paragraph\">Just as traditional software security evolved from buffer overflows to sophisticated supply chain attacks, AI security is now on a similar trajectory. The space is early, and the discovery curve is steep.<\/p>\n<p>Executive-Level Concerns of AI Adoption<\/p>\n<p class=\"ContentParagraph ContentParagraph_align_left\" data-testid=\"content-paragraph\">These technical vulnerabilities, if left untested, do not exist in isolation. They manifest as broader organizational risks that extend well beyond the engineering domain. When viewed through the lens of operational impact, the consequences of insufficient AI security testing map directly to failures in safety, security, and business assurance. Categorizing the risk landscape this way helps translate technical threats into executive-level priorities.<\/p>\n<p class=\"ContentParagraph ContentParagraph_align_left\" data-testid=\"content-paragraph\">Safety risks involve failures in model behavior that lead to harmful or unintended outcomes. This includes misaligned outputs, instruction-following errors, and toxic or biased responses. 
Without adversarial prompt testing and stress testing under edge conditions, models may behave unpredictably or damage reputations, especially in regulated or customer-facing environments.<\/p>\n<p class=\"ContentParagraph ContentParagraph_align_left\" data-testid=\"content-paragraph\">Security risks stem from adversarial exploitation of the model or its surrounding system. This includes prompt injection, jailbreaks, remote code execution via plugin interfaces, and data leakage through model outputs or persistent context. These vulnerabilities often escape traditional scanning tools and are enabled by tokenization quirks, malformed inputs, or untrusted integrations.<\/p>\n<p class=\"ContentParagraph ContentParagraph_align_left\" data-testid=\"content-paragraph\">Business risks arise when AI tools fail to meet operational or compliance standards. This includes regulatory violations from unauthorized data processing, system outages due to untested model behavior at scale, and hidden costs from cascading system failures. These risks increase when AI tools are deployed in decision-making workflows without formal assurance processes.<\/p>\n<p class=\"ContentParagraph ContentParagraph_align_left\" data-testid=\"content-paragraph\">These are no longer theoretical. In regulated sectors such as finance, healthcare, and critical infrastructure, such failures could be catastrophic.<\/p>\n<p>Toward Adaptive, AI-Native Security<\/p>\n<p class=\"ContentParagraph ContentParagraph_align_left\" data-testid=\"content-paragraph\">Just as <a class=\"ContentText-BodyTextChunk ContentText-BodyTextChunk_link ContentText-BodyTextChunk_underline\" target=\"_self\" href=\"https:\/\/www.itprotoday.com\/it-security\/what-is-devsecops-\" rel=\"nofollow noopener\">DevSecOps<\/a> transformed how we deliver software, we now need adversarial AIOps: AI security embedded from model development to production runtime. 
Automated AI-specific security testing must become a standard part of <a class=\"ContentText-BodyTextChunk ContentText-BodyTextChunk_link ContentText-BodyTextChunk_underline\" target=\"_self\" href=\"https:\/\/www.itprotoday.com\/devops\/how-to-set-up-a-ci-cd-pipeline\" rel=\"nofollow noopener\">CI\/CD pipelines<\/a>, not a niche capability.<\/p>\n<p class=\"ContentParagraph ContentParagraph_align_left\" data-testid=\"content-paragraph\">Ultimately, the security bottleneck in AI isn&#8217;t due to a lack of awareness. It&#8217;s due to inertia. We&#8217;re applying yesterday&#8217;s tools to tomorrow&#8217;s risks. And unless we fix that, the promise of AI will remain shackled by vulnerabilities we chose not to confront.<\/p>\n<p class=\"ContentParagraph ContentParagraph_align_left\" data-testid=\"content-paragraph\">About the author:<\/p>\n<p class=\"ContentParagraph ContentParagraph_align_left\" data-testid=\"content-paragraph\"><a class=\"ContentText-BodyTextChunk ContentText-BodyTextChunk_link ContentText-BodyTextChunk_italic ContentText-BodyTextChunk_underline\" target=\"_blank\" href=\"https:\/\/www.linkedin.com\/in\/pgarraghan\/\" rel=\"nofollow noopener\">Dr. Peter Garraghan<\/a> is CEO &amp; co-founder at <a class=\"ContentText-BodyTextChunk ContentText-BodyTextChunk_link ContentText-BodyTextChunk_italic ContentText-BodyTextChunk_underline\" target=\"_blank\" href=\"http:\/\/mindgard.ai\" rel=\"nofollow noopener\">Mindgard<\/a>, the leader in artificial intelligence security testing. Founded at Lancaster University and backed by cutting-edge research, Mindgard enables organizations to secure their AI systems from new threats that traditional application security tools cannot address. As a professor of computer science at Lancaster University, Peter is an internationally recognized expert in AI security. He has devoted his career to developing advanced technologies to combat the growing threats facing AI. 
With over \u20ac11.6 million in research funding and more than 60 published scientific papers, his contributions span both scientific innovation and practical solutions.<\/p>\n","protected":false},"excerpt":{"rendered":"By Peter Garraghan, Mindgard The rapid adoption of AI has unleashed a flood of innovation, but it has&hellip;\n","protected":false},"author":2,"featured_media":1332,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[21],"tags":[64,63,257,105],"class_list":{"0":"post-1331","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-computing","8":"tag-au","9":"tag-australia","10":"tag-computing","11":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/1331","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/comments?post=1331"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/1331\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media\/1332"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media?parent=1331"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/categories?post=1331"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/tags?post=1331"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}