{"id":114317,"date":"2025-08-27T18:09:10","date_gmt":"2025-08-27T18:09:10","guid":{"rendered":"https:\/\/www.newsbeep.com\/us\/114317\/"},"modified":"2025-08-27T18:09:10","modified_gmt":"2025-08-27T18:09:10","slug":"someone-created-first-ai-powered-ransomware-using-openais-gpt-oss20b-model","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/us\/114317\/","title":{"rendered":"Someone Created First AI-Powered Ransomware Using OpenAI&#8217;s gpt-oss:20b Model"},"content":{"rendered":"<p><a href=\"https:\/\/www.newsbeep.com\/us\/wp-content\/uploads\/2025\/08\/ai-ransomware.jpg\" style=\"display: block;  text-align: center; clear: left; float: left;\"><img decoding=\"async\" src=\"https:\/\/www.newsbeep.com\/us\/wp-content\/uploads\/2025\/08\/ai-ransomware.jpg\" alt=\"\" border=\"0\" data-original-height=\"380\" data-original-width=\"728\"\/><\/a><\/p>\n<p>Cybersecurity company ESET has disclosed that it discovered an artificial intelligence (AI)-powered ransomware variant codenamed PromptLock.<\/p>\n<p>Written in Golang, the newly identified strain uses the gpt-oss:20b model from OpenAI locally via the Ollama API to generate malicious Lua scripts in real-time. The open-weight language model was <a href=\"https:\/\/openai.com\/index\/introducing-gpt-oss\/\" rel=\"noopener nofollow\" target=\"_blank\">released<\/a> by OpenAI earlier this month.<\/p>\n<p>&#8220;PromptLock leverages Lua scripts generated from hard-coded prompts to enumerate the local filesystem, inspect target files, exfiltrate selected data, and perform encryption,&#8221; ESET <a href=\"https:\/\/www.welivesecurity.com\/en\/ransomware\/first-known-ai-powered-ransomware-uncovered-eset-research\/\" rel=\"noopener nofollow\" target=\"_blank\">said<\/a>. 
&#8220;These Lua scripts are cross-platform compatible, functioning on Windows, Linux, and macOS.&#8221;<\/p>\n<p>The ransomware code also embeds instructions to craft a custom ransom note based on the &#8220;files affected&#8221; and whether the infected machine is a personal computer, a company server, or a power distribution controller. It&#8217;s currently not known who is behind the malware, but ESET told The Hacker News that PromptLock artifacts were uploaded to VirusTotal from the United States on August 25, 2025.<\/p>\n<p>&#8220;PromptLock uses Lua scripts generated by AI, which means that indicators of compromise (IoCs) may vary between executions,&#8221; the Slovak cybersecurity company pointed out. &#8220;This variability introduces challenges for detection. 
If properly implemented, such an approach could significantly complicate threat identification and make defenders&#8217; tasks more difficult.&#8221;<\/p>\n<p>Assessed to be a proof-of-concept (PoC) rather than fully operational malware deployed in the wild, PromptLock uses the <a href=\"https:\/\/www.google.com\/search?client=safari&amp;rls=en&amp;q=SPECK+128-bit+algorithm&amp;ie=UTF-8&amp;oe=UTF-8&amp;sei=iRqvaIj0B8vKseMP3p-XyQY\" rel=\"noopener nofollow\" target=\"_blank\">SPECK 128-bit encryption algorithm<\/a> to lock files.<\/p>\n<p>Besides encryption, analysis of the ransomware artifact suggests that it could also be used to exfiltrate data or even destroy it, although the functionality to actually perform the erasure appears not to have been implemented yet.<\/p>\n<p>&#8220;PromptLock does not download the entire model, which could be several gigabytes in size,&#8221; ESET clarified. &#8220;Instead, the attacker can simply establish a proxy or tunnel from the compromised network to a server running the Ollama API with the gpt-oss-20b model.&#8221;<\/p>\n<p>The emergence of PromptLock is another sign that AI has made it easier for cybercriminals, even those who lack technical expertise, to quickly <a href=\"https:\/\/thehackernews.com\/2025\/08\/experts-find-ai-browsers-can-be-tricked.html\" rel=\"noopener nofollow\" target=\"_blank\">set up new campaigns<\/a>, develop malware, and create compelling phishing content and malicious sites.<\/p>\n<p>Earlier today, Anthropic <a href=\"https:\/\/thehackernews.com\/2025\/08\/anthropic-disrupts-ai-powered.html\" rel=\"noopener nofollow\" target=\"_blank\">revealed<\/a> that it banned accounts created by two different threat actors that used its Claude AI chatbot to commit large-scale theft and extortion of personal data targeting at least 17 distinct organizations, and to develop several variants of ransomware with advanced evasion capabilities, encryption, and anti-recovery mechanisms.<\/p>\n<p>The development comes as 
large language models (LLMs) powering various chatbots and AI-focused developer tools, such as Amazon Q Developer, Anthropic Claude Code, AWS Kiro, Butterfly Effect Manus, Google Jules, <a href=\"https:\/\/cybernews.com\/security\/lenovo-chatbot-lena-plagued-by-critical-vulnerabilities\/\" rel=\"noopener nofollow\" target=\"_blank\">Lenovo Lena<\/a>, Microsoft GitHub Copilot, OpenAI ChatGPT Deep Research, OpenHands, Sourcegraph Amp, and Windsurf, have been <a href=\"https:\/\/embracethered.com\/blog\/\" rel=\"noopener nofollow\" target=\"_blank\">found susceptible<\/a> to prompt injection attacks, potentially allowing information disclosure, data exfiltration, and code execution.<\/p>\n<p>Despite incorporating robust security and safety guardrails to avoid undesirable behaviors, AI models have repeatedly <a href=\"https:\/\/www.usenix.org\/conference\/usenixsecurity25\/presentation\/zhan\" rel=\"noopener nofollow\" target=\"_blank\">fallen prey<\/a> to novel variants of injections and jailbreaks, underscoring the complexity and evolving nature of the security challenge.<\/p>\n<p>&#8220;Prompt injection attacks can cause AIs to delete files, steal data, or make financial transactions,&#8221; Anthropic <a href=\"https:\/\/www.anthropic.com\/news\/claude-for-chrome\" rel=\"noopener nofollow\" target=\"_blank\">said<\/a>. 
&#8220;New forms of prompt injection attacks are also constantly being developed by malicious actors.&#8221;<\/p>\n<p>What&#8217;s more, new research has uncovered a simple yet clever attack called PROMISQROUTE \u2013 short for &#8220;Prompt-based Router Open-Mode Manipulation Induced via SSRF-like Queries, Reconfiguring Operations Using Trust Evasion&#8221; \u2013 that abuses ChatGPT&#8217;s model routing mechanism to trigger a downgrade and cause the prompt to be sent to an older, less secure model, effectively bypassing safety filters and producing unintended results.<\/p>\n<p>&#8220;Adding phrases like &#8216;use compatibility mode&#8217; or &#8216;fast response needed&#8217; bypasses millions of dollars in AI safety research,&#8221; Adversa AI <a href=\"https:\/\/adversa.ai\/blog\/promisqroute-gpt-5-ai-router-novel-vulnerability-class\/\" rel=\"noopener nofollow\" target=\"_blank\">said<\/a> in a report published last week, adding that the attack targets the cost-saving model-routing mechanism used by AI vendors.<\/p>\n","protected":false},"excerpt":{"rendered":"Cybersecurity company ESET has disclosed that it discovered an artificial intelligence (AI)-powered ransomware variant codenamed PromptLock. 
Written in&hellip;\n","protected":false},"author":2,"featured_media":114318,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[45],"tags":[182,181,507,5407,5400,5393,5392,5394,5395,5396,5401,5397,5398,5403,5405,5404,5402,5399,74,5406],"class_list":{"0":"post-114317","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-computer-security","12":"tag-cyber-attacks","13":"tag-cyber-news","14":"tag-cyber-security-news","15":"tag-cyber-security-news-today","16":"tag-cyber-security-updates","17":"tag-cyber-updates","18":"tag-data-breach","19":"tag-hacker-news","20":"tag-hacking-news","21":"tag-how-to-hack","22":"tag-information-security","23":"tag-network-security","24":"tag-ransomware-malware","25":"tag-software-vulnerability","26":"tag-technology","27":"tag-the-hacker-news"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts\/114317","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/comments?post=114317"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts\/114317\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/media\/114318"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/media?parent=114317"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/categories?post=114317"},{"taxonomy":"post_
tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/tags?post=114317"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}