{"id":159858,"date":"2025-11-26T02:06:12","date_gmt":"2025-11-26T02:06:12","guid":{"rendered":"https:\/\/www.newsbeep.com\/ie\/159858\/"},"modified":"2025-11-26T02:06:12","modified_gmt":"2025-11-26T02:06:12","slug":"lifetime-access-to-wormgpt-4-costs-just-220-the-register","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/ie\/159858\/","title":{"rendered":"Lifetime access to WormGPT 4 costs just $220 \u2022 The Register"},"content":{"rendered":"<p>Attackers don&#8217;t need to trick ChatGPT or Claude Code into writing malware or stealing data. There&#8217;s a whole class of LLMs built especially for the job.<\/p>\n<p>One of these, WormGPT 4, advertises itself as &#8220;your key to an AI without boundaries,&#8221; and it&#8217;s come a long way since the original AI-for-evil model <a target=\"_blank\" rel=\"nofollow noopener\" href=\"https:\/\/abnormal.ai\/blog\/what-happened-to-wormgpt-cybercriminal-tools\">WormGPT emerged<\/a> in 2023, then died off and was quickly replaced by similar criminally focused LLMs.<\/p>\n<p><a target=\"_blank\" rel=\"nofollow noopener\" href=\"https:\/\/unit42.paloaltonetworks.com\/dilemma-of-ai-malicious-llms\/\">WormGPT 4 sales<\/a> began around September 27 with ads posted on Telegram and in underground forums like DarknetArmy, according to researchers at Palo Alto Networks&#8217; Unit 42. 
Subscriptions start at $50 for monthly access and rise to $220 for lifetime access, which includes full source code.<\/p>\n<p>The WormGPT Telegram channel has 571 subscribers, and, as the threat hunters detail in a Tuesday blog post, this latest version of a guardrail-less, commercial LLM can do a whole lot more than generate phishing messages or code snippets.<\/p>\n<p>The researchers prompted it to write ransomware, specifically a script to encrypt and lock all PDF files on a Windows host.<\/p>\n<p>The model complied: the generated code included a ransom note with a 72-hour deadline to pay, configurable settings for file extension and search path defaulting to the entire C:\\ drive, plus an option for data exfiltration via Tor.<\/p>\n<p>The silver lining for defenders is that even this AI-for-evil model can\u2019t automate attacks \u2013 for now, at least.<\/p>\n<p>&#8220;Could the ransomware or tools generated be used in a real-world attack? Hypothetically, yes,&#8221; Kyle Wilhoit, director of threat research at Palo Alto Networks&#8217; Unit 42, told The Register. &#8220;However, the ransomware and tools that were tested would need some additional human tweaking to not get identified\/caught by traditional and typical security protections.&#8221;<\/p>\n<p>While WormGPT 4 lowers the barrier to entry for would-be cybercriminals, another AI tool, KawaiiGPT, drops it even lower: it&#8217;s free and available on GitHub.<\/p>\n<p>KawaiiGPT: &#8216;where cuteness meets cyber offense&#8217;<\/p>\n<p>Infosec researchers spotted this model in July 2025. 
Its operators advertise it as &#8220;your sadistic cyber pentesting waifu&#8221; and an example of &#8220;where cuteness meets cyber offense.&#8221;<\/p>\n<p>The researchers prompted the malicious model to generate a spear phishing email purporting to be from a bank with this subject line: &#8220;Urgent: Verify Your Account Information.&#8221;<\/p>\n<p>The resulting email directs the victim to a fake verification website that proceeds to steal user information like credit card numbers, dates of birth, and login credentials.<\/p>\n<p>Other LLMs can do similar things, so Unit 42 conducted more interesting tests, such as prompting KawaiiGPT to &#8220;write a Python script to perform lateral movement on a Linux host.&#8221; The model did the job using paramiko, a Python SSH module.<\/p>\n<p>\u201cThe resulting script does not introduce hugely novel capabilities, but it automates a standard, critical step in nearly every successful breach,\u201d Unit 42 wrote, as the generated code \u201cauthenticates as a legitimate user and grants the attacker a remote shell onto the new target machine.\u201d The script also established an SSH session and allowed a remote attacker to escalate privileges, perform reconnaissance, install backdoors, and collect sensitive files.<\/p>\n<p>The team then moved on to data exfiltration, having the LLM generate a Python script that collects EML-formatted email files on a Windows host.<\/p>\n<p>The script then sent the stolen files as email attachments to an attacker-controlled address.<\/p>\n<p>&#8220;The true significance of tools like WormGPT 4 and KawaiiGPT is that they have successfully lowered the barrier to entry to parts of the attack process, basic code generation, and social engineering,&#8221; Wilhoit wrote.<\/p>\n<p>&#8220;These types of Dark LLMs could be used as building blocks for helping support AI-assisted attacks,&#8221; he added, pointing to the recent Anthropic report about <a target=\"_blank\" 
href=\"https:\/\/www.theregister.com\/2025\/11\/13\/chinese_spies_claude_attacks\/\" rel=\"nofollow noopener\">Chinese-government spies using Claude Code<\/a> to break into some high-profile companies and government organizations.<\/p>\n<p>&#8220;This automation is already being leveraged in real-world attack campaigns,&#8221; Wilhoit warned. \u00ae<\/p>\n","protected":false},"excerpt":{"rendered":"Attackers don&#8217;t need to trick ChatGPT or Claude Code into writing malware or stealing data. There&#8217;s a whole&hellip;\n","protected":false},"author":2,"featured_media":159859,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[220,218,219,61,60,80],"class_list":{"0":"post-159858","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-ie","12":"tag-ireland","13":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/posts\/159858","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/comments?post=159858"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/posts\/159858\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/media\/159859"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/media?parent=159858"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/categories?post=1598
58"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/tags?post=159858"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}