{"id":274064,"date":"2025-11-06T00:23:11","date_gmt":"2025-11-06T00:23:11","guid":{"rendered":"https:\/\/www.newsbeep.com\/us\/274064\/"},"modified":"2025-11-06T00:23:11","modified_gmt":"2025-11-06T00:23:11","slug":"5-ai-developed-malware-families-analyzed-by-google-fail-to-work-and-are-easily-detected","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/us\/274064\/","title":{"rendered":"5 AI-developed malware families analyzed by Google fail to work and are easily detected"},"content":{"rendered":"<p>The assessments provide a strong counterargument to the exaggerated narratives being trumpeted by AI companies, many seeking new rounds of venture funding, that AI-generated malware is widespread and part of a new paradigm that poses a current threat to traditional defenses.<\/p>\n<p>A typical example is Anthropic, which <a href=\"https:\/\/www.anthropic.com\/news\/detecting-countering-misuse-aug-2025\" rel=\"nofollow noopener\" target=\"_blank\">recently reported<\/a> its discovery of a threat actor that used its Claude LLM to \u201cdevelop, market, and distribute several variants of ransomware, each with advanced evasion capabilities, encryption, and anti-recovery mechanisms.\u201d The company went on to say: \u201cWithout Claude\u2019s assistance, they could not implement or troubleshoot core malware components, like encryption algorithms, anti-analysis techniques, or Windows internals manipulation.\u201d<\/p>\n<p>Startup ConnectWise <a href=\"https:\/\/www.connectwise.com\/blog\/the-dark-side-how-threat-actors-are-using-ai\" rel=\"nofollow noopener\" target=\"_blank\">recently said<\/a> that generative AI was \u201clowering the bar of entry for threat actors to get into the game.\u201d The post cited a <a href=\"https:\/\/cdn.openai.com\/threat-intelligence-reports\/influence-and-cyber-operations-an-update_October-2024.pdf\" rel=\"nofollow noopener\" target=\"_blank\">separate report<\/a> from OpenAI that found 20 separate threat actors using its 
ChatGPT AI engine to develop malware for tasks including identifying vulnerabilities, developing exploit code, and debugging that code. BugCrowd, meanwhile, <a href=\"https:\/\/www.bugcrowd.com\/resources\/report\/inside-the-mind-of-a-hacker\/\" rel=\"nofollow noopener\" target=\"_blank\">said<\/a> that in a survey of self-selected individuals, \u201c74 percent of hackers agree that AI has made hacking more accessible, opening the door for newcomers to join the fold.\u201d<\/p>\n<p>In some cases, the authors of such reports acknowledge the same limitations noted in this article. Wednesday\u2019s report from Google says that in its analysis of AI tools used to develop code for managing command-and-control channels and obfuscating its operations, \u201cwe did not see evidence of successful automation or any breakthrough capabilities.\u201d OpenAI said much the same thing. Still, these disclaimers are rarely made prominently, and they are often downplayed in the resulting frenzy to portray AI-assisted malware as a near-term threat.<\/p>\n<p>Google\u2019s report provides at least one other useful finding. One threat actor that exploited the company\u2019s Gemini AI model was able to bypass its guardrails by posing as a white-hat hacker doing research for a capture-the-flag competition. These competitive exercises are designed to teach and demonstrate effective cyberattack strategies to both participants and onlookers.<\/p>\n<p>Such guardrails are built into all mainstream LLMs to prevent them from being put to harmful uses, such as cyberattacks or self-harm. Google said it has since fine-tuned the countermeasure to better resist such ploys.<\/p>\n<p>Ultimately, the AI-generated malware that has surfaced to date is mostly experimental, and the results aren\u2019t impressive. The events are worth monitoring for developments that show AI tools producing new capabilities that were previously unknown. 
For now, though, the biggest threats continue to rely predominantly on old-fashioned tactics.<\/p>\n","protected":false},"excerpt":{"rendered":"The assessments provide a strong counterargument to the exaggerated narratives being trumpeted by AI companies, many seeking new&hellip;\n","protected":false},"author":2,"featured_media":76141,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[45],"tags":[182,181,507,74],"class_list":{"0":"post-274064","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts\/274064","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/comments?post=274064"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts\/274064\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/media\/76141"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/media?parent=274064"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/categories?post=274064"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/tags?post=274064"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}