{"id":509159,"date":"2026-03-02T09:16:08","date_gmt":"2026-03-02T09:16:08","guid":{"rendered":"https:\/\/www.newsbeep.com\/ca\/509159\/"},"modified":"2026-03-02T09:16:08","modified_gmt":"2026-03-02T09:16:08","slug":"what-if-the-real-risk-of-ai-isnt-deepfakes-but-daily-whispers","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/ca\/509159\/","title":{"rendered":"What if the real risk of AI isn\u2019t deepfakes \u2014 but daily whispers?"},"content":{"rendered":"<p>Most people don\u2019t appreciate the profound threat that AI will soon pose to <a href=\"https:\/\/venturebeat.com\/technology\/like-it-or-not-ai-is-learning-how-to-influence-you\" rel=\"nofollow noopener\" target=\"_blank\">human agency<\/a>. A common refrain is that \u201cAI is just a tool,\u201d and like any tool, its benefits and dangers depend on how people use it. This is old-school thinking. AI is transitioning from tools we use to prosthetics we wear. This will create<a href=\"https:\/\/www.iiis.org\/CDs2023\/CD2023Summer\/papers\/HA408FU.pdf\" rel=\"nofollow noopener\" target=\"_blank\"> significant new threats<\/a> we\u2019re just not prepared for.<\/p>\n<p>No, I\u2019m not talking about creepy brain implants. These AI-powered prosthetics will be mainstream products we buy from Amazon or the Apple Store and marketed with friendly names like \u201cassistants,\u201d \u201ccoaches,\u201d \u201cco-pilots\u201d and \u201ctutors.\u201d They will provide real value in our lives \u2014 so much so that we will feel disadvantaged if others are wearing them and we are not. This will create rapid pressure for mass adoption.\u00a0<\/p>\n<p>The prosthetic devices I\u2019m referring to are \u201c<a href=\"https:\/\/arxiv.org\/abs\/2601.18802\" rel=\"nofollow noopener\" target=\"_blank\">AI-powered wearables<\/a>\u201d like smart glasses, pendants, pins and earbuds. 
Your wearable AI will see what you see and hear what you hear, all while tracking where you are, what you\u2019re doing, who you\u2019re with and what you are trying to achieve. Then, without you needing to say a word, these mental aids will <a href=\"https:\/\/venturebeat.com\/technology\/enter-the-whisperverse-how-ai-voice-agents-will-guide-us-through-our-days\" rel=\"nofollow noopener\" target=\"_blank\">whisper advice<\/a> into your ears or flash guidance before your eyes. <\/p>\n<p>The difference between a tool and a prosthetic may seem subtle, but the<a href=\"https:\/\/arxiv.org\/abs\/2601.18802\" rel=\"nofollow noopener\" target=\"_blank\"> implications for human agency<\/a> are profound. This is best understood through a simple analysis of input and output. A tool takes in human input and generates amplified output. A tool can make us stronger or faster, or even allow us to fly. A mental prosthetic, on the other hand, forms a feedback loop around the human, accepting input from the user (by tracking their actions and engaging them in conversation) and generating output that can<a href=\"https:\/\/www.elgaronline.com\/display\/book\/9781035336906\/chapter6.xml\" rel=\"nofollow noopener\" target=\"_blank\"> immediately influence<\/a> the user\u2019s thinking.  <\/p>\n<p><img alt=\"Human manipulation\" loading=\"lazy\" width=\"1656\" height=\"1510\" decoding=\"async\" data-nimg=\"1\" class=\"w-full object-cover\" style=\"color:transparent\"   src=\"https:\/\/venturebeat.com\/_next\/image?url=https%3A%2F%2Fimages.ctfassets.net%2Fjdtwqhzvc2n1%2F5rKGQXtEC7oiQ4HLSMtFtB%2F0cfcbcdc4886cc0b6d6230a6d2e86d11%2Fimage2.png%3Fw%3D1000%26q%3D100&amp;w=3840&amp;q=75\"\/><\/p>\n<p>This feedback loop changes everything. 
That\u2019s because body-worn AI devices will be able to monitor our behaviors and emotions and could use this data to<a href=\"https:\/\/www.intechopen.com\/online-first\/1212008\" rel=\"nofollow noopener\" target=\"_blank\"> talk us into<\/a> believing things that are untrue, buying things we don\u2019t need or adopting views we\u2019d otherwise realize are not in our best interest. This is called<a href=\"https:\/\/arxiv.org\/abs\/2306.11748\" rel=\"nofollow noopener\" target=\"_blank\"> the AI Manipulation Problem<\/a>, and we are not ready for the risks. This is an urgent issue because big tech is racing to bring these products to market.\u00a0<\/p>\n<p>Why are feedback loops so dangerous?\u00a0<\/p>\n<p>In today\u2019s world, all computing devices are used to deploy targeted influence on behalf of paying sponsors. Wearable AI products will likely continue this trend. The problem is, these devices could easily be given an \u201c<a href=\"https:\/\/arxiv.org\/pdf\/2601.18802\" rel=\"nofollow noopener\" target=\"_blank\">influence objective<\/a>\u201d and be tasked with optimizing their impact on the user, adapting their conversational tactics to overcome any resistance they detect. This transforms the concept of <a href=\"https:\/\/venturebeat.com\/technology\/agents-of-manipulation-the-real-ai-risk\" rel=\"nofollow noopener\" target=\"_blank\">targeted influence<\/a> from social media buckshot into heat-seeking missiles that skillfully navigate past your defenses. And yet, policymakers don\u2019t appreciate this risk.<\/p>\n<p>Unfortunately, most regulators still view the danger of AI in terms of its ability to rapidly generate traditional forms of influence (deepfakes, fake news, propaganda). 
Of course, these are significant threats, but they\u2019re not nearly as dangerous as the<a href=\"https:\/\/doi.org\/10.4337\/9781035336906.00012\" rel=\"nofollow noopener\" target=\"_blank\"> interactive and adaptive influence<\/a> that could soon be widely deployed through conversational agents, especially when those AI agents travel with us through our lives inside wearable devices.\u00a0\u00a0<\/p>\n<p>This is coming soon\u00a0<\/p>\n<p>Meta, Google and Apple are racing to launch wearable AI products. To protect the public, policymakers need to abandon their \u201ctool-use\u201d framing when regulating AI devices. This is difficult because the tool-use metaphor goes back 35 years to when Steve Jobs colorfully described the PC as a \u201c<a href=\"https:\/\/www.youtube.com\/watch?v=NjIhmzU0Y8Y\" rel=\"nofollow noopener\" target=\"_blank\">bicycle of the mind<\/a>.\u201d A bicycle is a powerful tool that keeps the rider firmly in control. Wearable AI will flip this metaphor on its head, making us wonder who is steering the bicycle \u2014\u00a0the human, the AI agents whispering in the human\u2019s ears, or the corporations that deployed the agents? I believe it will be a dangerous mix of all three.<\/p>\n<p>In addition, users will likely trust the <a href=\"https:\/\/dn720605.ca.archive.org\/0\/items\/cabon-dating-2020\/Cabon%20Dating%20%28scanned%20for%20academic%20use%29%202020-.pdf\" rel=\"nofollow noopener\" target=\"_blank\">AI voices in their heads<\/a> more than they should. That\u2019s because these AI agents will provide us with useful advice and information throughout our daily lives \u2014 educating us, reminding us, coaching us, informing us. The problem is, we may not be able to distinguish when the AI agent has shifted its objective from assisting us to influencing us. 
To appreciate the difference, you might watch the award-winning short film<a href=\"https:\/\/www.youtube.com\/watch?v=IsE_Pas2OQU\" rel=\"nofollow noopener\" target=\"_blank\"> Privacy Lost<\/a> (2023) about the dangers of AI-powered wearable devices. The dangers are especially acute when devices include invasive features such as facial recognition (which <a href=\"https:\/\/www.nytimes.com\/2026\/02\/13\/technology\/meta-facial-recognition-smart-glasses.html\" rel=\"nofollow noopener\" target=\"_blank\">Meta is reportedly adding<\/a> to its glasses).\u00a0<\/p>\n<p>What can we do to protect the public?\u00a0\u00a0<\/p>\n<p>First and foremost, policymakers need to realize that conversational AI enables<a href=\"https:\/\/www.iiis.org\/DOI2023\/HA408FU\/\" rel=\"nofollow noopener\" target=\"_blank\"> an entirely new form of media<\/a> that is interactive, adaptive, individualized and increasingly context-aware. This new form of media will function as \u201cactive influence,\u201d because it can adjust its tactics in real time to overcome user resistance. When deployed in wearable devices, these AI systems could be designed to manipulate our actions, sway our opinions and influence our beliefs \u2014 and do it all through<a href=\"https:\/\/www.youtube.com\/watch?v=IsE_Pas2OQU\" rel=\"nofollow noopener\" target=\"_blank\"> seemingly casual dialog<\/a>. Worse, these agents will learn over time what conversational tactics work best on each of us.<\/p>\n<p>The fact is, conversational agents<a href=\"https:\/\/ieeexplore.ieee.org\/abstract\/document\/10099167\/\" rel=\"nofollow noopener\" target=\"_blank\"> should not be allowed to form control loops<\/a> around users. If this is not regulated, AI will be able to influence us with superhuman persuasiveness. 
In addition, AI agents should be<a href=\"https:\/\/link.springer.com\/chapter\/10.1007\/978-3-031-15546-8_23\" rel=\"nofollow noopener\" target=\"_blank\"> required to inform users<\/a> whenever they transition to expressing promotional content on behalf of a third party. Without such protections, AI agents will likely become so persuasive that they will make today\u2019s targeted influence techniques look quaint.<\/p>\n<p>Louis Rosenberg is a pioneer of augmented reality and a longtime AI researcher. He earned his PhD from Stanford, was a professor at California State University, and authored several books on the dangers of AI, including Arrival Mind and Our Next Reality.\u00a0<\/p>\n","protected":false},"excerpt":{"rendered":"Most people don\u2019t appreciate the profound threat that AI will soon pose to human agency. 
A common refrain&hellip;\n","protected":false},"author":2,"featured_media":509160,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[62,276,277,49,48,61],"class_list":{"0":"post-509159","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-ca","12":"tag-canada","13":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts\/509159","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/comments?post=509159"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts\/509159\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/media\/509160"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/media?parent=509159"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/categories?post=509159"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/tags?post=509159"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}