{"id":26959,"date":"2025-07-27T03:32:22","date_gmt":"2025-07-27T03:32:22","guid":{"rendered":"https:\/\/www.newsbeep.com\/ca\/26959\/"},"modified":"2025-07-27T03:32:22","modified_gmt":"2025-07-27T03:32:22","slug":"predicting-the-digital-superpowers-we-could-have-by-2030","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/ca\/26959\/","title":{"rendered":"Predicting the &#8220;digital superpowers&#8221; we could have by 2030"},"content":{"rendered":"<p>\n                    Sign up for Big Think on Substack              <\/p>\n<p>\n                    The most surprising and impactful new stories delivered to your inbox every week, for free.           <\/p>\n<p>It\u2019s 2025, the year when mainstream computing will start to shift from a race to develop increasingly powerful tools to a race to develop increasingly powerful abilities. The difference between a tool and an ability is subtle yet profound. From the very first hammerstones to the latest quantum computers, tools are external artifacts that help us humans overcome our organic limitations. Humanity\u2019s ingenious tools have greatly expanded what we can accomplish as individuals, teams, and massive civilizations.<\/p>\n<p>Abilities are different. We experience abilities in the first person as self-embodied capabilities that feel internal and instantly accessible to our conscious minds. For example, language and mathematics are human technologies that we install in our brains and carry around with us throughout our lives, expanding our abilities to think, create, and collaborate. 
They are genuine <a href=\"https:\/\/bigthink.com\/the-future\/ar-will-make-us-superhumans\/\" rel=\"nofollow noopener\" target=\"_blank\">superpowers<\/a> and they feel so inherent to our existence that we rarely think of them as technologies at all.\u00a0<\/p>\n<p>\u201cAugmented mentality\u201d<\/p>\n<p>Unlike our verbal and mathematical superpowers, the next wave of superhuman abilities will require some hardware, but we will still experience them as self-embodied skills that we carry around with us throughout our lives. These abilities will emerge from the convergence of AI, augmented reality, and conversational computing. They will be unleashed by <a href=\"https:\/\/www.researchgate.net\/publication\/367183000_The_Metaverse_The_Ultimate_Tool_of_Persuasion\" rel=\"nofollow noopener\" target=\"_blank\">context-aware AI agents<\/a> that are loaded into <a href=\"https:\/\/bigthink.com\/the-future\/the-whisperverse-the-future-of-mobile-computing-is-an-ai-voice-inside-your-head\/\" rel=\"nofollow noopener\" target=\"_blank\">body-worn devices<\/a> that see what we see, hear what we hear, experience what we experience, and provide us with enhanced abilities to perceive and interpret our world. 
I refer to this new technological direction as <a href=\"https:\/\/bigthink.com\/the-future\/the-whisperverse-the-future-of-mobile-computing-is-an-ai-voice-inside-your-head\/\" rel=\"nofollow noopener\" target=\"_blank\">augmented mentality<\/a> and I predict that by 2030, a majority of us will live our lives with context-aware AI agents bringing <a href=\"https:\/\/medium.com\/predict\/augmented-reality-will-give-us-superpowers-e0a2f4a8d777\" rel=\"nofollow noopener\" target=\"_blank\">digital superpowers<\/a> into our daily experiences.\u00a0\u00a0<\/p>\n<p>The majority of these superpowers will be delivered through <a href=\"https:\/\/www.privacylost.org\/\" rel=\"nofollow noopener\" target=\"_blank\">AI-powered glasses<\/a> with cameras and microphones that act as their eyes and ears, but there will be other form factors for people who just don\u2019t like eyewear. For example, there will be earbuds that have cameras built in \u2014 a reasonable alternative if you don\u2019t have long hair.\u00a0We will whisper to these intelligent devices, and they will whisper back, giving us recommendations, guidance, <a href=\"https:\/\/patents.google.com\/patent\/US7577522B2\/en\" rel=\"nofollow noopener\" target=\"_blank\">spatial reminders<\/a>, directional cues, <a href=\"https:\/\/medium.com\/@history-of-immersive-computing\/history-of-haptic-computing-538a84bb4303\" rel=\"nofollow noopener\" target=\"_blank\">haptic nudges<\/a>, and other verbal and perceptual content that will coach us through our days like an omniscient alter ego.\u00a0\u00a0<\/p>\n<p>How will our superpowers unfold?\u00a0<\/p>\n<p>Consider this common scenario: You\u2019re walking downtown and spot a store across the street. You wonder: What time does it open?\u00a0So, you grab your phone and type (or say) the name of the store. You quickly find the hours on a website and maybe review other info about the store as well. 
That is the basic tool-use model of computing prevalent today.<\/p>\n<p>Now, let\u2019s look at how Big Tech will transition to an ability computing model:\u00a0\u00a0\u00a0\u00a0\u00a0<\/p>\n<p>Phase 1: You are wearing AI-powered glasses that can see what you see, hear what you hear, and process your surroundings through a multimodal large language model.\u00a0Now when you spot that store across the street, you simply whisper to yourself, \u201cI wonder when it opens?\u201d and a voice will instantly ring back into your ears, \u201c10:30 a.m.\u201d\u00a0<\/p>\n<p>I know this is a subtle shift from asking your phone to look up the name of a store, but it will feel profound. The reason is that the context-aware AI agent will share your personal reality. It\u2019s not merely tracking your location like GPS \u2014 it\u2019s seeing what you see, hearing what you hear, and paying attention to what you are paying attention to. This will make it feel far less like a tool and far more like an internal ability directly linked to your own first-person experiences.\u00a0<\/p>\n<p>In addition, it will not be a one-way interaction in which we ask the AI agent for assistance. The AI agent will often be proactive and will ask us questions based on the context of our world (listen to this <a href=\"https:\/\/www.youtube.com\/watch?v=XoO8X0LAC8Q\" rel=\"nofollow noopener\" target=\"_blank\">fun audio-play<\/a> for examples). And when we are questioned by the AI that whispers in our ears, we will often answer by just <a href=\"https:\/\/patents.google.com\/patent\/US7489979B2\" rel=\"nofollow noopener\" target=\"_blank\">nodding our heads to affirm<\/a> or shaking our heads to reject. It will feel so natural and seamless that we might not even consciously realize that we replied. It will feel like a deliberation within ourselves.<\/p>\n<p>Phase 2: By 2030, we will not need to whisper to the AI agents traveling with us through our lives. 
Instead, you will simply mouth the words, and the AI will know what you are saying by reading your lips and detecting activation signals from your muscles. I am confident that \u201cmouthing\u201d will be deployed because it\u2019s more private, more resilient to noisy spaces, and most importantly, it will feel more personal, internal, and self-embodied.\u00a0<\/p>\n<p>Phase 3: By 2035, you may not even need to mouth the words. That\u2019s because the AI will learn to interpret the signals in our muscles with such subtlety and precision that we will simply need to think about mouthing the words to convey our intent. You will be able to focus your attention on any item or activity in your world, think a question, and useful information will ring back from your AI glasses like an <a href=\"https:\/\/bigthink.com\/the-future\/the-whisperverse-the-future-of-mobile-computing-is-an-ai-voice-inside-your-head\/\" rel=\"nofollow noopener\" target=\"_blank\">all-knowing alter ego<\/a> in your head.<\/p>
It will even give you superhuman abilities to assess the emotions on other people\u2019s faces, predict their moods, goals, or intentions, and coach you during real-time conversations to make you more compelling, appealing, or persuasive (see a fun <a href=\"https:\/\/www.youtube.com\/watch?v=IsE_Pas2OQU\" rel=\"nofollow noopener\" target=\"_blank\">video example<\/a>).\u00a0<\/p>\n<p>As AI-powered glasses add mixed-reality features that incorporate seamless visual content into our surroundings, these devices will give us literal superpowers, like X-ray vision. For example, the hardware will have access to digital models of your home and will use them to let you peer through the walls and instantly find studs, pipes, or wiring.<\/p>\n<p>I know some people will be skeptical about my prediction of mass adoption by 2030, but I don\u2019t make these claims lightly. I have been focused on technologies that <a href=\"https:\/\/spectrum.ieee.org\/history-of-augmented-reality\" rel=\"nofollow noopener\" target=\"_blank\">augment our reality and expand human abilities<\/a> for over 30 years, and I can say without question that the mobile computing market is about to run in this direction in a very big way.\u00a0\u00a0<\/p>\n<p>Over the past 12 months, two of the most influential and innovative companies in the world, Meta and Google, revealed their goal to give us superpowers.
Meta made the first big move by adding a context-aware AI to their Ray-Ban glasses and by showing off their Orion mixed-reality prototype that adds impressive visual capabilities.\u00a0Meta is now very well-positioned to leverage their big investments in AI and XR and become a major player in the mobile computing market, and they will likely do it by selling us superpowers we can\u2019t resist.\u00a0\u00a0\u00a0<\/p>\n<p>Not to be outdone, Google recently <a href=\"https:\/\/blog.google\/products\/android\/android-xr\/\" rel=\"nofollow noopener\" target=\"_blank\">announced Android XR<\/a>, a new AI-powered operating system for <a href=\"https:\/\/link.springer.com\/chapter\/10.1007\/978-3-030-89906-6_1\" rel=\"nofollow noopener\" target=\"_blank\">augmenting our world<\/a> with seamless context-aware content. They also announced a partnership with Samsung to bring new glasses and headsets to market. With over 70% market share for mobile operating systems and an increasingly strong AI presence with Gemini, Google is well-positioned to be the leading provider of technology-enabled human superpowers within the next 18 months.<\/p>\n<p>But what about the risks?\u00a0<\/p>\n<p>To quote the famous <a href=\"https:\/\/www.marvel.com\/comics\/issue\/16926\/amazing_fantasy_1962_15\" rel=\"nofollow noopener\" target=\"_blank\">1962 Spider-Man comic<\/a>, \u201cWith great power comes great responsibility.\u201d This wisdom is literally about superpowers.
The difference is that primary responsibility will not fall on the consumers who receive these techno-powers but on the companies that provide them and the regulators that oversee them.\u00a0<\/p>\n<p>After all, when wearing AI-powered AR eyewear, each of us could find ourselves in a new reality where technologies <a href=\"https:\/\/dl.acm.org\/doi\/full\/10.1145\/3614426\" rel=\"nofollow noopener\" target=\"_blank\">controlled by third parties<\/a> can selectively alter what we see and hear, while AI-powered voices whisper in our ears with targeted advice and guidance. While the intentions might be positive, the <a href=\"https:\/\/www.researchgate.net\/publication\/368492998_The_Metaverse_and_Conversational_AI_as_a_Threat_Vector_for_Targeted_Influence\" rel=\"nofollow noopener\" target=\"_blank\">potential for abuse<\/a> is equally profound.\u00a0\u00a0\u00a0<\/p>\n<p>To avoid such dystopian outcomes, my most significant recommendation to both consumers and manufacturers is to <a href=\"https:\/\/www.researchgate.net\/publication\/362541437_Regulating_the_Metaverse_a_Blueprint_for_the_Future\" rel=\"nofollow noopener\" target=\"_blank\">adopt a subscription business model<\/a>. If the arms race for selling superpowers is driven by which company can provide the most amazing new abilities for a reasonable monthly fee, then we will all benefit. If, instead, the business model becomes a competition to monetize superpowers by delivering the most effective targeted influence into our eyes and ears, consumers could <a href=\"https:\/\/github.com\/GenerativeAIandHCI\/GenerativeAIandHCI.github.io\/blob\/main\/papers\/2023\/Conversational%20AI%20and%20the%20Threat%20to%20Epistemic%20Agency%20-%20CHI%202023%20.pdf\" rel=\"nofollow noopener\" target=\"_blank\">easily be manipulated<\/a> throughout their daily lives.\u00a0<\/p>\n<p>I know some people find the concept of AI-powered glasses invasive or even creepy and can\u2019t imagine wanting or needing these products.
I understand the sentiment, but by 2030 the superpowers that these devices give us won\u2019t feel optional. After all, not having them could put us at a social and cognitive disadvantage. It is now up to the industry and regulators to ensure that we roll out these new abilities in a manner that is not intrusive, invasive, manipulative, or dangerous. It requires careful planning and oversight.<\/p>\n","protected":false},"excerpt":{"rendered":"Sign up for Big Think on Substack The most surprising and impactful new stories delivered to your inbox&hellip;\n","protected":false},"author":2,"featured_media":26960,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[21],"tags":[49,48,285,61],"class_list":{"0":"post-26959","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-computing","8":"tag-ca","9":"tag-canada","10":"tag-computing","11":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts\/26959","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/comments?post=26959"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts\/26959\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/media\/26960"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/m
edia?parent=26959"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/categories?post=26959"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/tags?post=26959"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}