{"id":475033,"date":"2026-02-14T19:17:22","date_gmt":"2026-02-14T19:17:22","guid":{"rendered":"https:\/\/www.newsbeep.com\/ca\/475033\/"},"modified":"2026-02-14T19:17:22","modified_gmt":"2026-02-14T19:17:22","slug":"we-can-choose-not-to-let-ai-destroy-us","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/ca\/475033\/","title":{"rendered":"We Can Choose Not to Let AI Destroy Us"},"content":{"rendered":"<p>Going to be a short newsletter today because Sarah and I did a super-sized Secret Podcast this morning. You\u2019re going to love it. Instant classic this week.<\/p>\n<p>Also: Next week is going to be a weird Triad schedule because I\u2019m not sure when I\u2019ll find windows to write while heading to Minneapolis for the live shows. (It looks like just a few tickets are still available for <a href=\"https:\/\/www.thebulwark.com\/p\/bulwark-events\" rel=\"nofollow noopener\" target=\"_blank\">the February 18 event<\/a>.)<\/p>\n<p><a target=\"_blank\" href=\"https:\/\/substackcdn.com\/image\/fetch\/$s_!GN-v!,f_auto,q_auto:good,fl_progressive:steep\/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F047826e4-7d3c-4ebb-b5e9-d992f56361d5_6500x4333.jpeg\" data-component-name=\"Image2ToDOM\" rel=\"nofollow noopener\" class=\"image-link image2 is-viewable-img can-restack\"><img decoding=\"async\" src=\"https:\/\/www.newsbeep.com\/ca\/wp-content\/uploads\/2026\/02\/https:\/\/substack-post-media.s3.amazonaws.com\/public\/images\/047826e4-7d3c-4ebb-b5e9-d992f56361d5_6500.jpeg\" width=\"1456\" height=\"971\" 
data-attrs=\"{&quot;src&quot;:&quot;https:\/\/substack-post-media.s3.amazonaws.com\/public\/images\/047826e4-7d3c-4ebb-b5e9-d992f56361d5_6500x4333.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:19298607,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image\/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https:\/\/www.thebulwark.com\/i\/187756574?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F047826e4-7d3c-4ebb-b5e9-d992f56361d5_6500x4333.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}\" alt=\"\"   fetchpriority=\"high\" class=\"sizing-normal\"\/><\/a>(Shutterstock)<\/p>\n<p>This week a tech guy named Matt Shumer wrote <a href=\"https:\/\/shumer.dev\/something-big-is-happening\" rel=\"nofollow noopener\" target=\"_blank\">a big, apocalyptic warning<\/a> about the chaos AI is about to unleash. <\/p>\n<p>His essay is worth your time, so I\u2019d encourage you to read it in full. But the basic summary is:<\/p>\n<p>AI has experienced a step-change in quality in recent months.<\/p>\n<p>The pace of AI improvement, from model to model, has sped up noticeably.<\/p>\n<p>AI models are now useful enough that they help build their successors.<\/p>\n<p>There\u2019s a lot more in it. Again, <a href=\"https:\/\/shumer.dev\/something-big-is-happening\" rel=\"nofollow noopener\" target=\"_blank\">read the whole thing<\/a>. What worries me\u2014what I want to talk about today\u2014is the problem of speed. <\/p>\n<p>If I could do one thing to change American education it would be to focus on ecology early and often. 
That\u2019s because humans don\u2019t think enough about systems and the easiest way to introduce <a href=\"https:\/\/www.thebulwark.com\/p\/democracy-simple-complex-systems\" rel=\"nofollow noopener\" target=\"_blank\">the concept of a system<\/a> is to talk about local environments.<\/p>\n<p>Get kids thinking about how an ecosystem works and they can learn how a financial market, or an industry, or a network functions. It helps them understand stable states, and systemic shocks, and evolutionary change. There\u2019s a lot to learn.<\/p>\n<p>One of the big lessons of ecology is that <a href=\"https:\/\/www.thebulwark.com\/p\/failure-and-thanksgiving\" rel=\"nofollow noopener\" target=\"_blank\">complex systems are tremendously resilient and adaptable<\/a> if the change comes slowly enough. Complex systems are not vulnerable to change so much as they are vulnerable to shocks\u2014sudden, rapid change.<\/p>\n<p>That\u2019s what worries me most about AI.<\/p>\n<p>In the early days of ChatGPT, people were worried about the robot apocalypse. The big fear of the moment is white-collar job displacement, especially at the entry level. What happens when AI can do everything a paralegal, or a research assistant, or a data analyst does, and cheaper? 
What happens when AI can do journalism, coding, graphic design, and anything you might have hired McKinsey to do?<\/p>\n<p>A lot of white-collar workers may be out of a job.<\/p>\n<p>That wouldn\u2019t worry me if it happened over the course of twenty years. Because the market would adapt. New industries would emerge; new pathways would be established. The system would find a <a href=\"https:\/\/en.wikipedia.org\/wiki\/Pareto_efficiency\" rel=\"nofollow noopener\" target=\"_blank\">Pareto optimal<\/a> state.<\/p>\n<p>But what if the pace of adoption is much faster? What if the AI-induced shifts happen over a 5- or 10-year timeline?<\/p>\n<p>It\u2019s the second-order effects that scare me. <\/p>\n<p>Let\u2019s say you own Acme Widgets and you have a stable business. You discover that you can keep productivity constant, but cut costs by employing AI to do the work of 10 percent of your workforce. So you cut those workers. You\u2019re now making more money. Good for you.<\/p>\n<p>But workers are also consumers. And if many other companies are also finding productivity gains by replacing their workers with AI, then suddenly there are going to be a lot of workers without jobs\u2014which means a lot of consumers without paychecks.<\/p>\n<p>Which means a lot of crashing demand for goods and services, across the board.<\/p>\n<p>I suspect that there\u2019s a sliding scale for AI adoption where, if the number of displaced workers is low enough, then the value of productivity gains outbalances the value of lost consumption from unemployed workers. But as you slide up the scale to more and more workers being displaced, that balance shifts. There must be a point at which the destruction of jobs is actually a net harm to the macroeconomy\u2014because the zeroing out of consumer demand far outweighs productivity gains. <\/p>\n<p>Of course, what I\u2019m describing here is a classic shock. 
If either the scale of the change is small enough or the timeline in which the change is introduced is long enough, then the system will be able to manage it. Not painlessly. Not perfectly. But we\u2019ll all muddle through to some new equilibrium.<\/p>\n<p>What worries me is that if the pace of AI development is accelerating, then we\u2019re both (a) increasing the scale of the coming change and (b) shrinking the timeline on which it will arrive.<\/p>\n<p>Which is a recipe for overwhelming the system. And when systems\u2014even complex systems\u2014become overwhelmed, they are vulnerable to collapse.<\/p>\n<p>I don\u2019t know what the answer is here. Maybe AI will be less impactful than people like Matt Shumer think. Or maybe it will develop on a time horizon that is manageable.<\/p>\n<p>But if it\u2019s neither of those things? If it\u2019s as disruptive as people expect and it materializes faster? Then what?<\/p>\n<p>One option would be artificial controls.<\/p>\n<p>Technologists like to say that genies cannot be put back into bottles, but that is not exactly true. A technology can\u2019t be unlearned, but it can be regulated. It is possible for our society to choose to limit the application of AIs, and to enforce that limitation by law.<\/p>\n<p>Maybe that\u2019s a bad idea. Maybe it won\u2019t be necessary. But it is an option. 
Just as we have rules for how labor works, or salaries are paid, or taxes are levied, we can create rules that govern how industries may use AI. <\/p>\n<p>We do not have to walk into a dystopian future just because OpenAI builds it. <\/p>\n<p>We have agency. It is possible to use society\u2019s power\u2014the consent of the governed\u2014to establish laws that mandate the use of certain human labor and prohibit the use of certain machine labor. This is no different in principle from how regulations and laws govern the use of pesticides, or the genetic manipulation of crops, or the use of chemicals on livestock.<\/p>\n<p>We get to decide how technology is used, or whether it is used at all.<\/p>\n<p>This seems like something the next Democrat who wants to be president should think about. \ud83e\udd37\u200d\u2642\ufe0f <\/p>\n<p>Anthropic\u2019s Claude is my AI of choice. The New Yorker has a profile of it.<\/p>\n<p>A large language model is nothing more than a monumental pile of small numbers. It converts words into numbers, runs those numbers through a numerical pinball game, and turns the resulting numbers back into words. Similar piles are part of the furniture of everyday life. Meteorologists use them to predict the weather. Epidemiologists use them to predict the paths of diseases. Among regular people, they do not usually inspire intense feelings. But when these A.I. 
systems began to predict the path of a sentence\u2014that is, to talk\u2014the reaction was widespread delirium. As a cognitive scientist wrote recently, \u201cFor hurricanes or pandemics, this is as rigorous as science gets; for sequences of words, everyone seems to lose their mind.\u201d<\/p>\n<p>It\u2019s hard to blame them. Language is, or rather was, our special thing. It separated us from the beasts. We weren\u2019t prepared for the arrival of talking machines. Ellie Pavlick, a computer scientist at Brown, has drawn up a taxonomy of our most common responses. There are the \u201cfanboys,\u201d who man the hype wires. They believe that large language models are intelligent, maybe even conscious, and prophesy that, before long, they will become superintelligent. The venture capitalist <a href=\"https:\/\/www.newyorker.com\/magazine\/2015\/05\/18\/tomorrows-advance-man\" rel=\"nofollow noopener\" target=\"_blank\">Marc Andreessen<\/a> has described A.I. as \u201cour alchemy, our Philosopher\u2019s Stone\u2014we are literally making sand think.\u201d The fanboys\u2019 deflationary counterparts are the \u201ccurmudgeons,\u201d who claim that there\u2019s no there there, and that only a blockhead would mistake a parlor trick for the soul of the new machine. In the recent book \u201c<a href=\"https:\/\/www.amazon.com\/AI-Fight-Techs-Create-Future\/dp\/1847928625\" rel=\"nofollow noopener\" target=\"_blank\">The AI Con<\/a>,\u201d the linguist Emily Bender and the sociologist Alex Hanna belittle L.L.M.s as \u201cmathy maths,\u201d \u201cstochastic parrots,\u201d and \u201ca racist pile of linear algebra.\u201d<\/p>\n<p>But, Pavlick writes, \u201cthere is another way to react.\u201d It is O.K., she offers, \u201cto not know.\u201d<\/p>\n<p>What Pavlick means, on the most basic level, is that large language models are black boxes. We don\u2019t really understand how they work. 
We don\u2019t know if it makes sense to call them intelligent, or if it will ever make sense to call them conscious. But she\u2019s also making a more profound point. The existence of talking machines\u2014entities that can do many of the things that only we have ever been able to do\u2014throws a lot of other things into question. We refer to our own minds as if they weren\u2019t also black boxes. We use the word \u201cintelligence\u201d as if we have a clear idea of what it means. It turns out that we don\u2019t know that, either.<\/p>\n<p>Now, with our vanity bruised, is the time for experiments. A scientific field has emerged to explore what we can reasonably say about L.L.M.s\u2014not only how they function but what they even are. New cartographers have begun to map this terrain, approaching A.I. systems with an artfulness once reserved for the study of the human mind. Their discipline, broadly speaking, is called interpretability. Its nerve center is at a \u201cfrontier lab\u201d called Anthropic.<\/p>\n<p><a href=\"https:\/\/www.newyorker.com\/magazine\/2026\/02\/16\/what-is-claude-anthropic-doesnt-know-either\" rel=\"nofollow noopener\" target=\"_blank\">Read the whole thing.<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"Going to be a short newsletter today because Sarah and I did a super-sized Secret Podcast this 
morning.&hellip;\n","protected":false},"author":2,"featured_media":475034,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[62,276,277,49,48,61],"class_list":{"0":"post-475033","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-ca","12":"tag-canada","13":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts\/475033","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/comments?post=475033"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts\/475033\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/media\/475034"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/media?parent=475033"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/categories?post=475033"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/tags?post=475033"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}