This newsletter is brought to you by Squarespace.

If you, like me, would like to have a personal or professional home on the internet, but prefer it to be somewhere that is not a “social media account” through which you are obligated to “post,” allow me to recommend: Squarespace.

What I like about my Squarespace website (you can see it here) is that it’s mine, and that I can set it up and design it how I like. (In my case, I tasked a top designer, five-year-old Gus Read, with helping me put together a visual identity for the site, and between his bold sketches in KidPix and Squarespace’s endlessly customizable, extremely intuitive design tools, I was able to run up a gorgeous website in roughly an hour, give or take the time needed to fetch Ritz crackers for my design consultant.)

Best of all, unlike a personal account on a platform, it’s easy to go back in and update, if and when the site needs to grow, or the designer has a different sensibility. At some point I can set up a storefront, add more pages, and use S.E.O. tools to improve the ability of the poor souls who Google “Max Read” to find me.

If you need a website, portfolio page, storefront, or nearly anything else, Squarespace is perfect. The only thing it can’t provide is the KidPix images my son made.

Greetings from Read Max HQ! In today’s newsletter, a look at the state of A.I. discourse.

A reminder: This newsletter, and all the other newsletters you receive from Read Max, would not exist without the generosity and support of paying subscribers. This newsletter takes work, not least because I am an astonishingly slow thinker and writer, and the money I make from subscriptions allows me to really devote time and care to what I do. If you find it at all enlightening, interesting, educational, funny (?), or just “adequately distracting,” to the extent that you would buy me a beer if you saw me at a bar, consider paying the equivalent in a subscription: $5/month or $50/year.

From time to time, in my capacity as a professional assessor of vibes, I like to dip into the A.I. discourse on sites like Bluesky and X.com and evaluate the outlook of the sector based on the metaphors being used to describe the development and significance of the various technologies bundled together under the term “A.I.” What do we think A.I. is most like these days? What is the proper conceptual context to best understand it?

The answer to these questions, lately, at least on the part of the people most interested in and enthusiastic about “A.I.,” is “the global COVID-19 pandemic.”

Nathan 🔎 @NathanpmYoung (Feb 5, 2026):

if you’re walking round SF does it feel like the early days of covid, where it’s clear what’s on everyone’s mind?

Andy Masley @AndyMasley:

I know everyone’s saying it’s feeling a lot like February 2020 but it is feeling a lot like February 2020

Derek Thompson @DKThomp (Feb 5, 2026):

for me the odds that AI is a bubble declined significantly in the last 3 weeks and the odds that we’re actually quite under-built for the necessary levels of inference/usage went significantly up in that period

basically I think AI is going to become the home screen of a […]

The Wuhan phase of the A.I. discourse cycle began bubbling up last week, driven to some extent by impressive results for OpenAI’s state-of-the-art GPT-5.2 model in evaluations and the spooky fable of the bots-only Reddit clone “Moltbook,” but more so by the increasingly widespread adoption of the A.I. start-up Anthropic’s command-line developer tool Claude Code and its newer, general-use equivalent, Claude Cowork. These are so-called “agentic” A.I. programs: Tools that can, to a fairly high degree of accuracy and consistency, and with a minimum of oversight or correction, plan and execute multi-step tasks like building an app or organizing files based on natural-language commands like those you’d type into a chatbot.

The Claudes Code and Cowork are extremely cool and impressive tools, especially to people like me with no real prior coding ability. I had one of the Claudes make me a widget to fetch assets and build posts for Read Max’s regular weekly roundups, a task it completed with astonishingly little friction. Admittedly, the widget will only save me 10 or so minutes of busywork every week, but suddenly, a whole host of accumulated but untouched wouldn’t-that-be-nice-to-have ideas for widgets and apps and pages and features has opened itself up to me.
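(For the curious, the widget itself is nothing exotic. Below is a minimal sketch, written by me, of the kind of script such a plain-English request might produce; every filename and function here is a hypothetical stand-in, not the actual code the tool generated.)

```python
# Hypothetical sketch of a roundup-builder widget like the one described
# above. All names here (roundup_links.json, build_post, etc.) are invented
# stand-ins; this is not the actual code the Claude tools produced.
import json
import urllib.request
from datetime import date

LINKS_FILE = "roundup_links.json"  # hypothetical: a JSON list of saved URLs


def fetch_title(url: str) -> str:
    """Fetch a page and pull out its <title> tag, falling back to the URL."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            html = resp.read().decode("utf-8", errors="ignore")
        start = html.find("<title>")
        end = html.find("</title>")
        if start != -1 and end != -1:
            return html[start + len("<title>"):end].strip()
    except OSError:
        pass  # unreachable page: fall back to the bare URL
    return url


def build_post(links: list[str]) -> str:
    """Assemble a plain-text roundup draft from a list of URLs."""
    lines = [f"Read Max roundup, {date.today():%B %d, %Y}", ""]
    for url in links:
        lines.append(f"* {fetch_title(url)}\n  {url}")
    return "\n".join(lines)


if __name__ == "__main__":
    with open(LINKS_FILE) as f:
        links = json.load(f)
    print(build_post(links))
```

(The point is less the code, which any working programmer could dash off in an afternoon, than the fact that a conversational request now reliably produces this sort of small, bespoke utility with essentially no friction.)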

Put another way, these are the first L.L.M. apps to capture broad attention and interest whose most obvious use is not “producing more slop for the platforms.” For a few years now, the conventional wisdom among savvy pundits has been that A.I. would revolutionize work, but on the evidence it mostly seemed to be revolutionizing spam and screen-augmented psychosis.

Now, the revolution is back on. In the visions of pundits like Derek Thompson, “AI is going to become the home screen of a ludicrously high percentage of white collar workers in the next two years and parallel agents will be deployed in the battlefield of knowledge work at downright Soviet levels.” (Anthropic, which is particularly focused on enterprise applications, has been a distinct beneficiary of this energy: This week, we were treated to a long and fascinating New Yorker profile of the company, as well as a Times interview with its co-founder, Dario Amodei.)

But the let’s-call-it-optimistic feeling gathering over the past six months or so, that A.I. might finally be finding a productive and practical form and that the “slop era” of L.L.M. development might be waning, has been matched by a creeping sense of dread. What is cool and impressive to me is, to a certain breed of programmer, a kind of existential dilemma, and, to a certain kind of boss, an obvious opportunity: If we have a bot that can program for us, why do we need to employ programmers?

The paranoid sense that the bottom is about to fall out of employment in the software sector has been cultivated for a while now in the hothouse of A.I. Twitter, and extrapolated out to white-collar work in general: We are all to become casualties on the Battlefield of Knowledge Work, victims of suspicious friendly fire, our deaths covered up by the Knowledge Work Pentagon.

Add to that a short historical memory, and more than a little repetition compulsion, and you begin to understand the attraction of the pandemic metaphor, which reached (let’s hope) its peak on Tuesday with the publication on X.com of a viral essay called “Something Big Is Happening,” by an A.I. entrepreneur named Matt Shumer, which begins:

Think back to February 2020.

If you were paying close attention, you might have noticed a few people talking about a virus spreading overseas. But most of us weren’t paying close attention. The stock market was doing great, your kids were in school, you were going to restaurants and shaking hands and planning trips. If someone told you they were stockpiling toilet paper you would have thought they’d been spending too much time on a weird corner of the internet. Then, over the course of about three weeks, the entire world changed. Your office closed, your kids came home, and life rearranged itself into something you wouldn’t have believed if you’d described it to yourself a month earlier.

I think we’re in the “this seems overblown” phase of something much, much bigger than Covid. […]

[N]othing that can be done on a computer is safe in the medium term. If your job happens on a screen (if the core of what you do is reading, writing, analyzing, deciding, communicating through a keyboard) then AI is coming for significant parts of it. The timeline isn’t “someday.” It’s already started.

I have to confess I find myself taken aback by the popularity of Shumer’s essay–75 million views, according to X.com’s statistics–which contains almost nothing you could not have read at some point in the past year in a thousand identical LinkedIn and Medium posts, all of them designed to one-shot bosses. (The last time Shumer got this much attention was when he made fraudulent claims about an L.L.M. he’d trained.) Was it the novelty of a Twitter essay? Did Elon’s algorithms over-promote it? Was it the endorsement of Shazam star Zachary Levi? Did the largely A.I.-generated prose actually … resonate with people?

In the end I suspect it was just a right-place, right-time kind of a thing: The mood on X.com needed a scary essay about A.I., and here one was for the taking. As John Herrman puts it:

[…] it was written and passed along as a necessary, urgent, and awaited work of translation from one world — where, to put it mildly, people are pretty keyed up — to another. To that end, it effectively distilled the multiple crazy-making vibes of the AI community into something potent, portable, and ready for external consumption: the collective episodes of manic acceleration and excitement, which dissipate but also gradually accumulate; the open despair and constant invocations of inevitability by nearby workers; the mutual surveillance for signals and clues about big breakthroughs; and, of course, the legions of trailing hustlers and productivity gurus. This last category is represented at the end of 26-year-old Shumer’s post by an unsatisfying litany of advice: “Lean into what’s hardest to replace”; “Build the habit of adapting”; because while this all might sound very disruptive, your “dreams just got a lot closer.”

I have neither the interest nor the ability to address the Shumer essay in any kind of substantive way, except to say that you should not make any kind of changes to your life, career, business, or finances based on something you read in an essay posted to X.com, the Everything App. What I am more interested in is where we are in the A.I. hype cycle and discourse wars.

Since the launch of ChatGPT in 2022, we’ve been in the midst of a long macro hype cycle in which a number of smaller hype epicycles have already played out. Here’s something I wrote in March 2025 trying to outline the first of these epicycles:

Since the release of ChatGPT in 2022, A.I. discourse has gone through at least two distinct cycles, at least in terms of how it’s been talked about and understood on social media, and, to a lesser extent, in the popular press. First came the hype cycle, which lasted through most of 2023, during which the loudest voices were prophesying near-term chaos and global societal transformation in the face of unstoppable artificial intelligence, and Twitter was dominated by LinkedIn-style A.I. hustle-preneur morons claiming that “AI is going to nuke the bottom third of performers in jobs done on computers — even creative ones — in the next 24 months.”

When the much-hyped total economic transformation failed to arrive in the shortest of the promised timeframes–and when too many of the highly visible, actually existing A.I. implementations turned out to be worse-than-useless dogshit–a backlash cycle emerged, and the overwhelming A.I. hype on social media was matched by a strong anti-A.I. sentiment. For many people, A.I. became symbolic of a wayward and over-powerful tech industry, and many people who admitted or encouraged the use of A.I., especially in creative fields, were subject to intense criticism.

The occasion of that post, which was published not quite a year ago, was the emergence of a “backlash to the backlash”–the early stages of a new hype epicycle driven by new L.L.M. capabilities and products like Deep Research. The 2025 epicycle reached its own valley late last summer, with the disappointing release of GPT-5; now, almost right on cue, we find ourselves at or near the peak of a new epicycle brought on by Claude Cowork–one which, no matter how well deserved, will likely bottom out later this year or early next as the practical limitations of agentic A.I.–whatever they really end up being–become clear through extended use, and as excitement gives way to habituation.

That we are in a relatively familiar place on the hype graph, however, doesn’t mean that nothing has changed. At a very basic level, time has passed. “A.I.” has advanced by basically any metric–scope, speed, ability, accuracy, reliability–and at the same time it’s been the subject of extensive practical use, deployment, and experimentation. And, maybe most importantly for the specific interests of this newsletter, the fact that we now have some experience with this software paradigm means the way we’re talking about A.I. is changing.

This time last year, at the height of the backlash-to-the-backlash, “artificial general intelligence” was on the tip of every booster’s tongue and everyone was (reportedly) asking each other “can you feel the A.G.I.?” This time around, “A.G.I.” doesn’t seem to be coming up as much, despite the fact that the current state of the technology makes a much greater claim on the idea. Instead, the operative framework for understanding where A.I. is headed is “remember the week that the N.B.A. shut down?”

The pandemic is not a cheery metaphor, precisely, but it is a practical one: An experience everyone participating in the discourse has lived through. The world on the other side of the pandemic is different–worse–but not unrecognizably so. For all that the “it’s February 2020!” claim is meant to alarm and disquiet its readers, it’s not really a messianic or eschatological fable but a social and economic parallel, and I think its popularity is, counterintuitively, an admission that even as A.I. continues to make impressive advances, the range of credible futures is narrowing.

Narrowing, I should say, from both ends: You hear less often about an existential singularity, but at the same time, fewer and fewer people dismiss A.I. as an N.F.T.-level “scam.” It’s not that there are no true believers left–check out Bluesky, or talk to Dario Amodei–or that this conventional wisdom is wholly settled. But people have been actually using L.L.M.s for a while now, and the wild fantasies and nightmares of the last few years are increasingly meeting the real world of institutions and inertia, markets and budgets, jobs and people.

It can be a bit hard to sense amidst the noise, but the shifts in mood across epicycles have acted something like a pendulum, swinging less and less wildly with every passage, slowly converging with time on a kind of conventional wisdom about L.L.M.s and the possible outcomes they engender: They are “intelligent” in some functional sense but not conscious; transformative but not apocalyptic. The truly open questions now are more limited: What kind of intelligent? What kind of transformative, and how, and when?

If you’ve been heavily involved in A.I. discourse online, I think it’s probably frustrating to realize that this kind of broad convergence on some of the big early questions of the hype cycle is on the horizon. Both the heaviest skeptics and the most enthusiastic exponents have long relied implicitly on the expectation of a kind of vindicating moment of truth: A total-wipeout market crash Emperor’s-New-Clothes moment, or, on the other hand, The Actual Singularity.

I always hated this implication–the strange, quasi-religious deadline of “A.G.I.,” or the forlorn hope for an Emperor’s New Clothes moment finally revealing A.I. for a sham. Shumer’s essay suffers from some of the same urgent thinking, but to the extent I can credit the “February 2020” metaphor, I appreciate that it moves A.I. futurology ever so slightly away from “threshold” or “take-off” or “and then the monster woke up” frameworks and toward examples of ongoing complex processes.

In that sense there are even better historical precedents than a global pandemic. So much of recent A.I. discourse is focused on productivity and the labor market, but what if its influence is most strongly felt elsewhere? What if A.I. is more like “deindustrialization,” or “the internet”: An unquestionably transformative, multi-decade process whose most clear and striking effects are social, political, and qualitative. In each of those cases, if you had kept your eye only on top-line economic statistics, you might have missed where the change was actually happening. I suspect the same might be true of A.I.

But more to the point, in those cases, as in the real world in general, there was no day of judgment or final settling of accounts, no point where everyone had to go through their Substack and X.com posts and admit where they were wrong and apologize to their betters. Instead, the world just kept moving.