Posted on February 14, 2026
Posted by John Scalzi

Because it feels like a good time to do it, some current thoughts on “AI” and where it, we and I are about the thing, midway through February 2026. These are thoughts in no particular order. Some of them I’ve noted before, but will note again here mostly for convenience. Here we go:
1. I don’t and won’t use “AI” in the text of any of my published work. There are several reasons for this, including the fact that “AI”-generated text is not copyrightable and I don’t want any issues of ownership clouding my work, and the simple fact that my book contracts oblige me to write everything in those books by myself, without farming it out to either ghostwriters or “AI.” But mostly, it’s because I write better than “AI” can or ever will, and I can do it with far less energy draw. I don’t need to destroy a watershed to write a novel. I can write a novel with Coke Zero and snacks. Using “AI” in my writing would create more work for me, not less, and I really have lived my life with the idea of doing the least amount of work possible.
If you’re reading a John Scalzi book, it all came out of my brain, plain and simple. Better for you! Easier for me!
2. I’m not worried about “AI” replacing me as a novelist. Sure, someone can now prompt a novel-length work out of “AI” faster than I or any other human can write a book, and yes, people are doing just that, pumping into Kindle Unlimited and other such places a vast substrate of “AI” text slop generated faster than anyone could read it. Nearly all of it will sit there, unread, until the heat death of the universe.
Now, you might say that’s because why would anyone read something that no one actually took any effort to write, and that will be maybe about 5% of the reason. The other 95% of the reason, however, will be discoverability. Are the people pumping out the wide sea of “AI” text slop planning to make the spend for anyone to find that work? What are their marketing plans other than “toss it out, see who locates it by chance”? And if there is a marketing budget, if you can generate dozens or hundreds of “AI” text slop tomes in a year, how do you choose which to highlight? And will the purveyors of such text slop acknowledge that the work they’re promoting was written by no one?
(Answer: No. No they won’t).
I am not worried about being replaced as a novelist because I already exist as a successful author, and my publishers are contractually obliged to market my novels every time they come out. This will be the case for a while, since I have a long damn contract. Readers will know when my new books are out, and they will be able to find them in bookstores, be they physical or virtual. This is a huge advantage over any “AI” text slop that might be churned out. And while I don’t want to overstate the amount of publicity/marketing traditional publishers will do for their debut or remaining mid-list authors, they will do at least some, and that visibility is an advantage that “AI” text slop won’t have. Even indie authors, who must rely on themselves instead of a publicity department to get the word out about their work, have something “AI” text slop will never have: They actually fucking care about their own work, and want other people to see it.
I do understand it’s more than mildly depressing to think that a major market difference between “AI” text slop and stuff actual people wrote is marketing, but: Welcome to capitalism! It’s not the only difference, obviously. But it is a big one. And one that is likely to persist, because:
3. People in general are burning out on “AI.” Not just in creative stuff: Microsoft recently finally admitted that no one likes its attempt to shove its “AI” Copilot into absolutely everything, whether it needs to be there or not, and is making adjustments to its businesses to reflect that. “AI” as a consumer-facing entity rarely does what it does better than the programs and apps it is replacing (see: Google’s Gemini replacing Google Assistant), and sucks up far more energy and resources. Is your electric bill higher recently? Has the cost of a computer gone up because suddenly memory prices have doubled (or more)? You have “AI” to thank for that. It’s a solution to a problem that no one actually had. There are other issues with “AI” larger than this — mostly that it’s a tool to capture capital at the expense of labor — but I’m going to leave those aside for now to focus on the public exhaustion and dissatisfaction with “AI” as a product category.
In this sort of environment, human-generated work has a competitive advantage, because people see it as more authentic and real (which it is, to the extent that “authentic” and “real” mean “a product of an actual human brain”), and more likely to have the ability to surprise and engage the people who encounter it. I don’t want to oversell this — humans are still as capable of creating lazy, uninspired junk as they ever were, and some people really do think of their entertainment as bulk purchases. Those vaguely sad people will be happy that “AI” gives them more, even if it’s of lesser quality. But I do think that, when given a choice, people will generally prefer to give their time and money to the output of an actual human making an effort, rather than to the product of a belching drain on the planet’s resources whose use primarily benefits people who are already billionaires dozens of times over. Call me optimistic.
Certainly that’s the case with me:
4. I’m supporting human artists, including as they relate to my own work. I’ve noted before that I have it as a contractual point that my book covers, translations and copyediting have to be done by humans. This is again both a practical issue (re: copyrights, quality of work, etc) and a moral one, but also, look, I like that my work pays other humans, and I want that to continue. Also, in my personal life, I’m going to pay artists for stuff. When I buy art, I’m going to buy from people who created it, not generated it out of a prompt. I’m not going to knowingly post or promote anything that is not human-created. Just as I wish to be supported by others, I am going to support other artists. There is no downside to not promoting/paying for “AI” generated work, since there was no one who created it. There is an upside to promoting and paying humans. They need to eat and pay rent.
“But what if they use AI?” In the case of the people working on my own stuff, it’s understood that the final product, the stuff that goes into my book, is the result of their own efforts. As for everything else, well, I assume most artists are pretty much like me: using “AI” for their primary line of creativity is just introducing more work, not less. Also I’m going to trust other creators; if they tell me they’re not using “AI” in their finished work then I’m going to believe them in the absence of a compelling reason not to. I don’t particularly have the time or interest in being the “AI” police. Anyway, if they’re misrepresenting their work product, that eventually gets found out. Ask a plagiarist about that.
With all that said:
5. “AI” is probably sticking around in some form. This is not an “‘AI’ Is Inevitable and Will Take Over the World” statement, since as noted above people are getting sick of it being aggressively shoved at them, and also there are indications that a) “this is the worst it will ever be” is not true of “AI,” as people actively note that recent versions of ChatGPT were worse to use than earlier versions, and b) investors are getting to the point of wanting to see an actual return on their investments, which is the cue for the economic bubble around “AI” to pop. This is going to be just great for the economy. “AI,” as the current economic and cultural phenomenon, is likely to be heading for a fall.
Once all that drama is done and we’ve sorted through the damage, the backend of “AI” and its various capabilities will still be around, either relabeled or as is, just demoted from the center of the tech universe, with fewer people making such a big deal about it, scaled down and hopefully more efficient. I understand that the “AI will probably persist” position is not a popular one in the creative circles in which I exist, and that people hope it vaporizes entirely, like NFTs and blockchains. I do have to admit I wouldn’t mind being wrong about this. But as a matter of capital investment and corporate integration, NFTs, etc. are a blip compared to what’s been invested in “AI” overall, and how deep its use has sunk into modern capitalism (more on that in a bit).
Another reason I think “AI” is likely to stick around in some form:
6. “AI” is a marketing term, not a technical one, and encompasses different technologies. The version that the creative class gets (rightly) worked up about is generative “AI,” the most well-known versions of which were trained on vast databases of work, much of which was and is copyrighted and not compensated for. This is, however, only one subset of a larger group of computational systems which are also called “AI,” because it’s a sexy term that even non-nerds have heard of before, and far less confusing than, say, “neural networks” or such. Not all “AI” is as ethically compromised as large-scale generative “AI,” and a lot of it existed and was being used non-controversially before generative “AI” blew up as the wide-scale rights disaster it turned out to be.
It’s possible that “AI” as a term is going to be forever tainted as a moral hazard, disliked by the public and seen as a promotions drag by marketing departments. If and when that happens, a lot of things currently hustled under the “AI” umbrella will be quietly removed from it, either returning to previous, non-controversial labels or given new labels entirely. Lots of “AI” will still be around, just no one will call it that, and outside of obvious generative “AI” that presents rights issues, fewer people will care.
On the matter of generative “AI,” here’s a thought:
7. There were and are ethical ways to have trained generative “AI,” but because they weren’t used, the entire field is suspect. Generative “AI” could easily have been trained solely on material in the public domain and/or on appropriately-licensed Creative Commons material, and an opt-in licensing gateway to acquire and pay for copyrighted work used in training, built and used jointly by the companies needing training data, could have happened. This was all a solvable problem! But OpenAI, Anthropic, et al decided to train first, ask forgiveness later, on the idea that it would be cheaper simply to do it first and litigate later. I’m not entirely sure this will turn out to be true, but it is possible that at this late stage, some of the companies will go under before any settlements can be achieved, which will have the same effect.
There are companies who have chosen to train their generative models with compensation; I know of music software companies that make a point of showing how the artists they worked with were both paid for creating samples and other material, and get paid royalties when work generated from those samples, etc. is made by people using the software. I think that’s fine! As long as everyone involved is happy with the arrangement, no harm, and no foul. But absent that sort of clear and unambiguous declaration of provenance and compensation regarding training data, one has to assume that any generative “AI” has used stolen work. The practice is so pervasive at this point that this has to be a foundational assumption.
And here is a complication:
8. The various processes lumped into “AI” are likely to be integrated into programs and applications that are in business and creative workflows. One, because they already were prior to “AI” being the widely-used rubric, and two, because these companies need to justify their investments somehow. Some of these systems and processes aren’t tainted by the issues of generative “AI,” but many of them are, including some that weren’t previously. When I erase a blotch in an image with Photoshop, the process may or may not use generative “AI,” and when it does, it may or may not use Adobe’s Firefly model (which Adobe maintains, questionably, is trained only on material it has licensed).
Well, don’t use Photoshop, I hear you say. Which, okay, but I have some bad news for you: Nearly every photoediting suite at this point incorporates “AI” at some point in its workflow, so it’s six of one, half a dozen of the other. And while I am a mere amateur when it comes to photos, lots of professional photographers use Adobe products in their workflow, either because they’ve been using them for years and don’t want to retrain on new software (which, again, probably has “AI” in its workflow), or they’re required to use them by their clients because they’re the “industry standard.” A program being the “industry standard” is one reason I use Microsoft Word, and now that program is riddled with “AI.” At a certain point, if you are using 21st century computer-based tools, you are using “AI” of some sort, whether you want to or not. Some of it you can turn off or opt out of. Some of it you can’t.
(Let’s not even talk about my Google Pixel phone, which is now so entirely festooned with “AI” that it’s probably best to think of it as an “AI” computer with a phone app, rather than the other way around.)
This is why earlier in this piece, I talk about the “final product” being “AI”-free — because it’s almost impossible at this point to avoid “AI” in computer-based tools, even if one wants to. Also, given the fact that “AI” is a marketing rather than a technical term, what the definition of “AI” is, and what is an acceptable level of use, will change from one person to another. Is Word’s spellcheck “AI”? Is Photoshop’s Spot Healing brush tool? Is Logic Pro’s session drummer? At what point does a creative tool become inimical to creation?
(On a much larger industrial scale, this will be an extremely interesting question when it comes to animation, CGI and VFX. “AI” is already here in video games with DLSS, which upscales and adds frames to games; if similar tech isn’t already being used for inbetweening in animation, it’s probably not going to be long until it is.)
Again, I’m not interested in being, nor have the time to be, the “AI” police. I choose to focus on the final product and the human element in that, because that is honestly the only part of the process that I, and most people, can see. I’m certainly not going to penalize a creative person because Adobe or Microsoft or whomever incorporated “AI” into a tool they need to use in order to do their work. I would be living in a glass house if I threw that particular stone.
9. It’s all right to be informed about the state of the art when it comes to “AI.” Do I use “AI” in my text? No. Do I think it makes sense to have an understanding of where “AI” is at, to know how the companies who make it create a business case for it, and to keep tabs on how it’s actually being used in the real world? Yes. So I check out the latest iterations of ChatGPT/Claude/Gemini/Copilot, etc. (I typically steer clear of Grok, if only because I’m not on the former Twitter anymore) and the various services and capabilities they offer.
The landscape of “AI” is still changing rapidly, and if you’re still at the “lol ‘AI’ can’t draw hands” level of thinking about the tech, you’re putting yourself at a disadvantage, particularly if you’re a creative person. Know your enemy, or at least, know the tools your enemies are making. Again, I’m not worried about “AI” replacing me as a novelist. But it doesn’t have to be at that level of ability to wreak profound and even damaging changes to creative fields. We see that already.
One final, possibly heretical thought:
10. Some people are being made to use “AI” as a condition of their jobs. Maybe don’t give them too much shit for it. I know at least a couple of people who were recently hired and told they needed to be fluent in computer systems that had “AI” as part of their workflow. Did they want or need to use those systems to do the actual job they were hired for? Almost certainly not! Did that matter? Nope! Was it okay that their need to eat and pay rent outweighed their ethical annoyance/revulsion with “AI” and the fact it was adding more work, not less, onto their plate? I mean (waves at the world), you tell me. Personally speaking, I’m not the one to tell a friend that they and their kid and cat should live in a Toyota parked at a Wal-Mart rather than accept a corporate directive made by a mid-level manager with more jargon in their brain than good sense. I may be a softie.
Be that as it may, to the extent you can avoid “AI,” do so, especially if you have a creative job, where it’s almost always just going to get in your way. Your fans, the ones that exist and the ones you have yet to make, will appreciate that what they get from you is from you. That’s what people mostly want from art: Entertainment and connection. You will always be able to do that better than “AI.” There is no statistical model that can create what is uniquely you.
— JS