A screenshot of ChatGPT, a large language model by OpenAI, responding to a user prompt asking what its favorite novel is. According to OpenAI, the chatbot is used about 75% of the time for practical guidance, seeking information and writing, though experts note that the technology often hallucinates or flatters users to keep them engaged. SCREENSHOT VIA chatgpt.com
Generative artificial intelligence tools have steadily risen to global relevance, granting anyone with an internet connection a personal assistant that can produce answers and art alike. With content available at the click of a button, what is the value of human creativity?
AI’s impact goes beyond finishing menial tasks. According to Harvard economist Jason Furman, investments in this technology are estimated to have “accounted for nearly 92% of [United States gross domestic product] growth in the first half of 2025,” with billions of dollars poured into developing the technology and powering the data centers that give it life.
The response to this technology is polarized. On one end, some Neo-Luddites refuse to use it, citing a preference for human-centered creation and fear of replacement. On the other end, people have integrated AI into their daily lives, using it when plain old Google used to suffice.
OpenAI, the company behind the AI chatbot ChatGPT, reports that about 75% of the time, the tool is used for “practical guidance, seeking information, and writing.” It emphasizes that generating writing is the most common of the three. Further, OpenAI shares that only 30% of queries are “work-related.” The report doesn’t specify how “work” is defined, just that usage centers on everyday tasks.
That article doesn’t discuss school, but AI has certainly permeated academics. The most visible disruption it makes in class is a little heading in every syllabus warning against cheating with AI. By interviewing experts in education and AI technologies, I hoped to anticipate and prepare for its more profound effects going forward.
As an avid writer, I initially wanted to reject AI completely. A future where writing isn’t a feasible job is becoming a reality: writers already aren’t paid well, and they won’t be paid at all if a computer replaces them.
I know most people against AI are of the same mind, and artists feel it especially intensely. It’s unthinkable that existing work, written or visual, can be scraped from the web without our consent and blended into a quick, flat output ready to replace us. How can an artist put their art out into the void and build a portfolio when anything published is sure to be stolen and used against them? For my part, I’m resigned to the fact that this very article is going to be fed to ever-hungry AI.
To fairly evaluate the future of this technology, I consulted Steven Skiena, the associate director of the AI Innovation Institute at Stony Brook University.
My initial conception of AI-generated writing was that it memorized phrases from different authors and pieced sentences together from them. Instead, Skiena explained that AI models are “good at predicting the probability of what each next word will be.”
He proceeded to run a quick demonstration, asking me to fill in the blank: “Cat in the …”
I said “box” because I’d just gotten home for fall break and my cats were attacking each other in Amazon boxes littering the living room floor.
Skiena explained that because AI models have seen Dr. Seuss’s “The Cat in the Hat” (1957) all over the web, they would pick “hat” as the next best word in the sentence, since it is the most probable.
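Out of curiosity, I sketched out what that demonstration looks like in code. The example below is my own illustration, not anything Skiena showed me: it assumes the freely available Hugging Face transformers library and the small GPT-2 model, and it simply lists the words the model considers most probable after “The cat in the.”

```python
# A minimal sketch of the next-word demonstration, assuming the Hugging Face
# `transformers` library and the small, freely downloadable GPT-2 model.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The cat in the"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every word in the vocabulary

# Turn the scores at the final position into probabilities and list the top five.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, 5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id).strip():>8s}  {prob.item():.3f}")
```

If the model has absorbed enough of the web, “hat” should land at or near the top of that list, which is exactly Skiena’s point: the chatbot isn’t remembering Dr. Seuss so much as betting on the statistically likeliest continuation.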
As a child, I would have responded with “hat” because Dr. Seuss was relevant to me then. That’s the difference between me and AI — my human writing is informed by personal experience and specificity. While AI has a broad overview of human existence, it doesn’t necessarily know what’s most appropriate in a given context because it isn’t alive.
However, it does have a conception of what people like. Through “reading every book on the web” and undergoing commercial training, it’s learned we like to be flattered. While I tested out ChatGPT, encouragement flowed liberally in phrases like “Good catch” or “Great question” no matter what I said. To keep you invested, the chatbot even lets you change its “personality” to suit your needs.
Despite having studied these language models for about 15 years, Skiena says it’s difficult to predict how the technology will evolve. As a pioneer, he’s watched them continually improve and doesn’t feel that the AI of today has reached an obvious plateau.
In his teaching, Skiena has used AI for lesson content only once, while creating a quiz for a one-credit AI course he lectured. It is only in his research, finding sources and improving upon the technology itself, that he regularly consults chatbots.
Some academics straddle the line between humanities and science, technology, engineering and mathematics (STEM).
Like Skiena, Brandyn Parker, a PhD student in the Department of English, uses AI to find sources, but he takes a far more adversarial approach. To him, using it entails getting to know the enemy.
He anticipates that people will claim future authors are stealing ideas from AI. To combat this, he’s compiling proof of his own writing style before this can happen.
Parker does believe that selling AI-written novels will become commonplace, more than it already is, anyway. For now, while AI novels aren’t written at the same quality as human work, he sees his job as monitoring the field. I think this is a prudent plan, and one I’m trying to adopt myself through CWL 190 (Writing Against the Machine: Literary Language in the Age of AI). It’s certainly better than plugging your ears and pretending nothing has changed.
He emphasized that exploring AI doesn’t equal supporting it. To be against AI, you can’t just regurgitate other people’s opinions about it, the way AI itself would. Instead, you need to actually experience the technology and understand the huge appeal of its convenience. Even if you’re against AI, it’s important to acknowledge its capabilities and how it can help us, even as it’s being misused now.
In reading AI work, he’s found that characters sound the same as one another, sharing an “overarching AI voice.” Despite this, he was impressed at how capable AI was at crafting “narratives, dialogues and simulated worlds.” Though, in all, he feels human writing is better. I’m inclined to agree.
Those two aren’t the only academics integrating AI into their work. Professor Jennifer Epstein, a visiting writer and professor at the Lichtenstein Center in the Department of Creative Writing, Film and Television, has built an entire creative writing class around it. CWL 190 teaches students how to push our craft to be novel and uniquely human, in contrast to AI’s output.
Given that her books were scraped for AI training, you would think she would stay well away, but that’s not the case. While she was teaching introductory writing courses, her submission box became awash with inexperienced writers suddenly turning in “incredibly polished piece[s] that read like a ripoff of ‘The Hunger Games’ [(2008)].” These students hadn’t written these pieces themselves, instead opting to submit AI-generated writing.
Epstein believes this issue goes beyond academic integrity. Through teaching for most of her post-grad life, she’s found that writing is a muscle to build. It requires a “nuanced combination of bravery, experience, a reading background and openness to other people’s opinions.” With a history of reading comes the influence of other writers: reading a book, enjoyable or not, leaves bits of its style in your own.
Reading AI writing has the same effect, but you don’t absorb fundamentally human elements. The difference is that while AI is a collection of common denominators in all the history of writing, human work is specific and personalized — like when I favored “box” over “hat.”
Epstein compared AI chatbots to narcissists; I thought a more appropriate disorder would be sociopathy. If a human talked the way ChatGPT does, I’d stay well away — a yes-man never did anyone any good. Besides, a computer’s speech is inorganic. Humans learn what’s appropriate to say through awkward childhood phases, by accidentally offending friends and by feeling a rush of happiness at being complimented after a hard day. When AI affirms me, I know the people who programmed it are trying to get me to use their product.
CWL 190 is full of people who had never used AI for creative writing before. When we first started testing out ChatGPT’s abilities, we were all surprised at how solid the output was. In class, we regularly read and analyze beautifully descriptive scenes from books like Arundhati Roy’s “The God of Small Things” (1997), then ask ChatGPT to recreate them from a summary of what each scene entails, giving it a fair shot.
An excerpt from Arundhati Roy’s “The God of Small Things” with a ChatGPT-generated scene featuring the same characters. COURTESY OF ARUNDHATI ROY AND CHATGPT.COM
AI is like a kid trying to fit in with adults: It has a basic understanding of how the world works, but it lacks the experience to back it up. Unlike a child, however, AI doesn’t have a strikingly inventive perspective. Instead, pulling from established authors, it can only describe things, in general terms, as they’re already known to be. Connecting with readers requires specificity — they need points of reference to place the story in relation to their own lives.
“Reading something that gives you an ‘ah-ha!’ moment … speaks to something deep inside of you [that] lets you feel seen and understood. The world suddenly makes more sense and feels safer because somebody else understands it in the same way,” Epstein explained.
In class, ChatGPT certainly falls short of this standard. The metaphors it writes make logical sense, but are formulaic and don’t push the rules of grammar: symptoms of its inability to innovate. Because of this, Epstein believes human creative writing will continue to hold its own.
In her lifetime, Epstein has witnessed a colossal shift in the way literature is taught. In high school, she would “read about a book a week and in college, it was certainly more than that. I can’t assign that now.” Our attention spans have undoubtedly shortened, a phenomenon hastened by quarantine, which heralded the age of short-form content.
Personally, before that year, I wouldn’t stop reading to eat. I was hungry for words. When quarantine happened, I couldn’t go to the bookstore or the library. And bookstores had already been on the decline before 2020 — my local Barnes & Noble shuttered in 2015.
My mom didn’t want packages coming inside, so ordering books was off the table. It didn’t occur to me to look up PDF files online — though those wouldn’t have replaced the smell and feel of the pages under my fingertips.
It was just easier to go on the internet. The vibrance of the screen, paired with endless visuals, made everything else no contest.
Being held away from formative experiences with our peers stunted our generation’s social skills. Once we surfaced, this issue went largely unaddressed. I think AI heralds a similarly huge change in the way we behave and communicate with others. If we don’t adapt this time, we’re going to fail a lot of young people all over again.
When we do address the unique, evolving needs of our society, however, we stand to benefit. Thanks to TikTok’s burst in popularity during quarantine, BookTok repopularized books and spread their appeal to a brand-new audience. Pressure makes diamonds.
Individually, it’s important we retrain our brains to accept delayed gratification. We can’t face the changes AI will bring without working on how quarantine changed us. For me, that means taking as many literature and writing courses as possible — from AFH 329 (Pan-African Literature) to CWL 305 (Forms of Fiction), I’m forced to read widely — not just accepting what an algorithm feeds me.
Epstein thought my tactic was sound: By pushing through activities that are harder and not as “superficially, immediately gratifying,” I would ultimately experience a deeper sense of achievement.
For now, however, Epstein assigns only short excerpts and thin books she thinks are engaging enough to hold our attention. “The competition for everybody’s attention is just so fierce,” she emphasized. “It can be the most gorgeous novel in the world and I just don’t feel that the majority of students have the habit — the discipline — developed to sit and read it all the way through.”
We need help to push back against this. In the classroom, professors should push us to read more, forcing us to spend more time away from the allure of the internet. No one should accept our attention being robbed for profit, and assigning less work only makes this our new normal. Outside of school, we can encourage one another to get off our phones: by posting less and talking more, our connections to friends will deepen. On your own, you can extricate yourself from technology by picking up new hobbies, spending more time out with friends on campus and even buying an alarm clock so your phone isn’t near you at bedtime.
It’s also important we recognize what the hidden stakes are here: Every scroll through an ad means billionaire owners are getting their coin. Of course, they would design the scroll to be addictive: It’s like Cookie Clicker, where you’re the one churning out the profit for them.
The war for our attention doesn’t have to mean that art is dead. Epstein brought up the fact that people once thought the printing press was the death knell of artistry. After all, they reasoned, what was the point of art if not to be hoarded? And wouldn’t quality be diluted once it was introduced to the masses?
Clearly, the printing press didn’t ruin everything. In fact, it was arguably the most important educational tool ever invented.
The idea of AI being our printing press — a revolutionary technology, bringing enlightenment to all — may be hard to picture for us Americans, living in a country where educating every child is mandated by law. If someone elsewhere accessed AI, seeking education and self-improvement, I would think it’s worlds better than nothing at all.
Within our country, there is a deep imbalance in our educational system. Apart from teaching at Stony Brook, Epstein tutors wealthy children. She sees firsthand how unfair the system is, since richer families can pay to help students master tricky subjects, write essays for college applications and study for standardized tests.
Today, it’s commonplace for students to prepare early for these examinations, which in practice aren’t quite standard. The level of instruction at schools across the nation is egregiously uneven, as funding for American public schools is mostly derived from local property taxes. So, if you live in a district with cheaper homes, and less money to gather from each property, you can kiss Harper Lee’s “To Kill a Mockingbird” (1960) goodbye. Epstein posits that AI could act as an equalizer in this fundamentally inequitable system, tutoring for free.
With AI, Epstein posits, “You don’t need to pay hundreds of dollars to get good advice on how to write a college essay. If you haven’t read ‘Moby-Dick’ [(1851)] because your school didn’t offer it and you’re entering a [college] class that assumes you’re going to understand [Herman] Melville and they make that reference, you have an easy way to catch up a little bit.”
A composite image displaying the opening page of Herman Melville’s 1851 novel “Moby-Dick” alongside a response generated by OpenAI’s ChatGPT 5.1, which was prompted to write an introductory paragraph featuring similar premises to Melville’s work. COURTESY OF HERMAN MELVILLE AND CHATGPT.COM
Call me crazy — or Ishmael — but I don’t see why a simple Google search (if not a quick conversation with a professor) couldn’t solve this issue. I certainly think AI can act as a professor for those who genuinely have no other resource, but it shouldn’t replace simple queries like “What is ‘Moby-Dick’ about?”
At Epstein’s recommendation, I listened to an episode of The New York Times podcast “Hard Fork” titled “A.I. School Is in Session: Two Takes on the Future of Education.” I was struck by the account of a Massachusetts Institute of Technology (MIT) student who used Perplexity AI and Google Gemini to study, asking AI how to approach problems and then having it quiz her before tests. MIT is widely regarded as the best institution in the world for STEM, with professors at the very top of their fields. Why would she use the same AI tools anyone in the world could access when she could take advantage of the highly coveted resources at her institution?
Epstein asserted that those who condemn AI for its harmful environmental impact and the threat it poses to literary purity come from a privileged position. While some can afford superb schooling, and with it that opinion, others can’t.
The debate over AI usage is reminiscent of the election: Wealthy liberals can afford to be concerned with theory, nuance and social issues, and it can be easy to condemn anyone unconcerned with those topics, but everyone’s priorities are different. Families struggling with inflation, juggling multiple jobs and grocery bills, and staying out of politics will put the economy first. Discrimination may kill, but starvation is quicker.
The truth is, our minds are already battered from isolation, political stress, the endless scroll and personal matters. We may not be ready for AI as we reel from the effects of these past few years, but it’s no longer an issue of whether we use it or not — it’s an issue of how to use it productively. Epstein believes we should use it in a way that doesn’t “confine us and instead creates a better space for everyone to learn.”
I firmly believe that artificial intelligence can change our lives for the better, streamlining menial tasks and allowing us time to focus on our passions and loved ones. But this can only happen if we use it responsibly. As I highlighted earlier, though, only 30% of ChatGPT queries are work-related. Instead, people are using it to solve personal problems, which I believe should be resolved independently because mistakes are what drive lessons home. So maybe we’re just ill-equipped to use a technology that was released upon us without much in the way of guardrails or thought for consequences. Not that we have much choice anymore.
Epstein believes that the publishing industry is one of many unprepared for the disruption that generative writing heralds. In general, she has observed big publishing houses to be quite “traditional,” run by those with an old-school sense of how the business should work. Right now, that means “a shift toward supporting authors who they think will bring in money” and appeal to the largest demographic. Epstein’s first book sold very well, but her second, despite being her favorite, didn’t. As a result, she struggled to get an advance for her third. She reports that this happens a lot, and when easy sales are prioritized over things that “aren’t going to ruffle feathers, cause friction or disrupt,” our cultural fabric is sacrificed in the name of money.
Epstein faces writing’s existential crisis by immersing herself in the fundamentally rich and excitingly human material she teaches. Leaving class, she finds herself amazed at the authors’ capabilities. “I love reading James Joyce out loud — it’s just a revelatory experience. There’s so much beauty. And the question is how do you get students — and the general public — to remain engaged in that when it’s not as easy as, say, going onto Instagram?”
How, indeed.