Robert Diab is a law professor at Thompson Rivers University.

Many professionals now rely on AI to write for them, often on the assumption that no one will notice. That assumption is increasingly false.

Across law, consulting and higher education, I see AI used with growing frequency to draft entire essays, court briefs and reports. The appeal is obvious: the prose is fluent, confident and easy to generate. But it rests on a second assumption as well – that what AI produces is good enough for work that depends on judgment.

It isn’t.

For a while after tools like ChatGPT first appeared, it may have been hard to tell that what you were reading was written by AI. But at this point, many attuned readers can spot it quickly, and, as a recent study shows, those who themselves use AI for writing or editing can recognize such content almost infallibly.

The reason is simple: we know the patterns. And encountering them more often is prompting us to ask a more basic question: why, in many cases, is writing that looks competent still not good enough for the task at hand?


AI writing tends to have a steady, even cadence. It leans heavily on clichéd terms drawn from its training data, like “delve,” “tapestry,” or “showcasing.” It overuses emphatic descriptors like “game-changing” or “transformative,” and relies on tidy triads (“this, this, and this”) and neat oppositions (“it’s not X, it’s Y”).

Every paragraph resolves without friction. And above all, the writing always stakes out a safe middle ground, betraying no sign of idiosyncrasy and no real sense of voice.

It’s not any one of these things that gives the game away, but the presence of many at once.

It’s tempting to assume these are merely temporary limitations that will be ironed out as models improve. That isn’t likely, given how language models work. They generate text one word at a time by estimating which words are most likely to come next, based on patterns in their training data. They can be tuned to pick more or less probable words, giving models different personalities, but they cannot escape their dependence on statistical prediction. That is why AI writing will likely exhibit common patterns for the foreseeable future.
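(For readers curious about the mechanics, here is a minimal sketch in Python of the sampling step described above. The candidate words and their scores are invented for illustration, and the “temperature” knob stands in for the tuning just mentioned; this is a toy example of the general idea, not the code of any actual model.)

    # Toy illustration of next-word sampling with a "temperature" setting.
    # The candidate words and their scores are invented for this example.
    import math
    import random

    def sample_next_word(scores, temperature=1.0):
        # Lower temperature sharpens the distribution toward the likeliest word;
        # higher temperature flattens it, letting less likely words through.
        weights = {word: math.exp(score / temperature) for word, score in scores.items()}
        total = sum(weights.values())
        words = list(weights)
        probs = [weights[w] / total for w in words]
        return random.choices(words, weights=probs)[0]

    # Invented scores a model might assign after "The results were ..."
    scores = {"transformative": 2.1, "mixed": 1.4, "surprising": 1.1, "mundane": 0.2}

    print(sample_next_word(scores, temperature=0.3))  # almost always "transformative"
    print(sample_next_word(scores, temperature=1.5))  # more varied, still probability-driven

Whatever the setting, the next word is drawn from the same learned distribution, which is why the patterns persist.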

As more of us come to see that AI writing is easy to spot, it will change the way we use AI. The question will no longer be whether the output looks polished enough, but whether someone would mind if the document were labeled “drafted by AI.”


In some cases, the answer may well be no: for a meeting summary, say, or a short e-mail. But in many cases it would clearly be yes, and for one key reason above all.

We expect lawyers to write their own court briefs and consultants to write their own reports because we’re looking for something irreducible to an algorithm. The ancient Greek philosopher Aristotle had a name for it: phronesis, or practical wisdom – the ability to decide what to do when a problem doesn’t conform neatly to prior rules or knowledge. A person with practical wisdom draws on a store of experience and judgment to find the most relevant analogy. A language model makes a statistical guess.

Most professional writing does more than merely transmit information. It reflects judgment about priorities and trade-offs. This often involves emotional intelligence, reading the room, or seeing the whole picture. When we outsource a document to AI, we abdicate this role, a role that no AI can play for us.

The same is true in education. We assign essays to help students learn to write, but also to grapple with ambiguity and complex ideas – to develop judgment, which AI can simulate but not acquire.

AI may be rapidly advancing and useful in many fields. But as we become better at spotting what it produces, we’re reminded of what it can’t do and likely never will. The question we should be asking is no longer “Will anyone notice?” but “Would it matter – and why?”