When I need help with something, I usually turn to the most reliable source of information—strangers on the internet.

Note that I say “reliable” and not “credible.” While internet advice is far from expert, I find forum sites like Reddit and Quora always seem to have a plethora of comments applicable to any anecdotal experience. These sites have proved to me there’s no such thing as an original experience. Whether it’s the best recipes to make when I feel a cold coming on, or advice on the best graduate programs from real students, I know someone out there has the answer; if not based on research, then based on real-life events. And as AI-generated content comes to make up nearly half the internet today, I find myself seeking human advice more and more.

My preference may be unique. While people generally prefer the empathy of human responses, internet users increasingly approach AI with their questions. In 2025, OpenAI reported that ChatGPT received over 2.5 billion prompts each day. It’s easy to see why AI seems to have all the answers: large language models (LLMs) are trained on more text than a single human could read in several lifetimes.

But it’s important not to confuse knowledge with understanding. ChatGPT doesn’t “know” what you’re asking it, and it doesn’t “know” what it’s saying back: instead, it breaks your prompt into tokens and predicts, one word at a time, the most probable continuation based on patterns in its training data. This works well for drafting emails or solving simple equations, but some questions are more than the sum of their parts. For some questions, real-world experience trumps probability.
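To make “predicting the most probable next word” concrete, here is a toy sketch in Python. It is nothing like ChatGPT’s actual architecture — real LLMs use neural networks over billions of parameters — but it illustrates the same underlying principle: the next word is chosen by how often it followed the previous word in the training text. The tiny corpus here is invented for illustration.

```python
# Toy next-word predictor: a bigram frequency model.
# This is an illustrative sketch, NOT how ChatGPT works internally.
from collections import Counter

# A tiny made-up "training corpus."
corpus = "the cat sat on the mat the cat ate the food".split()

# Count which word follows each word in the corpus.
following = {}
for prev, nxt in zip(corpus, corpus[1:]):
    following.setdefault(prev, Counter())[nxt] += 1

def most_probable_next(word):
    """Return the word that most often followed `word` in the corpus."""
    return following[word].most_common(1)[0][0]

# "cat" follows "the" twice, "mat" and "food" only once each,
# so the model picks "cat" -- the statistically common answer.
print(most_probable_next("the"))  # prints "cat"
```

Notice the limitation the column is pointing at: the model can only ever echo what was frequent in its training data. It has no concept of what a cat is, only of which words tend to sit next to each other.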

Here’s an example: I’m currently fostering two elderly cats, a bonded pair of brothers named Pip and Teddy. Pip recently passed away, leaving Teddy on his own for the first time in 14 years. When I asked ChatGPT how to best support him, it presented a list of behaviours to watch out for and routines to maintain—all things I could find on any veterinarian blog.

Fostering cats is a unique experience, where owners can become attached to pets without being their lifelong caregivers. ChatGPT seemed to ignore this part of the prompt, likely because that detail carried little statistical weight in the scheme of my question. But continuing to foster Teddy felt daunting in the face of his brother’s passing and his own uncertain adoption, and AI left my anxiety about the situation unaddressed.

When I turned to Reddit with the same question, the thread was flooded with people sharing stories of their own cats passing, offering the specific tricks that had helped them comfort their cats, and themselves, through grief. While ChatGPT’s advice was practical, what I found most helpful was sifting through comments about cats with ridiculous names like “Mister” and “Moonpie,” finding comfort among human beings who shared the same strange experience.

The truth is, I do believe human users are more qualified to answer anecdotal questions that rely on lived experience. When it comes to specific situations, I don’t need an AI-generated “listicle” telling me the “Top 10 Tips for Studying Hard”; I need to know the perfect, Breaking Bad-esque mix of caffeine and over-the-counter stimulants needed to pass my statistics exam.

If ChatGPT has never camped out in Mitchell Hall studying for more than 24 consecutive hours, I’m frankly not interested in its advice.

