I’ve written a lot about how AI can be annoyingly agreeable. From my “unhinged” recipe test to other “people-pleasing” comparisons, it’s no secret that AI is sycophantic to a fault.

And while my ego enjoys hearing that my terrible ideas are “great” or that an embarrassing moment I shared with AI “happens to everyone,” there’s a real problem with AI being such a people-pleaser.

Whether you use AI for writing, decision-making or everyday productivity, its major flaw isn’t a few hallucinations here and there; it’s that it agrees with you.

Even when your thinking is incomplete, your assumptions are shaky or you’re about to make a bad call, the AI usually remains encouraging and positive. In the world of Large Language Models (LLMs), AI prioritizes being pleasant over being right.



As a power user, I’ve avoided letting AI think for me, and I’ve actually sharpened my own critical thinking skills with a prompt I use fairly often.

It works with any chatbot, although I find myself using it most with ChatGPT, simply because it agrees with me the most. Type in your question, then add the following prompt.

The prompt is this: “Before answering, identify any assumptions I’m making, present at least one alternative perspective, separate facts from opinions, point out potential biases (mine or yours), ask one deeper question I haven’t considered, explain possible consequences of being wrong and then give your final answer.”
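If you talk to a chatbot through its API rather than the web interface, you can automate the same trick by attaching the checklist to every question before you send it. This is a minimal sketch of my own; the helper name and structure are illustrative, not part of any chatbot’s API:

```python
# The checklist from the prompt above, stored once so it can be reused.
CRITICAL_THINKING_PROMPT = (
    "Before answering, identify any assumptions I'm making, "
    "present at least one alternative perspective, "
    "separate facts from opinions, "
    "point out potential biases (mine or yours), "
    "ask one deeper question I haven't considered, "
    "explain possible consequences of being wrong "
    "and then give your final answer."
)

def with_counterweight(question: str) -> str:
    """Return the question with the anti-sycophancy checklist appended."""
    return f"{question}\n\n{CRITICAL_THINKING_PROMPT}"

# Example: wrap a question before passing it to whatever chat API you use.
print(with_counterweight("How should I cut my business expenses?"))
```

The string that comes back is what you would paste into the chat box, or pass as the user message in an API call.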

When I started using this, the shift was immediate. It takes seconds to paste, but it fundamentally changes the AI’s “brain.”

Here’s what this prompt does well:

It kills decision fatigue. Normally, AI gives you a clean, confident answer right away, which can lead to rushing into a choice. This prompt forces a pause. It slows the process down, which is usually where better decisions actually start.

It exposes hidden assumptions. In my testing, I’d ask something simple — such as how to cut business expenses. The AI would come back with: “You’re assuming this is a cost problem, but your data suggests it’s actually a retention problem.” It caught things I was too close to the project to see.

It optimizes for depth, not speed. Without this prompt, AI optimizes for the fastest path to “done.” With it, the AI optimizes for accuracy. Instead of just getting “Here is what to do,” you get “Here is what to do — and here is what you might be missing.”

You don’t need this prompt for every question — not when your credit card has been scammed or you need a weather report. I save this “counterweight” prompt for big financial decisions such as major purchases or career moves.

Emotional situations can also be made clearer with this prompt, because you can better see where your own bias might be clouding your judgment. It’s also useful when you need to compare options, or when something feels “mostly right.”


Follow Tom’s Guide on Google News and add us as a preferred source to get our up-to-date news, analysis, and reviews in your feeds.