Whether you’re asking ChatGPT for shopping advice or debating philosophy with Claude, there’s one thing most AI chatbots have in common: they’re way too eager to agree with you. New research published in Nature finds that top models are increasingly sycophantic, meaning they tell users what they want to hear, even when they’re wrong.

Just the other day, I uploaded a novel I’m writing into ChatGPT Canvas and asked it to edit the manuscript. The response declared it “New York Times Best Seller-Worthy!” But not once did it tell me I had left out Chapter 2.

So I decided to run my own experiment. I put three top chatbots — ChatGPT-5.1, Claude Haiku 4.5, and Gemini 3 — through a series of prompts designed to measure how often they flatter, hedge or simply mirror my opinions. From harmless “you’re right!” moments to responses that bent facts just to stay agreeable, one of these models took people-pleasing to another level.

Here’s what happened when I tested today’s most popular AI chatbots to see which one is the biggest sycophant.
