This next round of AI Madness brings together two top contenders for the smartest, fastest and most useful AI assistants. ChatGPT beat out Perplexity in the first round and Google Gemini beat Alexa+. Now the two go head-to-head with seven prompts designed to reflect how people actually use AI day to day.

These prompts reflect the kinds of things people actually ask: from math and debugging code to making a tough decision or just getting through the day a little easier. Some tests were about accuracy. Others focused on reasoning, creativity or how well each model handled uncertainty. And in a few cases, I intentionally set traps to see which one would hallucinate.

Both models are getting very good — but they’re getting good in different ways. Here are the results of this exciting round.

OpenAI's model consistently won on clarity, structure and speed. From fixing code and solving math problems to helping make a decision, it showed up as the more reliable everyday tool.

Google Gemini stood out for its ability to unpack complexity and add depth and context, qualities that are especially valuable for research, writing and ambiguous questions.

Each model stood out in different ways, and both performed strongly. No AI assistant does everything perfectly, but knowing which tool does the better job for a given task can boost your workflow. The people who recognize that early will get the most out of each model.

With a close but solid win, ChatGPT moves on to the next round.
