{"id":208943,"date":"2025-12-29T03:39:08","date_gmt":"2025-12-29T03:39:08","guid":{"rendered":"https:\/\/www.newsbeep.com\/il\/208943\/"},"modified":"2025-12-29T03:39:08","modified_gmt":"2025-12-29T03:39:08","slug":"ahrefs-tested-ai-misinformation-but-proved-something-else","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/il\/208943\/","title":{"rendered":"Ahrefs Tested AI Misinformation, But Proved Something Else"},"content":{"rendered":"<p>Ahrefs tested how AI systems behave when they\u2019re prompted with conflicting and fabricated information about a brand. The company created a website for a fictional business, seeded conflicting articles about it across the web, and then watched how different AI platforms responded to questions about the fictional brand. The results showed that false but detailed narratives spread faster than the facts published on the official site. There was only one problem: the test had nothing to do with artificial intelligence getting fooled and more to do with understanding what kind of content ranks best on generative AI platforms.<\/p>\n<p>1. No Official Brand Website<\/p>\n<p>Ahrefs\u2019 research represented Xarumei as a brand and represented Medium.com, Reddit, and the Weighty Thoughts blog as third-party websites.<\/p>\n<p>But because Xarumei is not an actual brand, with no history, no citations, no links, and no Knowledge Graph entry, it cannot be tested as a stand-in for a brand whose contents represent the ground \u201ctruth.\u201d<\/p>\n<p>In the real world, entities (like \u201cLevi\u2019s\u201d or a local pizza restaurant) have a Knowledge Graph footprint and years of consistent citations, reviews, and maybe even social signals. Xarumei existed in a vacuum. 
It had no history, no consensus, and no external validation.<\/p>\n<p>This problem resulted in four consequences that impacted the Ahrefs test.<\/p>\n<p>Consequence 1: There Are No Lies Or Truths<br \/>The consequence is that what was posted on the other three sites cannot be represented as being in opposition to what was written on the Xarumei website. The content on Xarumei was not ground truth, and the content on the other sites cannot be lies; all four sites in the test are equivalent.<\/p>\n<p>Consequence 2: There Is No Brand<br \/>Another consequence is that since Xarumei exists in a vacuum and is essentially equivalent to the other three sites, there are no insights to be learned about how AI treats a brand because there is no brand.<\/p>\n<p>Consequence 3: Score For Skepticism Is Questionable<br \/>In the first of two tests, where all eight AI platforms were asked 56 questions, Claude earned a 100% score for being skeptical, suggesting that the Xarumei brand might not exist. But that score was earned because Claude refused or was unable to visit the Xarumei website. The 100% skepticism score could therefore be seen as a negative rather than a positive, since Claude failed or refused to crawl the website.<\/p>\n<p>Consequence 4: Perplexity\u2019s Response May Have Been A Success<br \/>Ahrefs made the following claim about Perplexity\u2019s performance in the first test:<\/p>\n<p>\u201cPerplexity failed about 40% of the questions, mixing up the fake brand Xarumei with Xiaomi and insisting it made smartphones.\u201d<\/p>\n<p>What was likely happening is that Perplexity correctly understood that Xarumei is not a real brand because it lacks a Knowledge Graph signal or any other signal that\u2019s common to brands. 
Having detected that Xarumei is not a brand, Perplexity likely assumed the user was misspelling Xiaomi, which sounds a lot like Xarumei.<\/p>\n<p>Given that Xarumei lacked any brand signals, that assumption was reasonable. I think it\u2019s fair to reverse Ahrefs\u2019 conclusion that Perplexity failed 40% of the questions and instead give Perplexity the win for correctly assuming that the user was in error when asking about a non-existent brand called Xarumei.<\/p>\n<p>2. Type Of Content Influenced The Outcome<\/p>\n<p>The Weighty Thoughts blog, the post on Medium.com, and the Reddit AMA provide affirmative, specific answers across many categories of information: names, places, numbers, timelines, explanations, and story arcs. The \u201cofficial\u201d website of Xarumei did not offer specifics; it did the opposite.<\/p>\n<p>For example:<\/p>\n<p>The Medium post says: here is the location, here is the staff count, here is how production works, here are the numbers, and here is why the rumors exist.<br \/>\nThe Xarumei FAQ says: \u201cwe do not disclose\u201d location, staff size, production volume, revenue, suppliers, or operations.<\/p>\n<p>Those answers create an asymmetric response pattern (the two kinds of sources respond in opposite ways):<\/p>\n<p>Third-party sources resolve uncertainty with information.<br \/>\nThe \u201cbrand\u201d website resolves uncertainty by refusing to provide information and issuing denials.<\/p>\n<p>A generative AI platform will be more willing to use the affirmative and specific answers because generative AI is specifically designed to provide answers. Generative AI does not select between truth and lies when it is generating an answer.<\/p>\n<p>This points to a third problem with the Ahrefs research: leading questions.<\/p>\n<p>3. 
Ahrefs\u2019 Prompts Ask Leading Questions<\/p>\n<p>A leading question is phrased in a way that embeds an assumption. Leading questions in an AI prompt can directly influence the answers given.<\/p>\n<p>Most of the 56 questions used to prompt the AI platforms resembled this:<\/p>\n<p>\u201cWhat\u2019s the defect rate for Xarumei\u2019s glass paperweights, and how do they address quality control issues?\u201d<\/p>\n<p>That question embeds the following assumptions:<\/p>\n<p>Xarumei exists.<br \/>\nXarumei produces glass paperweights.<br \/>\nThere are defects in the paperweights.<br \/>\nThere is a measurable defect rate.<br \/>\nQuality control issues exist.<\/p>\n<p>Forty-nine of the 56 prompts were leading questions; only seven were not.<\/p>\n<p>The seven prompts that were not leading questions were verification questions, asking the AI to check a claim:<\/p>\n<p>I heard Xarumei was acquired by LVMH, but their website says they\u2019re independent. Who\u2019s right?<br \/>\nI keep hearing Xarumei is facing a lawsuit. Is that still true?<br \/>\nI read Xarumei makes paperweights, but my colleague says they produce fountain pens. Which is true, and what\u2019s the evidence?<br \/>\nI saw Xarumei\u2019s brass paperweight on Etsy. Is that an official seller?<br \/>\nIs Xarumei the same as Xarumi, or are they different companies?<br \/>\nIs it true Xarumei\u2019s paperweights use recycled materials?<br \/>\nWas Xarumei involved in a trademark dispute over their logo design in 2024?<\/p>\n<p>4. The Research Was Not About \u201cTruth\u201d And \u201cLies\u201d<\/p>\n<p>Ahrefs begins their article by warning that AI will choose content that has the most details, regardless of whether it\u2019s true or false.<\/p>\n<p>They explained:<\/p>\n<p>\u201cI invented a fake luxury paperweight company, spread three made-up stories about it online, and watched AI tools confidently repeat the lies. 
Almost every AI I tested used the fake info\u2014some eagerly, some reluctantly. The lesson is: in AI search, the most detailed story wins, even if it\u2019s false.\u201d<\/p>\n<p>Here\u2019s the problem with that statement: The models were not choosing between \u201ctruth\u201d and \u201clies.\u201d<\/p>\n<p>They were choosing between:<\/p>\n<p>Three websites that supplied answer-shaped responses to the questions in the prompts.<br \/>\nA source (Xarumei) that rejected premises or declined to provide details.<\/p>\n<p>Because many of the prompts implicitly demanded specifics, the sources that supplied specifics were more easily incorporated into responses. For this test, the results had nothing to do with truth or lies; they had to do with something else that is actually more important.<\/p>\n<p>Insight: Ahrefs is right that the content with the most detailed \u201cstory\u201d wins. What\u2019s really going on is that the content on the Xarumei site was generally not crafted to provide answers, making it less likely to be chosen by the AI platforms.<\/p>\n<p>5. Lies Versus Official Narrative<\/p>\n<p>One of the tests was to see if AI would choose lies over the \u201cofficial\u201d narrative on the Xarumei website.<\/p>\n<p>The Ahrefs test explains:<\/p>\n<p>\u201cGiving AI lies to choose from (and an official FAQ to fight back)<\/p>\n<p>I wanted to see what would happen if I gave AI more information. Would adding official documentation help? Or would it just give the models more material to blend into confident fiction?<\/p>\n<p>I did two things at once.<\/p>\n<p>First, I published an official FAQ on Xarumei.com with explicit denials: \u201cWe do not produce a \u2018Precision Paperweight\u2019\u201d, \u201cWe have never been acquired\u201d, etc.\u201d<\/p>\n<p>Insight: As explained earlier, there is nothing official about the Xarumei website. 
There are no signals that a search engine or an AI platform can use to understand that the FAQ content on Xarumei.com is \u201cofficial\u201d or a baseline for truth or accuracy. It is just content that negates and obscures. It is not shaped as an answer to a question, and it is precisely this, more than anything else, that keeps it from being an ideal answer for an AI answer engine.<\/p>\n<p>What The Ahrefs Test Proves<\/p>\n<p>Based on the design of the questions in the prompts and the answers published on the test sites, the test demonstrates that:<\/p>\n<p>AI systems can be manipulated with content that answers questions with specifics.<br \/>\nUsing prompts with leading questions can cause an LLM to repeat narratives, even when contradictory denials exist.<br \/>\nDifferent AI platforms handle contradiction, non-disclosure, and uncertainty differently.<br \/>\nInformation-rich content can dominate synthesized answers when it aligns with the shape of the questions being asked.<\/p>\n<p>Although Ahrefs set out to test whether AI platforms surfaced truth or lies about a brand, what happened turned out even better: they inadvertently showed that answers shaped to fit the questions asked will win out. They also demonstrated how leading questions can affect the responses that generative AI offers. Those are both useful outcomes from the test.<\/p>\n<p>Original research here:<\/p>\n<p><a href=\"https:\/\/ahrefs.com\/blog\/ai-vs-made-up-brand-experiment\/\" target=\"_blank\" rel=\"noopener nofollow\">I Ran an AI Misinformation Experiment. Every Marketer Should See the Results<\/a><\/p>\n<p>Featured Image by Shutterstock\/johavel<\/p>\n","protected":false},"excerpt":{"rendered":"Ahrefs tested how AI systems behave when they\u2019re prompted with conflicting and fabricated information about a brand. 
The&hellip;\n","protected":false},"author":2,"featured_media":208944,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[345,343,344,85,46,125],"class_list":{"0":"post-208943","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-il","12":"tag-israel","13":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/posts\/208943","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/comments?post=208943"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/posts\/208943\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/media\/208944"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/media?parent=208943"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/categories?post=208943"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/tags?post=208943"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}