{"id":363257,"date":"2026-03-28T20:45:09","date_gmt":"2026-03-28T20:45:09","guid":{"rendered":"https:\/\/www.newsbeep.com\/il\/363257\/"},"modified":"2026-03-28T20:45:09","modified_gmt":"2026-03-28T20:45:09","slug":"im-a-copywriter-the-internet-is-about-to-get-a-lot-worse","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/il\/363257\/","title":{"rendered":"I\u2019m a copywriter. The internet is about to get a lot worse."},"content":{"rendered":"<p class=\"slate-paragraph slate-graf\" data-word-count=\"21\" data-uri=\"slate.com\/_components\/slate-paragraph\/instances\/cmn66rlgu000w3b7bmpwad2vr@published\"><a href=\"https:\/\/slate.com\/theslatest?utm_source=slate&amp;utm_medium=article&amp;utm_campaign=article_plain_text_topper&amp;sailthru_source=Article-TopperText-CTA\" rel=\"nofollow noopener\" target=\"_blank\">Sign up for the Slatest<\/a> to get the most insightful analysis, criticism, and advice out there, delivered to your inbox daily.<\/p>\n<p class=\"slate-paragraph slate-graf\" data-word-count=\"93\" data-uri=\"slate.com\/_components\/slate-paragraph\/instances\/cmn66gz7j004qszktwaescqej@published\">If you\u2019ve ever used the internet to plan a trip, chances are you\u2019ve taken advice on what to see and do from someone who has never been to your destination. In fact, your guide probably has had no direct knowledge of\u2014or even personal interest in\u2014sunbathing on the Gulf Coast, rock climbing in Moab, or marveling at the architecture of Milan. 
And yet, on\u00a0travel websites across the internet, writers provide jet-setters with terrifically specific guidance: what time of day to head out, what kind of shoes to wear, and where to score a deal.<\/p>\n<p class=\"slate-paragraph slate-graf\" data-word-count=\"60\" data-uri=\"slate.com\/_components\/slate-paragraph\/instances\/cmn66tcf8001j3b7b66a06syo@published\">In the past, you might have purchased a travel book written by someone who actually went to a place (or who, at the very least, did old-school reporting on it, making phone calls to gather and verify information from people who had been there). Today the recommendations you find via Google are made by people who, well, also used Google.<\/p>\n<p class=\"slate-paragraph slate-graf\" data-word-count=\"134\" data-uri=\"slate.com\/_components\/slate-paragraph\/instances\/cmn66tcf8001k3b7bknhtbvuo@published\">This is a problem facing not just travel advice. It infects everything recommendation-related. Every day, writers are paid a pittance by marketing companies, big brands, and a swarm of content mills attempting to capture a place in our search results and hoover up our attention with very specific advice. I am one of those writers, churning out that work: During my decade as a word monkey, I\u2019ve recommended drinks and dishes from bars and restaurants I\u2019ve never been to and waxed lyrical about hunting equipment despite having shot precisely one gun in my life. I\u2019ve even written product descriptions for items that aren\u2019t available in my country. (There are about half a dozen compression-sleeve brands that apparently ship only to the U.S., not my native England, much to the disappointment of my dodgy knee.)<\/p>\n<p class=\"slate-paragraph slate-graf\" data-word-count=\"84\" data-uri=\"slate.com\/_components\/slate-paragraph\/instances\/cmn66tcf9001l3b7bhiyshj78@published\">The information included in these articles is pulled from a number of sources. 
Sometimes they\u2019re more official, like brand webpages. But often, they\u2019re sources like Tripadvisor, Amazon reviews, or even random posts on niche subreddits. And not every writer will be like me, making good use of the research skills from my history degree and taking care to include only information corroborated by multiple reputable sources. When deadlines and bills are circling you, the temptation to cut corners is\u00a0extremely powerful.<\/p>\n<p class=\"slate-paragraph slate-graf\" data-word-count=\"123\" data-uri=\"slate.com\/_components\/slate-paragraph\/instances\/cmn66tcf9001m3b7bbw2brf9o@published\">Even though I research extensively and pride myself on accuracy, without direct experience things go wrong. In the past, I\u2019ve accidentally given incorrect public transit information when writing about how to get to a museum, or reported the wrong number of poles in the product description for a tent. Small mistakes, but ones that don\u2019t happen when you take a journey yourself or hold an item in your hands. Such errors can be corrected, and they aren\u2019t always consequential. But they can be: Imagine someone with impaired mobility expecting a ramp at a museum and showing up to find steps\u2014having their meticulously planned day out ruined, all because someone had to hit a deadline and assumed that a beloved tourist attraction was accessible.<\/p>\n<p class=\"slate-paragraph slate-graf\" data-word-count=\"83\" data-uri=\"slate.com\/_components\/slate-paragraph\/instances\/cmn66tcf9001n3b7bdixoo8g3@published\">Through methods like search engine optimization and other nifty page-ranking subterfuge, this unverified content climbs to the top of search results and people\u2019s consciousness. Yes, there\u2019s really good travel\u2014and product, and drink\u2014advice out there, based on real experiences. 
But better-researched pieces by actual experts might not have the benefit of being buoyed by SEO tricks, as the people producing that content won\u2019t know the importance of internal linking, keyword repetition, and other factors that can help a page shoot up in search results.<\/p>\n<p class=\"slate-paragraph slate-graf\" data-word-count=\"34\" data-uri=\"slate.com\/_components\/slate-paragraph\/instances\/cmn66tcfa001o3b7brk0wi1j9@published\">With the rise of large language models, the problem of not-quite-right advice will only get worse. The quickly written, often shoddily verified content is going to become what the LLMs take as the truth.<\/p>\n<p class=\"slate-paragraph slate-graf\" data-word-count=\"145\" data-uri=\"slate.com\/_components\/slate-paragraph\/instances\/cmn66tcfa001p3b7b66iz4kc8@published\">LLMs don\u2019t search for information like we would. Instead, they produce responses via <a href=\"https:\/\/learn.microsoft.com\/en-us\/dotnet\/ai\/conceptual\/understanding-tokens\" rel=\"nofollow noopener\" target=\"_blank\">token prediction<\/a>, effectively a more complex version of predictive text. (Tokens are numerical values given to words, parts of words, and sometimes even letters, thus allowing the computer to \u201cread\u201d them.) 
But these predictions are based on data fed to machines, and information that is consistent and considered \u201chigher quality\u201d can be <a href=\"https:\/\/www.ibm.com\/think\/topics\/llm-parameters\" rel=\"nofollow noopener\" target=\"_blank\">given more weight<\/a> in the model\u2019s internal logic during its training. An LLM doesn\u2019t know whether what it is saying is right. It is designed not to provide the truth\u2014simply to <a href=\"https:\/\/www.nytimes.com\/2025\/06\/13\/technology\/chatgpt-ai-chatbots-conspiracies.html\" rel=\"nofollow noopener\" target=\"_blank\">provide answers<\/a>. You can see this clearly when models lead their users into \u201c<a href=\"https:\/\/www.404media.co\/ai-psychosis-help-gemini-chatgpt-claude-chatbot-delusions\/\" rel=\"nofollow noopener\" target=\"_blank\">A.I. psychosis<\/a>.\u201d The LLM doesn\u2019t care where it\u2019s taking you. It simply\u00a0chooses the most plausible word to follow the previous one, based on preset parameters and the vast quantities of information it has been trained on.<\/p>\n<p class=\"slate-paragraph slate-graf\" data-word-count=\"169\" data-uri=\"slate.com\/_components\/slate-paragraph\/instances\/cmn66tcfa001q3b7b06op5w7y@published\">Although many LLM engineers can tinker with source weighting, <a href=\"https:\/\/www.telegraph.co.uk\/business\/2025\/11\/21\/x-grok-chatbot-musk-better-jesus\/\" rel=\"nofollow noopener\" target=\"_blank\">like those at X do to Grok<\/a> every time it veers too close to the actual truth rather than whatever Elon Musk thinks, the people who run popular large language models like ChatGPT and Google Gemini say they prioritize training their models via sources that are <a href=\"https:\/\/najumi.fr\/en\/article\/how-ai-answer-engines-choose-cited-websites\/\" rel=\"nofollow noopener\" target=\"_blank\">generally seen as more authoritative<\/a>. 
However, that doesn\u2019t mean that those sources will always provide the truth or that the chatbot will always repeat it. It means that chatbots will try to collect information from sources that tick the correct boxes. Those sources can be wrong, and facts can be lost or warped in the game of telephone. What\u2019s more, marketing professionals are already studying how LLMs rank sources to ensure that their <a href=\"https:\/\/linkbuilder.io\/google-ai-overviews\/\" rel=\"nofollow noopener\" target=\"_blank\">content is picked up<\/a> in A.I. overviews. That is, an incorrect fact in hastily produced copy\u2014intended, at the end of the day, to capture as many eyeballs as possible rather than to inform\u2014can all too easily be repeated by an LLM.<\/p>\n<p class=\"slate-paragraph slate-graf\" data-word-count=\"110\" data-uri=\"slate.com\/_components\/slate-paragraph\/instances\/cmn66tcfa001r3b7bveorc5f2@published\">The stakes aren\u2019t very high when a model believes that a hotel is 50 feet from the beach when it\u2019s really 500, or that the stain remover some copywriter was paid hardly anything to \u201creview\u201d doesn\u2019t actually work on colors. But the number of people using generative A.I. for things like mental health support and nutrition advice makes these discrepancies troubling. Leaders in the A.I. 
space, like <a href=\"https:\/\/developer.nvidia.com\/blog\/prevent-llm-hallucinations-with-the-cleanlab-trustworthy-language-model-in-nvidia-nemo-guardrails\/\" rel=\"nofollow noopener\" target=\"_blank\">Nvidia<\/a> and <a href=\"https:\/\/developers.openai.com\/cookbook\/examples\/how_to_use_guardrails\/\" rel=\"nofollow noopener\" target=\"_blank\">OpenAI<\/a>, claim that there exist robust safeguards against this crystallization of falsity into fact, but OpenAI researchers have already admitted that \u201c<a href=\"https:\/\/www.computerworld.com\/article\/4059383\/openai-admits-ai-hallucinations-are-mathematically-inevitable-not-just-engineering-flaws.html\" rel=\"nofollow noopener\" target=\"_blank\">hallucinations are mathematically inevitable<\/a>,\u201d and industry experts note that there are some real issues with <a href=\"https:\/\/www.linkedin.com\/pulse\/investigation-guardrails-gpt-lies-groks-dark-wisdom-brian-draper-szg3c\/\" rel=\"nofollow noopener\" target=\"_blank\">homogenous errors across multiple models<\/a>.<\/p>\n<p class=\"slate-paragraph slate-graf\" data-word-count=\"194\" data-uri=\"slate.com\/_components\/slate-paragraph\/instances\/cmn66tcfa001s3b7bg00casbb@published\">Consider the following hypothetical: a natural health brand looking to sell its supplements to a broader audience. It might hire a writer to, in a piece on its site, extol the virtues of zinc and magnesium, focusing on the alleged immunity-boosting properties of taking supplements with a particular blend of the two (which the company, of course, sells). 
This writer, keen to do a good job, then reads some studies that appear to support the idea but, due to a shaky grasp of the science or the statistics, makes an erroneous claim. (One of the most spurious phrases in modern advertising is \u201cstudies show.\u201d) The writer, thanks to their ability to improve page rankings via keywords and section headings, will have created an article that looks like information but is really a thinly disguised advertisement. It floats to the top of Google \u2026 and is copied again and again by others selling vitamins. This claim will then be included in top-line A.I. responses about the benefits of magnesium and zinc supplements, as the LLM considers it the most \u201cprobable\u201d answer to, say, common questions about staying healthy during cold and flu season.<\/p>\n<p class=\"slate-paragraph slate-graf\" data-word-count=\"148\" data-uri=\"slate.com\/_components\/slate-paragraph\/instances\/cmn66tcfb001t3b7bf41u1eop@published\">The tips and tricks I use to avoid being taken in by sloppy A.I.-generated content are the same ones that have always existed for combating disinformation, and were honed mainly during my humanities degree. I double-check facts and figures and ensure they\u2019re from reputable sources, ideally with multiple additional sources backing them up. (Often, articles on a topic will cite the same incorrect source\u2014so be careful!) Polarized viewpoints often rise to the top: If I read something that either makes my blood boil or completely aligns with my own perspective, I make sure to check the source. When it comes to your health, <a href=\"https:\/\/slate.com\/technology\/2026\/01\/health-chatbot-medicine-ai-doctor-google.html\" rel=\"nofollow noopener\" target=\"_blank\">experts stress the importance of having a \u201chuman in the loop\u201d<\/a>\u2014that is, checking with your doctor before taking advice from a machine. And on your next vacation? 
Well, if you use ChatGPT to plan it, maybe just bake in extra time in case things go awry.<\/p>\n","protected":false},"excerpt":{"rendered":"Sign up for the Slatest to get the most insightful analysis, criticism, and advice out there, delivered to&hellip;\n","protected":false},"author":2,"featured_media":363258,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[345,343,344,501,85,46,125,244],"class_list":{"0":"post-363257","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-family","12":"tag-il","13":"tag-israel","14":"tag-technology","15":"tag-travel"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/posts\/363257","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/comments?post=363257"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/posts\/363257\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/media\/363258"}],"wp:attachment":[{"href":"https:\/\/www.newsbee
p.com\/il\/wp-json\/wp\/v2\/media?parent=363257"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/categories?post=363257"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/il\/wp-json\/wp\/v2\/tags?post=363257"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}