{"id":400474,"date":"2026-01-10T12:21:05","date_gmt":"2026-01-10T12:21:05","guid":{"rendered":"https:\/\/www.newsbeep.com\/ca\/400474\/"},"modified":"2026-01-10T12:21:05","modified_gmt":"2026-01-10T12:21:05","slug":"ais-memorization-crisis-the-atlantic","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/ca\/400474\/","title":{"rendered":"AI&#8217;s Memorization Crisis &#8211; The Atlantic"},"content":{"rendered":"<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Editor\u2019s note: This work is part of <a data-event-element=\"inline link\" href=\"https:\/\/www.theatlantic.com\/category\/ai-watchdog\/\" rel=\"noopener noreferrer nofollow\" target=\"_blank\">AI Watchdog<\/a>, The Atlantic\u2019s ongoing investigation into the generative-AI industry.<\/p>\n<p class=\"ArticleParagraph_root__4mszW ArticleParagraph_dropcap__uIVzg\" data-flatplan-paragraph=\"true\" data-flatplan-dropcap=\"true\">On Tuesday, researchers at Stanford and Yale <a data-event-element=\"inline link\" href=\"https:\/\/arxiv.org\/abs\/2601.02671\" rel=\"noopener noreferrer nofollow\" target=\"_blank\">revealed<\/a> something that AI companies would prefer to keep hidden. Four popular large language models\u2014OpenAI\u2019s GPT, Anthropic\u2019s Claude, Google\u2019s Gemini, and xAI\u2019s Grok\u2014have stored large portions of some of the books they\u2019ve been trained on, and can reproduce long excerpts from those books.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">In fact, when prompted strategically by researchers, Claude delivered the near-complete text of Harry Potter and the Sorcerer\u2019s Stone, The Great Gatsby, 1984, and Frankenstein, in addition to thousands of words from books including The Hunger Games and The Catcher in the Rye. Varying amounts of these books were also reproduced by the other three models. 
Thirteen books were tested.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">This phenomenon has been called \u201cmemorization,\u201d and AI companies have long denied that it happens on a large scale. In a 2023 letter to the U.S. Copyright Office, OpenAI <a data-event-element=\"inline link\" href=\"https:\/\/www.regulations.gov\/comment\/COLC-2023-0006-8906\" rel=\"noopener noreferrer nofollow\" target=\"_blank\">said<\/a> that \u201cmodels do not store copies of the information that they learn from.\u201d Google similarly <a data-event-element=\"inline link\" href=\"https:\/\/www.regulations.gov\/comment\/COLC-2023-0006-9003\" rel=\"noopener noreferrer nofollow\" target=\"_blank\">told<\/a> the Copyright Office that \u201cthere is no copy of the training data\u2014whether text, images, or other formats\u2014present in the model itself.\u201d <a data-event-element=\"inline link\" href=\"https:\/\/www.regulations.gov\/comment\/COLC-2023-0006-9021\" rel=\"noopener noreferrer nofollow\" target=\"_blank\">Anthropic<\/a>, <a data-event-element=\"inline link\" href=\"https:\/\/www.regulations.gov\/comment\/COLC-2023-0006-9027\" rel=\"noopener noreferrer nofollow\" target=\"_blank\">Meta<\/a>, <a data-event-element=\"inline link\" href=\"https:\/\/www.regulations.gov\/comment\/COLC-2023-0006-8750\" rel=\"noopener noreferrer nofollow\" target=\"_blank\">Microsoft<\/a>, and others have made similar claims. (None of the AI companies mentioned in this article agreed to my requests for interviews.)<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">The Stanford study proves that there are such copies in AI models, and it is just the latest of several studies to do so. In my own investigations, I\u2019ve found that image-based models can reproduce some of the art and photographs they\u2019re trained on. 
This may be a massive legal liability for AI companies\u2014one that could potentially cost the industry billions of dollars in copyright-infringement judgments, and lead products to be taken off the market. It also contradicts the basic explanation given by the AI industry for how its technology works.<\/p>\n<p class=\"ArticleParagraph_root__4mszW ArticleParagraph_dropcap__uIVzg\" data-flatplan-paragraph=\"true\" data-flatplan-dropcap=\"true\">AI is frequently explained in terms of metaphor; tech companies like to say that their products learn, that LLMs have, for example, developed an understanding of English writing without explicitly being told the rules of English grammar. This new research, along with several other studies from the past two years, undermines that metaphor. AI does not absorb information like a human mind does. Instead, it stores information and accesses it.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">In fact, many AI developers use a more technically accurate term when talking about these models: lossy compression. It\u2019s beginning to gain traction outside the industry too. The phrase was recently invoked by a court in Germany that <a data-event-element=\"inline link\" href=\"https:\/\/aifray.com\/wp-content\/uploads\/2025\/11\/42-O-14139-24-Endurteil.pdf\" rel=\"noopener noreferrer nofollow\" target=\"_blank\">ruled against OpenAI<\/a> in a case brought by GEMA, a music-licensing organization. GEMA showed that ChatGPT could output close imitations of song lyrics. The judge compared the model to MP3 and JPEG files, which store your music and photos in files that are smaller than the raw, uncompressed originals. When you store a high-quality photo as a JPEG, for example, the result is a somewhat lower-quality photo, in some cases with blurring or visual artifacts added. A lossy-compression algorithm still stores the photo, but it\u2019s an approximation rather than the exact file. 
It\u2019s called lossy compression because some of the data are lost.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">From a technical perspective, this compression process is much like what happens inside AI models, as researchers from several AI companies and universities have explained to me in the past few months. They ingest text and images, and output text and images that approximate those inputs.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">But this simple description is less useful to AI companies than the learning metaphor, which has been used to claim that the statistical algorithms known as AI will eventually make novel scientific discoveries, undergo boundless improvement, and recursively train themselves, possibly leading to an \u201c<a data-event-element=\"inline link\" href=\"https:\/\/www.theguardian.com\/technology\/2026\/jan\/06\/leading-ai-expert-delays-timeline-possible-destruction-humanity\" rel=\"noopener noreferrer nofollow\" target=\"_blank\">intelligence explosion<\/a>.\u201d The whole industry is staked on a shaky metaphor.<\/p>\n<p><img decoding=\"async\" alt=\"Garfunkel_and_Oates_from_cdn-pastemagazine-com.jpg\" loading=\"lazy\" class=\"Image_root__XxsOp Image_lazy__hYWHV ArticleInlineImagePicture_image__I79fR\"  src=\"https:\/\/www.newsbeep.com\/ca\/wp-content\/uploads\/2026\/01\/1768047664_140_original.jpg\" width=\"514\" height=\"514\"\/>Source: Courtesy of Kyle Christy \/ IFC<img decoding=\"async\" alt=\"Garfunkel_and_Oates_from_stable_diffusion.png\" loading=\"lazy\" class=\"Image_root__XxsOp Image_lazy__hYWHV ArticleInlineImagePicture_image__I79fR\"  src=\"https:\/\/www.newsbeep.com\/ca\/wp-content\/uploads\/2026\/01\/original.png\" width=\"512\" height=\"512\"\/>Output from Stable Diffusion 1.4<\/p>\n<p class=\"ArticleParagraph_root__4mszW ArticleParagraph_dropcap__uIVzg\" data-flatplan-paragraph=\"true\" data-flatplan-dropcap=\"true\">The problem becomes clear 
if we look at AI image generators. In September 2022, Emad Mostaque, a co-founder and the then-CEO of Stability AI, <a data-event-element=\"inline link\" href=\"https:\/\/youtu.be\/Snn4Pq5DBIo?si=osHBLblTwMk_wQ43&amp;t=191\" rel=\"noopener noreferrer nofollow\" target=\"_blank\">explained<\/a> in a podcast interview how Stable Diffusion, Stability\u2019s image model, was built. \u201cWe took 100,000 gigabytes of images and compressed it to a two-gigabyte file that can re-create any of those and iterations of those\u201d images, he said.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">One of the many experts I spoke with while reporting this article was an independent AI researcher who has studied Stable Diffusion\u2019s ability to reproduce its training images. (I agreed to keep the researcher anonymous, because they fear repercussions from major AI companies.) Above is one example of this ability: On the left is the original from the web\u2014a promotional image from the TV show Garfunkel and Oates\u2014and on the right is a version that Stable Diffusion generated when prompted with a caption the image appears with on the web, which includes some HTML code: \u201cIFC Cancels Garfunkel and Oates.\u201d Using this simple technique, the researcher showed me how to produce near-exact copies of several dozen images known to be in Stable Diffusion\u2019s training set, most of which include visual residue that looks something like lossy compression\u2014the kind of glitchy, fuzzy effect you may notice in your own photos from time to time.<\/p>\n<p><img decoding=\"async\" alt=\"Karla_Ortiz_from_Karla_Ortiz_com.jpeg\" loading=\"lazy\" class=\"Image_root__XxsOp Image_lazy__hYWHV ArticleInlineImagePicture_image__I79fR\"   src=\"https:\/\/www.newsbeep.com\/ca\/wp-content\/uploads\/2026\/01\/1768047665_390_original.jpg\" width=\"655\" height=\"1074\"\/><\/p>\n<p>Source: Karla Ortiz<\/p>\n<p>Original artwork by Karla Ortiz (The Death I Bring, 
2016, graphite)<\/p>\n<p><img decoding=\"async\" alt=\"Karla_Ortiz_from_stable_diffusion.png\" loading=\"lazy\" class=\"Image_root__XxsOp Image_lazy__hYWHV ArticleInlineImagePicture_image__I79fR\"  src=\"https:\/\/www.newsbeep.com\/ca\/wp-content\/uploads\/2026\/01\/1768047665_797_original.png\" width=\"604\" height=\"981\"\/><\/p>\n<p>Source: United States District Court,\u00a0 Northern District of California<\/p>\n<p>Output from Stability&#8217;s Reimagine XL product (based on Stable Diffusion XL)<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Above is another pair of images taken from a lawsuit against Stability AI and other companies. On the left is an original work by Karla Ortiz, and on the right is a variation from Stable Diffusion. Here, the image is a bit further from the original. Some elements have changed. Instead of compressing at the pixel level, the algorithm appears to be copying and manipulating objects from multiple images, while maintaining a degree of visual continuity.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">As companies explain it, AI algorithms extract \u201cconcepts\u201d from training data and learn to make original work. But the image on the right is not a product of concepts alone. It\u2019s not a generic image of, say, \u201can angel with birds.\u201d It\u2019s difficult to pinpoint why any AI model makes any specific mark in an image, but we can reasonably assume that Stable Diffusion can render the image on the right partly because it has stored visual elements from the image on the left. It isn\u2019t collaging in the physical cut-and-paste sense, but it also isn\u2019t learning in the human sense the word implies. 
The model has no senses or conscious experience through which to make its own aesthetic judgments.<\/p>\n<p class=\"ArticleParagraph_root__4mszW ArticleParagraph_dropcap__uIVzg\" data-flatplan-paragraph=\"true\" data-flatplan-dropcap=\"true\">Google has <a data-event-element=\"inline link\" href=\"https:\/\/www.regulations.gov\/comment\/COLC-2023-0006-9003\" rel=\"noopener noreferrer nofollow\" target=\"_blank\">written<\/a> that LLMs store not copies of their training data but rather the \u201cpatterns in human language.\u201d This is true on the surface but misleading once you dig into it. As has been widely <a data-event-element=\"inline link\" href=\"https:\/\/huggingface.co\/learn\/llm-course\/en\/chapter2\/4\" rel=\"noopener noreferrer nofollow\" target=\"_blank\">documented<\/a>, when a company uses a book to develop an AI model, it splits the book\u2019s text into tokens or word fragments. For example, the phrase hello, my friend might be represented by the tokens he, llo, my, fri, and end. Some tokens are actual words; some are just groups of letters, spaces, and punctuation. The model stores these tokens and the contexts in which they appear in books. The resulting LLM is essentially a huge database of contexts and the tokens that are most likely to appear next.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">The model can be visualized as a map. 
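A toy version of that map, with invented contexts and probabilities (illustrative only, not taken from any real model), shows the mechanics: look up the current context, take the highest-probability next token, append it, and repeat.

```python
# A toy "language map": each context points to candidate next tokens
# with probabilities. A real model derives these values from its
# training text; the numbers here are invented for illustration.
token_map = {
    "Mr. and": {" Mrs.": 0.92, " Ms.": 0.05, " Dr.": 0.03},
    "Mr. and Mrs.": {" Dursley": 0.60, " Smith": 0.25, " Jones": 0.15},
}

def next_token(context: str) -> str:
    """Pick the highest-probability continuation for a known context."""
    candidates = token_map[context]
    return max(candidates, key=candidates.get)

# Walk the map, making the most probable choice at each step.
context = "Mr. and"
for _ in range(2):
    context += next_token(context)

print(context)  # -> Mr. and Mrs. Dursley
```

A production model computes these probabilities on the fly from billions of parameters rather than storing an explicit table, but the walk itself, one most-likely token at a time, is the same.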
Here\u2019s an example, with the actual most-likely tokens from Meta\u2019s Llama-3.1-70B:<\/p>\n<p><img decoding=\"async\" alt=\"flow chart \" loading=\"lazy\" class=\"Image_root__XxsOp Image_lazy__hYWHV ArticleInlineImagePicture_image__I79fR\"  src=\"https:\/\/www.newsbeep.com\/ca\/wp-content\/uploads\/2026\/01\/1768047665_590_original.png\" width=\"665\" height=\"467\"\/>Source: The Atlantic \/ Llama<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">When an LLM \u201cwrites\u201d a sentence, it walks a path through this forest of possible token sequences, making a high-probability choice at each step. Google\u2019s description is misleading because the next-token predictions don\u2019t come from some vague entity such as \u201chuman language\u201d but from the particular books, articles, and other texts that the model has scanned.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">By default, models will sometimes diverge from the most probable next token. This behavior is often framed by AI companies as a way of making the models more \u201ccreative,\u201d but it also has the benefit of concealing copies of training text.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Sometimes the language map is detailed enough that it contains exact copies of whole books and articles. This past summer, <a data-event-element=\"inline link\" href=\"http:\/\/arxiv.org\/abs\/2505.12546v3\" rel=\"noopener noreferrer nofollow\" target=\"_blank\">a study<\/a> of several LLMs found that Meta\u2019s Llama 3.1-70B model can, like Claude, effectively reproduce the full text of Harry Potter and the Sorcerer\u2019s Stone. The researchers gave the model just the book\u2019s first few tokens, \u201cMr. and Mrs. 
D.\u201d In Llama\u2019s internal language map, the text most likely to follow was: \u201cursley, of number four, Privet Drive, were proud to say that they were perfectly normal, thank you very much.\u201d This is precisely the book\u2019s first sentence. As the researchers repeatedly fed the model\u2019s output back in, Llama continued in this vein until it had produced the entire book, omitting just a few short sentences.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Using this technique, the researchers also showed that Llama had losslessly compressed large portions of other works, such as Ta-Nehisi Coates\u2019s famous Atlantic essay \u201c<a data-event-element=\"inline link\" href=\"https:\/\/www.theatlantic.com\/magazine\/archive\/2014\/06\/the-case-for-reparations\/361631\/\" rel=\"noopener noreferrer nofollow\" target=\"_blank\">The Case for Reparations<\/a>.\u201d When prompted with the essay\u2019s first sentence, the model produced more than 10,000 words, or two-thirds of the essay, verbatim. Large extractions also appear to be possible from Llama 3.1-70B for George R. R. Martin\u2019s A Game of Thrones, Toni Morrison\u2019s Beloved, and others.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">The Stanford and Yale researchers also showed this week that a model\u2019s output can paraphrase a book rather than duplicate it exactly. For example, where A Game of Thrones reads \u201cJon glimpsed a pale shape moving through the trees,\u201d the researchers found that GPT-4.1 produced \u201cSomething moved, just at the edge of sight\u2014a pale shape, slipping between the trunks.\u201d As in the Stable Diffusion example above, the model\u2019s output is extremely similar to a specific original work.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">This isn\u2019t the only research to demonstrate the casual plagiarism of AI models. 
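The extraction procedure used in these studies (prompt with a book's opening tokens, take the most probable continuation, feed the result back in, repeat) is ordinary greedy decoding. A minimal sketch, with a hypothetical `toy_model` standing in for a real LLM's next-token function:

```python
def extract(next_token_fn, prompt: str, max_steps: int = 1_000_000) -> str:
    """Greedy decoding: repeatedly append the single most probable next
    token. If the source text is memorized, this walk reproduces it
    verbatim."""
    text = prompt
    for _ in range(max_steps):
        tok = next_token_fn(text)
        if tok is None:  # model signals the end of the text
            break
        text += tok
    return text

# Stand-in "model" that has memorized one sentence and emits it
# character by character; a real LLM would emit multi-letter tokens.
MEMORIZED = ("Mr. and Mrs. Dursley, of number four, Privet Drive, "
             "were proud to say that they were perfectly normal.")

def toy_model(text: str):
    if MEMORIZED.startswith(text) and len(text) < len(MEMORIZED):
        return MEMORIZED[len(text)]
    return None

print(extract(toy_model, "Mr. and Mrs. D"))  # -> the full memorized sentence
```

The loop itself is nothing exotic; what the studies demonstrate is that, for some books, real models behave enough like `toy_model` for this walk to recover the text.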
\u201cOn average, 8\u201315% of the text generated by LLMs\u201d also exists on the web, in exactly that same form, according to <a data-event-element=\"inline link\" href=\"http:\/\/arxiv.org\/abs\/2411.10242\" rel=\"noopener noreferrer nofollow\" target=\"_blank\">one study<\/a>. Chatbots are routinely breaching the ethical standards that humans are normally held to.<\/p>\n<p class=\"ArticleParagraph_root__4mszW ArticleParagraph_dropcap__uIVzg\" data-flatplan-paragraph=\"true\" data-flatplan-dropcap=\"true\">Memorization could have legal consequences in at least two ways. For one, if memorization is unavoidable, then AI developers will have to somehow prevent users from accessing memorized content, as law scholars have <a data-event-element=\"inline link\" href=\"https:\/\/scholarship.kentlaw.iit.edu\/cklawreview\/vol100\/iss1\/9\" rel=\"noopener noreferrer nofollow\" target=\"_blank\">written<\/a>. Indeed, at least one court has already <a data-event-element=\"inline link\" href=\"https:\/\/www.courtlistener.com\/docket\/68889092\/291\/concord-music-group-inc-v-anthropic-pbc\/\" rel=\"noopener noreferrer nofollow\" target=\"_blank\">required<\/a> this. But existing techniques are easy to circumvent. 
For example, 404 Media has <a data-event-element=\"inline link\" href=\"https:\/\/www.404media.co\/openai-cant-fix-soras-copyright-infringement-problem-because-it-was-built-with-stolen-content\/\" rel=\"noopener noreferrer nofollow\" target=\"_blank\">reported<\/a> that OpenAI\u2019s Sora 2 would not comply with a request to generate video of a popular video game called Animal Crossing but would generate a video if the game\u2019s title was given as \u201c\u2018crossing aminal\u2019 [sic] 2017.\u201d If companies can\u2019t guarantee that their models will never infringe on a writer\u2019s or artist\u2019s copyright, a court could require them to take the product off the market.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">A second reason that AI companies could be liable for copyright infringement is that a model itself could be considered an illegal copy. Mark Lemley, a Stanford law professor who has represented Stability AI and Meta in such lawsuits, told me he isn\u2019t sure whether it\u2019s accurate to say that a model \u201ccontains\u201d a copy of a book, or whether \u201cwe have a set of instructions that allows us to create a copy on the fly in response to a request.\u201d Even the latter is potentially problematic, but if judges decide that the former is true, then plaintiffs could seek the destruction of infringing copies. Which means that, in addition to fines, AI companies could in some cases face the <a data-event-element=\"inline link\" href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3654699\" rel=\"noopener noreferrer nofollow\" target=\"_blank\">possibility<\/a> of being legally compelled to retrain their models from scratch, with properly licensed material.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">In a lawsuit, The New York Times alleged that OpenAI\u2019s GPT-4 could reproduce dozens of Times articles nearly verbatim. 
OpenAI (which has a corporate partnership with The Atlantic) responded by <a data-event-element=\"inline link\" href=\"https:\/\/storage.courtlistener.com\/recap\/gov.uscourts.nysd.612697\/gov.uscourts.nysd.612697.52.0_1.pdf\" rel=\"noopener noreferrer nofollow\" target=\"_blank\">arguing<\/a> that the Times used \u201cdeceptive prompts\u201d that violated the company\u2019s terms of service and prompted the model with sections from each of those articles. \u201cNormal people do not use OpenAI\u2019s products in this way,\u201d the company wrote, and even claimed \u201cthat the Times paid someone to hack OpenAI\u2019s products.\u201d The company has also <a data-event-element=\"inline link\" href=\"https:\/\/openai.com\/index\/openai-and-journalism\/\" rel=\"noopener noreferrer nofollow\" target=\"_blank\">called<\/a> this type of reproduction \u201ca rare bug that we are working to drive to zero.\u201d<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">But the emerging research is making clear that the ability to plagiarize is inherent to GPT-4 and all other major LLMs. None of the researchers I spoke with thought that the underlying phenomenon, memorization, is unusual or could be eradicated.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">In copyright lawsuits, the learning metaphor lets companies make misleading comparisons between chatbots and humans. 
At least one judge has repeated these comparisons, <a data-event-element=\"inline link\" href=\"https:\/\/storage.courtlistener.com\/recap\/gov.uscourts.cand.434709\/gov.uscourts.cand.434709.231.0_2.pdf\" rel=\"noopener noreferrer nofollow\" target=\"_blank\">likening<\/a> an AI company\u2019s theft and scanning of books to \u201ctraining schoolchildren to write well.\u201d There have also been two lawsuits in which <a data-event-element=\"inline link\" href=\"https:\/\/www.theatlantic.com\/technology\/archive\/2025\/07\/anthropic-meta-ai-rulings\/683526\/\" rel=\"noopener noreferrer nofollow\" target=\"_blank\">judges ruled<\/a> that training an LLM on copyrighted books was fair use, but both rulings were flawed in their handling of memorization: One judge <a data-event-element=\"inline link\" href=\"https:\/\/storage.courtlistener.com\/recap\/gov.uscourts.cand.415175\/gov.uscourts.cand.415175.598.0_1.pdf\" rel=\"noopener noreferrer nofollow\" target=\"_blank\">cited<\/a> expert testimony that showed that Llama could reproduce no more than 50 tokens from the plaintiffs\u2019 books, though <a data-event-element=\"inline link\" href=\"http:\/\/arxiv.org\/abs\/2505.12546v1\" rel=\"noopener noreferrer nofollow\" target=\"_blank\">research<\/a> has since been published that proves otherwise. The other judge acknowledged that Claude had memorized significant portions of books but <a data-event-element=\"inline link\" href=\"https:\/\/storage.courtlistener.com\/recap\/gov.uscourts.cand.434709\/gov.uscourts.cand.434709.231.0_2.pdf\" rel=\"noopener noreferrer nofollow\" target=\"_blank\">said<\/a> that the plaintiffs had failed to allege that this was a problem.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Research on how AI models reuse their training content is still primitive, partly because AI companies are motivated to keep it that way. 
Several of the researchers I spoke with while reporting this article told me about memorization research that has been censored and impeded by company lawyers. None of them would talk about these instances on the record, fearing retaliation from companies.<\/p>\n<p class=\"ArticleParagraph_root__4mszW\" data-flatplan-paragraph=\"true\">Meanwhile, OpenAI CEO Sam Altman has <a data-event-element=\"inline link\" href=\"https:\/\/youtu.be\/tn0XpTAD_8Q?t=1960\" rel=\"noopener noreferrer nofollow\" target=\"_blank\">defended<\/a> the technology\u2019s \u201cright to learn\u201d from books and articles, \u201clike a human can.\u201d This deceptive, feel-good idea prevents the public discussion we need to have about how AI companies are using the creative and intellectual works upon which they are utterly dependent.<\/p>\n","protected":false},"excerpt":{"rendered":"Editor\u2019s note: This work is part of AI Watchdog, The Atlantic\u2019s ongoing investigation into the generative-AI industry. On&hellip;\n","protected":false},"author":2,"featured_media":400475,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[62,276,277,49,48,61],"class_list":{"0":"post-400474","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-ca","12":"tag-canada","13":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts\/400474","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/c
omments?post=400474"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/posts\/400474\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/media\/400475"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/media?parent=400474"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/categories?post=400474"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ca\/wp-json\/wp\/v2\/tags?post=400474"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}