{"id":163375,"date":"2025-11-28T00:32:10","date_gmt":"2025-11-28T00:32:10","guid":{"rendered":"https:\/\/www.newsbeep.com\/ie\/163375\/"},"modified":"2025-11-28T00:32:10","modified_gmt":"2025-11-28T00:32:10","slug":"a-trillion-dollars-is-a-terrible-thing-to-waste","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/ie\/163375\/","title":{"rendered":"A trillion dollars is a terrible thing to waste"},"content":{"rendered":"<p>Breaking news from famed machine learning researcher Ilya Sutskever:<\/p>\n<p><a target=\"_blank\" href=\"https:\/\/substackcdn.com\/image\/fetch\/$s_!XMEf!,f_auto,q_auto:good,fl_progressive:steep\/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F819a7586-0a0a-4713-89e8-4255ca710295_1253x1147.png\" data-component-name=\"Image2ToDOM\" rel=\"nofollow noopener\" class=\"image-link image2 is-viewable-img can-restack\"><img decoding=\"async\" src=\"https:\/\/www.newsbeep.com\/ie\/wp-content\/uploads\/2025\/11\/https:\/\/substack-post-media.s3.amazonaws.com\/public\/images\/819a7586-0a0a-4713-89e8-4255ca710295_1253.jpeg\" width=\"1253\" height=\"1147\" data-attrs=\"{&quot;src&quot;:&quot;https:\/\/substack-post-media.s3.amazonaws.com\/public\/images\/819a7586-0a0a-4713-89e8-4255ca710295_1253x1147.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1147,&quot;width&quot;:1253,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1604435,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image\/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https:\/\/garymarcus.substack.com\/i\/180117740?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F819a7586-0a0a-4713-89e8-4255ca710295_1253x1147.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}\" alt=\"\"   fetchpriority=\"high\" 
class=\"sizing-normal\"\/><\/a><\/p>\n<p>Below is another, somewhat more technical summary of a just-released <a href=\"https:\/\/www.dwarkesh.com\/p\/ilya-sutskever-2\" rel=\"nofollow noopener\" target=\"_blank\">interview<\/a> of his that is making waves. Basically, Sutskever is saying that scaling (achieving improvements in AI through more chips and more data) is flattening out and that we need new techniques; he is even open to <a href=\"https:\/\/open.substack.com\/pub\/garymarcus\/p\/how-o3-and-grok-4-accidentally-vindicated?r=8tdk6&amp;utm_campaign=post&amp;utm_medium=web&amp;showWelcomeOnShare=false\" rel=\"nofollow noopener\" target=\"_blank\">neurosymbolic<\/a> techniques and to innateness. He is clearly not forecasting a bright future for pure large language models.<\/p>\n<p><a target=\"_blank\" href=\"https:\/\/substackcdn.com\/image\/fetch\/$s_!R9kz!,f_auto,q_auto:good,fl_progressive:steep\/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F604596b2-f33f-4364-bfb3-c787ffac18df_1349x1505.jpeg\" data-component-name=\"Image2ToDOM\" rel=\"nofollow noopener\" class=\"image-link image2 is-viewable-img can-restack\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/www.newsbeep.com\/ie\/wp-content\/uploads\/2025\/11\/https:\/\/substack-post-media.s3.amazonaws.com\/public\/images\/604596b2-f33f-4364-bfb3-c787ffac18df_1349.jpeg\" width=\"1349\" height=\"1505\" 
data-attrs=\"{&quot;src&quot;:&quot;https:\/\/substack-post-media.s3.amazonaws.com\/public\/images\/604596b2-f33f-4364-bfb3-c787ffac18df_1349x1505.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1505,&quot;width&quot;:1349,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:286599,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image\/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https:\/\/garymarcus.substack.com\/i\/180117740?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F604596b2-f33f-4364-bfb3-c787ffac18df_1349x1505.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}\" alt=\"\"   class=\"sizing-normal\"\/><\/a><\/p>\n<p>Sutskever also said that \u201cThe thing which I think is the most fundamental is that these models somehow just generalize dramatically worse than people. And it\u2019s super obvious. That seems like a very fundamental thing.\u201d<\/p>\n<p>Some of this may come as news to a lot of the machine learning community; it might be surprising coming from Sutskever, who is an icon of deep learning, having worked, inter alia, on the critical 2012 paper that showed how much GPUs could improve deep learning, the foundation of LLMs, in practice. He is also a co-founder of OpenAI, considered by many to have been their leading researcher until he departed after a failed effort to oust Sam Altman. <\/p>\n<p>But none of what Sutskever said should actually come as a surprise, especially not to readers of this Substack, or to anyone who followed me over the years. 
Essentially all of it was in my pre-GPT 2018 article \u201c<a href=\"https:\/\/arxiv.org\/pdf\/1801.00631\" rel=\"nofollow noopener\" target=\"_blank\">Deep learning: A Critical Appraisal<\/a>\u201d, which argued for neurosymbolic approaches to complement neural networks (as Sutskever now does) and for more <a href=\"https:\/\/arxiv.org\/abs\/1801.05667\" rel=\"nofollow noopener\" target=\"_blank\">innate<\/a> (i.e., built-in, rather than learned) constraints (what Sutskever calls \u201cnew inductive constraints\u201d), and in my 2022 \u201c<a href=\"https:\/\/nautil.us\/deep-learning-is-hitting-a-wall-238440\/\" rel=\"nofollow noopener\" target=\"_blank\">Deep learning is hitting a wall<\/a>\u201d evaluation of LLMs, which explicitly argued that the Kaplan scaling laws would eventually reach a point of diminishing returns (as Sutskever just did), and that problems with hallucinations, truth, generalization, and reasoning would persist even as models scaled, much of which Sutskever just acknowledged. <\/p>\n<p>Subbarao Kambhampati, meanwhile, has been arguing for years about <a href=\"https:\/\/cotopaxi.eas.asu.edu\" rel=\"nofollow noopener\" target=\"_blank\">limits on planning with LLMs<\/a>. Emily Bender has been saying for ages that an excess focus on LLMs has been \u201csucking the oxygen from the room\u201d relative to other research approaches. The <a href=\"https:\/\/open.substack.com\/pub\/garymarcus\/p\/a-knockout-blow-for-llms?r=8tdk6&amp;utm_campaign=post&amp;utm_medium=web&amp;showWelcomeOnShare=false\" rel=\"nofollow noopener\" target=\"_blank\">unfairly dismissed Apple reasoning paper<\/a> laid bare the generalization issues; another paper, \u201c<a href=\"https:\/\/arxiv.org\/abs\/2508.01191v3\" rel=\"nofollow noopener\" target=\"_blank\">Is Chain-of-Thought Reasoning of LLMs a Mirage? A Data Distribution Lens<\/a>\u201d, put a further nail in the LLM reasoning and generalization coffin. 
<\/p>\n<p>Alexia Jolicoeur-Martineau, a machine learning researcher at Samsung, summed up the situation well on X on Tuesday, following the release of Sutskever\u2019s interview:<\/p>\n<p><a target=\"_blank\" href=\"https:\/\/substackcdn.com\/image\/fetch\/$s_!zSvc!,f_auto,q_auto:good,fl_progressive:steep\/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e70d67c-7675-4986-a1cc-dd57c78bcbc4_1263x367.png\" data-component-name=\"Image2ToDOM\" rel=\"nofollow noopener\" class=\"image-link image2 is-viewable-img can-restack\"><img decoding=\"async\" src=\"https:\/\/www.newsbeep.com\/ie\/wp-content\/uploads\/2025\/11\/https:\/\/substack-post-media.s3.amazonaws.com\/public\/images\/7e70d67c-7675-4986-a1cc-dd57c78bcbc4_1263.jpeg\" width=\"1263\" height=\"367\" data-attrs=\"{&quot;src&quot;:&quot;https:\/\/substack-post-media.s3.amazonaws.com\/public\/images\/7e70d67c-7675-4986-a1cc-dd57c78bcbc4_1263x367.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:367,&quot;width&quot;:1263,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:75808,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:&quot;&quot;,&quot;type&quot;:&quot;image\/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https:\/\/garymarcus.substack.com\/i\/180117740?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7e70d67c-7675-4986-a1cc-dd57c78bcbc4_1263x367.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}\" alt=\"\" title=\"\"   loading=\"lazy\" class=\"sizing-normal\"\/><\/a><\/p>\n<p>\u00a7<\/p>\n<p>Of course it ain\u2019t over til it\u2019s over. 
Maybe pure scaling (adding more data and compute without fundamental architectural changes) will yet somehow magically solve what researchers such as Sutskever, LeCun, Sutton, Chollet, and I no longer think it can. <\/p>\n<p>And investors may be loath to kick the habit. As Phil Libin put it presciently last year, scaling\u2014not the generation of new ideas\u2014is what investors know best:<\/p>\n<p><a target=\"_blank\" href=\"https:\/\/substackcdn.com\/image\/fetch\/$s_!QFgM!,f_auto,q_auto:good,fl_progressive:steep\/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff01c2271-6378-4e97-a0bc-70b79df4acb1_1195x1503.jpeg\" data-component-name=\"Image2ToDOM\" rel=\"nofollow noopener\" class=\"image-link image2 is-viewable-img can-restack\"><img decoding=\"async\" src=\"https:\/\/www.newsbeep.com\/ie\/wp-content\/uploads\/2025\/11\/https:\/\/substack-post-media.s3.amazonaws.com\/public\/images\/f01c2271-6378-4e97-a0bc-70b79df4acb1_1195.jpeg\" width=\"1195\" height=\"1503\" data-attrs=\"{&quot;src&quot;:&quot;https:\/\/substack-post-media.s3.amazonaws.com\/public\/images\/f01c2271-6378-4e97-a0bc-70b79df4acb1_1195x1503.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1503,&quot;width&quot;:1195,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:337820,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image\/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https:\/\/garymarcus.substack.com\/i\/180117740?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff01c2271-6378-4e97-a0bc-70b79df4acb1_1195x1503.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}\" alt=\"\"   loading=\"lazy\" class=\"sizing-normal\"\/><\/a><\/p>\n<p>And it\u2019s not just that venture capitalists know more about scaling businesses than about inventing new ideas, 
it\u2019s that for the venture capitalists who have driven so much of the field, scaling, even if it fails, has been a great run: a way to take their 2% management fee investing someone else\u2019s money on plausible-ish-sounding bets that were truly massive, which makes them rich no matter how things turn out. To be sure, the VCs get even richer still if the investments pan out, but they are covered either way; even if it all falls apart, the venture capitalists themselves will become wealthy from the management fees alone. (It is their clients, such as pension funds, that will take the hit.) So venture capitalists may continue to support LLM mania, at least for a while.<\/p>\n<p>But let\u2019s suppose for the sake of argument that Sutskever and the rest of us are correct: that AGI will never emerge straight from LLMs, that to a certain extent they have run their course, and that we do in fact need new ideas. <\/p>\n<p>The question then becomes, what did it cost the field and society that it took so long for the machine learning mainstream to figure out what some of us, including virtually the entire neurosymbolic AI community, had been saying for years?<\/p>\n<p>\u00a7<\/p>\n<p>The first and most obvious answer is money, which I estimate, back of the envelope, at roughly a trillion dollars, much of it spent on Nvidia chips and massive salaries. (Zuckerberg has apparently hired some machine learning experts at salaries of $100,000,000 a year.) <\/p>\n<p>According to Ed Zitron\u2019s calculations, \u201c<a href=\"https:\/\/www.wheresyoured.at\/big-tech-2tr\/\" rel=\"nofollow noopener\" target=\"_blank\">Big Tech Needs $2 Trillion In AI Revenue By 2030 or They Wasted Their Capex<\/a>\u201d. 
If Sutskever and I are right about the limits of LLMs, the only way to get to that $2T is to invent new ideas.<\/p>\n<p>If the definition of insanity is doing the same thing over and over and expecting different results, trillion-dollar investments in ever more expensive experiments aiming to reach AGI may be delusional to the highest degree.<\/p>\n<p>To a first approximation, all the big tech companies, from OpenAI to Google to Meta to xAI to Anthropic to several Chinese companies, keep doing the same experiment over and over: building ever larger LLMs in hopes of reaching AGI. <\/p>\n<p>It has never worked. Each new, bigger, more expensive model ekes out measurable improvements, but returns appear to be diminishing (that\u2019s what Sutskever is saying about <a href=\"https:\/\/arxiv.org\/abs\/2001.08361\" rel=\"nofollow noopener\" target=\"_blank\">the Kaplan laws<\/a>), and none of these experiments has solved core issues around hallucinations, generalization, planning, and reasoning, as Sutskever too now recognizes. <\/p>\n<p>But it\u2019s not just that a trillion dollars or more might go down the drain; there might also be considerable collateral damage to the rest of society, both economic and otherwise (e.g., in terms of how <a href=\"https:\/\/nymag.com\/intelligencer\/article\/openai-chatgpt-ai-cheating-education-college-students-school.html\" rel=\"nofollow noopener\" target=\"_blank\">LLMs have undermined college education<\/a>). As Rog\u00e9 Karma put it in a recent article in The Atlantic, \u201c<a href=\"https:\/\/www.theatlantic.com\/economy\/archive\/2025\/09\/ai-bubble-us-economy\/684128\/\" rel=\"nofollow noopener\" target=\"_blank\">The entire U.S. economy is being propped up by the promise of productivity gains that seem very far from materializing.<\/a>\u201d <\/p>\n<p>To be fair, nobody knows for sure what the blast radius would be. If LLM-powered AI failed to meet expectations and lost value, who would take the hit? 
Would it just be the \u201climited partners\u201d, like pension funds, who entrusted their money to VC firms? Or might the consequences be much broader? Might banks go down with the ship in a 2008-style liquidity crisis, possibly forcing taxpayers to bail them out? In the worst case, the impact of a deflated AI bubble could be immense. (Consumer spending, much of it fueled by wealthy people who could take a hit on the stock market, might also drop, a recipe for recession.) <\/p>\n<p>Even the White House has admitted concerns about this. As the White House AI and Crypto Czar David Sacks himself put it earlier this week, referring to a Wall Street Journal analysis, \u201cAI-related investment accounts for half of GDP growth. A reversal [in that] would risk recession.\u201d <\/p>\n<p>Quoting from Karma\u2019s article in The Atlantic:<\/p>\n<p>That prosperity [that GenAI was supposed to deliver] has largely yet to materialize anywhere other than their share prices. (The exception is Nvidia, which provides the crucial inputs\u2014advanced chips\u2014that the rest of the Magnificent Seven are buying.) As The Wall Street Journal reports, Alphabet, Amazon, Meta, and Microsoft have seen their <a href=\"https:\/\/www.wsj.com\/economy\/the-ai-booms-hidden-risk-to-the-economy-731b00d6\" rel=\"nofollow noopener\" target=\"_blank\">free cash flow<\/a> decline by 30 percent over the past two years. By one <a href=\"https:\/\/www.wheresyoured.at\/the-haters-gui\/\" rel=\"nofollow noopener\" target=\"_blank\">estimate<\/a>, Meta, Amazon, Microsoft, Google, and Tesla will by the end of this year have collectively spent $560 billion on AI-related capital expenditures since the beginning of 2024 and have brought in just $35 billion in AI-related revenue. 
OpenAI and Anthropic are <a href=\"https:\/\/www.reuters.com\/business\/anthropic-hits-3-billion-annualized-revenue-business-demand-ai-2025-05-30\/\" rel=\"nofollow noopener\" target=\"_blank\">bringing<\/a> in lots of revenue and are growing fast, but they are still <a href=\"https:\/\/www.reuters.com\/technology\/artificial-intelligence\/openai-does-not-expect-be-cash-flow-positive-until-2029-bloomberg-news-reports-2025-03-26\/?utm_source=chatgpt.com\" rel=\"nofollow noopener\" target=\"_blank\">nowhere<\/a> <a href=\"https:\/\/ca.finance.yahoo.com\/news\/anthropic-projects-soaring-growth-34-002016322.html?utm_source=chatgpt.com\" rel=\"nofollow noopener\" target=\"_blank\">near<\/a> profitable. Their valuations\u2014roughly <a href=\"https:\/\/www.nytimes.com\/2025\/08\/01\/business\/dealbook\/openai-ai-mega-funding-deal.html\" rel=\"nofollow noopener\" target=\"_blank\">$300 billion<\/a> and <a href=\"https:\/\/www.reuters.com\/business\/anthropics-valuation-more-than-doubles-183-billion-after-13-billion-fundraise-2025-09-02\/\" rel=\"nofollow noopener\" target=\"_blank\">$183 billion<\/a>, respectively, and <a href=\"https:\/\/www.nytimes.com\/2025\/08\/19\/technology\/openai-chatgpt-stock-sale-valuation.html\" rel=\"nofollow noopener\" target=\"_blank\">rising<\/a>\u2014are many multiples higher than their current revenues. (OpenAI <a href=\"https:\/\/www.bloomberg.com\/news\/articles\/2025-03-26\/openai-expects-revenue-will-triple-to-12-7-billion-this-year\" rel=\"nofollow noopener\" target=\"_blank\">projects<\/a> about $13 billion in revenues this year; <a href=\"https:\/\/www.theinformation.com\/articles\/anthropic-projects-soaring-growth-to-34-5-billion-in-2027-revenue\" rel=\"nofollow noopener\" target=\"_blank\">Anthropic<\/a>, $2 billion to $4 billion.) Investors are betting heavily on the prospect that all of this spending will soon generate record-breaking profits. 
If that belief collapses, however, investors might start to sell en masse, causing the market to experience a large and painful correction.<\/p>\n<p>\u2026<\/p>\n<p>The dot-com crash was bad, but it did not trigger a crisis. An AI-bubble crash could be different. AI-related investments have already <a href=\"https:\/\/paulkedrosky.com\/honey-ai-capex-ate-the-economy\/\" rel=\"nofollow noopener\" target=\"_blank\">surpassed<\/a> the level that telecom hit at the peak of the dot-com boom as a share of the economy. In the first half of this year, business spending on AI added more to GDP growth than all consumer spending combined. Many experts believe that a major reason the U.S. economy has been able to weather tariffs and mass deportations without a recession is because all of this AI spending is acting, in the <a href=\"https:\/\/paulkedrosky.com\/honey-ai-capex-ate-the-economy\/\" rel=\"nofollow noopener\" target=\"_blank\">words<\/a> of one economist, as a \u201cmassive private sector stimulus program.\u201d An AI crash could lead broadly to less spending, fewer jobs, and slower growth, potentially dragging the economy into a recession. The economist Noah Smith <a href=\"https:\/\/www.noahpinion.blog\/p\/will-data-centers-crash-the-economy\" rel=\"nofollow noopener\" target=\"_blank\">argues<\/a> that it could even lead to a financial crisis if the unregulated \u201cprivate credit\u201d loans funding much of the industry\u2019s expansion all go bust at once.<\/p>\n<p>The whole thing looks incredibly fragile. <\/p>\n<p>\u00a7<\/p>\n<p>To put it bluntly, the world has gone \u201call in\u201d on LLMs, but, as Sutskever\u2019s interview highlights, there are many reasons to doubt that LLMs will ever deliver the rewards that many people expected. <\/p>\n<p>The sad part is that most of the reasons have been known \u2013 though not widely accepted \u2013 for a very long time.  It all could have been avoided. 
But the machine learning community has arrogantly excluded other voices, and indeed whole other fields, like the cognitive sciences. And now we all may be about to pay the price. <\/p>\n<p>An old saying about such follies is that \u201csix months in the lab can save you an afternoon in the library\u201d; here we may have wasted a trillion dollars and several years to rediscover what cognitive science already knew.<\/p>\n<p>A trillion dollars is a terrible amount of money to have perhaps wasted. If the blast radius is wider, the cost could be a lot more. It is all starting to feel like a tale straight out of Greek tragedy, an avoidable mixture of arrogance and power that just might wind up taking down the economy. <\/p>\n","protected":false},"excerpt":{"rendered":"Breaking news from famed machine learning researcher Ilya Sutskever: Below is another summary of a just-released interview of&hellip;\n","protected":false},"author":2,"featured_media":163376,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[220,218,219,61,60,80],"class_list":{"0":"post-163375","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-ie","12":"tag-ireland","13":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/posts\/163375","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/comments?post=163375"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp
\/v2\/posts\/163375\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/media\/163376"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/media?parent=163375"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/categories?post=163375"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/ie\/wp-json\/wp\/v2\/tags?post=163375"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}