{"id":377339,"date":"2025-12-29T03:23:21","date_gmt":"2025-12-29T03:23:21","guid":{"rendered":"https:\/\/www.newsbeep.com\/au\/377339\/"},"modified":"2025-12-29T03:23:21","modified_gmt":"2025-12-29T03:23:21","slug":"we-might-finally-know-what-will-burst-the-ai-bubble","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/au\/377339\/","title":{"rendered":"We might finally know what will burst the AI bubble"},"content":{"rendered":"<p>A chill seems to be setting in over Wall Street. Tech billionaire Peter Thiel\u2019s hedge fund recently sold its entire $100m (\u00a376m) stake in Nvidia, the world\u2019s most valuable chip company at the heart of the <a href=\"https:\/\/www.sciencefocus.com\/future-technology\/artificial-intelligence-ai\/\" rel=\"nofollow noopener\" target=\"_blank\">artificial intelligence<\/a> (AI) boom.<\/p>\n<p>Meanwhile, Michael Burry \u2013 famed for sounding the alarm before the 2008 financial crisis and Christian Bale\u2019s depiction of him in the movie The Big Short \u2013 bet almost $200m (\u00a3152m) against the chipmaker.\u00a0<\/p>\n<p>Why would two investors of their calibre turn against a company whose share price has risen almost 15-fold in 5 years?<\/p>\n<p>Partly because this isn\u2019t just an Nvidia issue. The company makes the processors on which much of the AI industry rests \u2013 an industry already worth trillions of dollars that is almost single-handedly driving US economic growth.\u00a0<\/p>\n<p>But something in that growth story may be starting to fray. 
Many researchers and investors now suspect that AI\u2019s astonishing momentum rests on a technical assumption that may not hold forever.<\/p>\n<p>In other words, an AI bubble may be forming \u2013 and could easily be popped by a fatal flaw hiding in plain sight.<\/p>\n<p>The big bet: bigger models = better AI<\/p>\n<p>To understand what\u2019s going on in the global economy right now, you first need to understand what AI, as we know it, actually is.\u00a0<\/p>\n<p>The current boom in AI technology has ridden on a wave known as \u2018deep learning\u2019, which is an approach to creating intelligent computer systems using \u2018artificial neural networks\u2019.\u00a0<\/p>\n<p>Neural networks aren\u2019t new: the idea dates back to 1944, but only recently have they become big and fast enough to work well.<\/p>\n<p>These systems are made up of interconnected nodes (artificial \u2018neurons\u2019) that process information and pass it to other nodes. Deep learning models stack multiple layers of these nodes, each layer extracting increasingly complex features from the data. 
The \u2018deep\u2019 refers to having many layers.<\/p>\n<p>Without getting too bogged down in details, what this produces is models that are very good approximators, learning to predict what things should look like based on the patterns in their training data.\u00a0<\/p>\n<p>Large language models (LLMs) are the type of deep learning model most people are now familiar with, powering chatbots like OpenAI\u2019s ChatGPT, Google\u2019s Gemini and Anthropic\u2019s Claude.\u00a0<\/p>\n<p>LLMs are trained on vast amounts of text so that they become very adept at predicting the next word in a sequence.\u00a0<\/p>\n<p>\u201cThey\u2019re sort of like autocomplete on your phone,\u201d says Gary Marcus, a leading voice in the AI sceptic community.\u00a0<\/p>\n<p>\u201cYou type something, and it guesses what\u2019s going to come next.\u00a0<\/p>\n<p>\u201cBasically, these are really sophisticated devices for making that prediction \u2013 looking at context, not just from the last few words like your phone might use, but using everything maybe in all the conversations that you&#8217;ve had going back some distance.\u201d\u00a0<\/p>\n<p>Over time, a simple mantra took hold in Silicon Valley: make the models bigger and they\u2019ll get better.<\/p>\n<p>There are three levers to pull to achieve this.<\/p>\n<p>First, increase the model size. This entails adding more layers or nodes so the system learns far more parameters (the internal variables that encode knowledge).\u00a0<br \/>\nSecond, increase the amount of training data. By feeding the model more examples, it can learn more patterns.\u00a0<br \/>\nThird, increase the amount of computing power, known in the industry as \u2018compute\u2019. This involves using more and faster chips during training, allowing the model to learn from the data more effectively.\u00a0<\/p>\n<p>Increasing these three factors in tandem led to what appeared to be a remarkably predictable rise in performance. 
Thus, the \u2018scaling laws\u2019 were born.<\/p>\n<p>Much like Moore\u2019s Law (which predicted that the number of transistors on a chip would double roughly every two years, enabling computers to shrink from room-sized to pocket-sized), AI companies assumed that simply scaling up models \u2013 making them larger, training on more data and using more compute \u2013 would deliver steady, almost guaranteed improvements in capability.\u00a0<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"1200\" height=\"788\" src=\"https:\/\/www.newsbeep.com\/au\/wp-content\/uploads\/2025\/12\/nvidia-chip.jpg\" alt=\"NVIDIA video chip on the motherboard.\" class=\"wp-image-209554\"\/>Nvidia is the world&#8217;s largest company, valued at more than $4.5 trillion. &#8211; Photo credit: Getty<\/p>\n<p>For several years, this held true. Applying these scaling laws turbo-charged AI development, and within half a decade, deep learning models went from quirky toys to systems that hundreds of millions of users rely on daily.<\/p>\n<p>OpenAI\u2019s successive GPT models are a prime example of the scaling mindset. GPT-3, released in 2020, contained <a href=\"https:\/\/medium.com\/@chudeemmanuel3\/gpt-3-5-and-gpt-4-comparison-47d837de2226\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">175 billion parameters<\/a>, making it by far the largest model of its time.\u00a0<\/p>\n<p>Its 2023 successor GPT-4 is estimated to be 10 times larger, at roughly <a href=\"https:\/\/explodingtopics.com\/blog\/gpt-parameters\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">1.8 trillion parameters<\/a>. Training data has exploded as well: GPT-4 was reportedly trained on an astonishing 13 trillion tokens of text (a <a href=\"https:\/\/help.openai.com\/en\/articles\/4936856-what-are-tokens-and-how-to-count-them\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">token is roughly 3\/4 of a word<\/a>). 
For comparison, the entire English Wikipedia contains only about 5 billion words \u2013 making it thousands of times smaller than GPT-4\u2019s training material.<\/p>\n<p>Other leading models from Anthropic, Google and Meta have all followed a similar pattern.\u00a0<\/p>\n<p>In theory, each 10-fold increase in model size and data was expected to yield new capabilities and better performance across tasks. And indeed, performance on many benchmarks has shot up.<\/p>\n<p>For instance, GPT-4 achieved<a href=\"https:\/\/openai.com\/index\/gpt-4-research\/\" rel=\"nofollow noopener\" target=\"_blank\"> a score of 84.6 per cent<\/a> on the Massive Multitask Language Understanding tests \u2013 a benchmark for AI systems that covers 57 topics \u2013 whereas GPT-3.5 scored 70 per cent.\u00a0<\/p>\n<p>That improvement closed the gap to human-level performance on many tasks. GPT-4, for example, could pass the bar exam and other professional tests that stumped its predecessors.<\/p>\n<p>This progress fueled grand claims that an artificial general intelligence (AGI) \u2013 an AI that could do all your work better than you, drive your car, book your holidays and even make scientific breakthroughs \u2013 was on the horizon, and justified sky-high valuations for AI startups and suppliers.<\/p>\n<p>The only problem? The scaling laws may not have been \u2018laws\u2019 at all.<\/p>\n<p>The 3 biggest limits of today\u2019s AI<\/p>\n<p>\u201cIf I told you that my baby weighed 9 pounds at birth, and 18 months later it had doubled in weight,\u201d Marcus posits, \u201cthat doesn\u2019t mean it\u2019s going to keep doubling and become a trillion-pound baby by the time it goes to college.\u201d\u00a0<\/p>\n<p>What he means is that while the scaling laws looked like a real relationship at the time, and delivered impressive results to boot, there was no empirical evidence that they would hold forever.\u00a0<\/p>\n<p>Cracks are now showing, with bigger models not yielding proportional gains. 
Models may be tens of times larger than they were a couple of years ago, but they\u2019re not proportionally smarter by most metrics.\u00a0<\/p>\n<p>All of this puts the AI frenzy in a different light. If simply throwing more data and computing power at the problem no longer yields dramatically better results, then the economic foundations of the AI boom start to wobble.<\/p>\n<p>\u201cThe thing about these systems is they\u2019re really just mimics \u2013 they don\u2019t have a deep understanding of what they\u2019re talking about,\u201d Marcus says.\u00a0<\/p>\n<p>At their core, as Marcus puts it, today\u2019s AI models are still \u201cgiant statistical machines\u201d that learn correlations, not true comprehension. They predict outputs based on patterns in their training data, not by reasoning about the world the way humans do.<\/p>\n<p>Unlike a calculator, which gives the right answer every time for the problems it\u2019s built for, a neural network can never be 100 per cent correct 100 per cent of the time. It works more like a human brain, making its best guess based on patterns it has seen before.<\/p>\n<p>This fundamental limitation leads to three well-known failures of AI models.\u00a0<\/p>\n<p>1. Hallucinations<\/p>\n<p>The over-generalisations or outright fabrications that even the latest state-of-the-art models produce are often euphemistically called \u2018hallucinations\u2019.<\/p>\n<p>Most of us have encountered these by now. The AI confidently invents facts, cites nonexistent research or asserts something completely false.\u00a0<\/p>\n<p>Marcus uses the example of his friend Harry Shearer, the actor behind the voice of Mr Burns in The Simpsons, to explain. Shearer once found an AI-generated biography claiming he was British, which is wrong, as a quick check of Wikipedia would show.\u00a0<\/p>\n<p>Why did the model say that? 
Possibly because many other voice actors or comedians it read about in that category were British, so the pattern-matching statistical machine guessed that Shearer was too.\u00a0<\/p>\n<p>Fundamentally, the AI had no concept of who Harry Shearer actually is, or even what an actor or Britain is \u2013 it just regurgitated a likely-seeming correlation from its training data.<\/p>\n<p>\u201cThey break everything into little pieces of information, and they learn the correlations between those bits of information,\u201d Marcus says. In other words, there are no hard facts in deep learning models, only connections \u2013 and there never will be.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"1200\" height=\"800\" src=\"https:\/\/www.newsbeep.com\/au\/wp-content\/uploads\/2025\/12\/calculator.jpg\" alt=\"A calculator.\" class=\"wp-image-209556\"\/>&#8220;Nobody would use a calculator that&#8217;s right 80 per cent of the time,&#8221; says Gary Marcus. &#8211; Photo credit: Getty<\/p>\n<p>Empirically, newer models do hallucinate less than older ones, but they still do it quite frequently.\u00a0<\/p>\n<p>A 2024 <a href=\"https:\/\/pubmed.ncbi.nlm.nih.gov\/38776130\/#:~:text=,001%29.%20Further%20analysis%20of\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">study<\/a> found ChatGPT-4 produced false information 28.6 per cent of the time, compared to a 39.6 per cent hallucination rate for GPT-3.5, in tests where factual accuracy was measured. According to <a href=\"https:\/\/openai.com\/index\/introducing-gpt-5\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">OpenAI<\/a>, the company has made \u201csignificant advances in reducing hallucinations\u201d in its latest model, GPT-5, but they\u2019re still commonplace.\u00a0<\/p>\n<p>In truth, no current AI model can be trusted to be consistently correct. 
We humans are still needed to sense-check what comes out: AI doesn\u2019t replace human expertise in a given subject, it necessitates it.\u00a0<\/p>\n<p>2. The \u2018outlier problem\u2019<\/p>\n<p>Hallucinations are one issue; another is what happens when these models encounter situations outside the distribution of their training data. If an AI sees something genuinely new or weird \u2013 something that wasn\u2019t well-represented in the billions of examples it ingested \u2013 it can completely break down.\u00a0<\/p>\n<p>Marcus calls this the \u2018outlier problem\u2019. He says, \u201cThere&#8217;s this infinite periphery around the centre of things that the systems haven&#8217;t been exposed to.\u201d<\/p>\n<p>Take self-driving cars. They can often reliably recognise other vehicles moving in familiar, orderly ways. But if they come across a lorry tipped on its side across two lanes \u2013 a shape they\u2019ve barely, if ever, seen in training data \u2013 the system may fail to register it as a hazard at all. It doesn\u2019t take much imagination to picture how costly such an error would be.\u00a0<\/p>\n<p>This is a major problem in terms of where the AI industry can go from here, and it&#8217;s why we haven\u2019t seen lone AI scientist models winning Nobel Prizes for novel discoveries yet. Today\u2019s deep learning models can remix human knowledge, but not extend it much beyond the frontier of what they\u2019ve seen.\u00a0<\/p>\n<p>3. Data limits<\/p>\n<p>Where does all of this leave us? 
Well, AI models are now incredibly expensive to train and run, requiring not just vast amounts of data but enormous computational infrastructure.<\/p>\n<p>And they\u2019re literally running out of good data to learn from \u2013 so much so that companies are now scraping and transcribing everything (like <a href=\"https:\/\/www.techradar.com\/computing\/artificial-intelligence\/investigation-finds-companies-are-training-ai-models-with-youtube-content-without-permission\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">YouTube video subtitles<\/a>) just to get a bit more text to feed the beast.\u00a0<\/p>\n<p>\u201cEverybody&#8217;s been using essentially 100 per cent of the internet for the last couple of years, and they&#8217;re not getting the same gains anymore,\u201d Marcus says. \u201cThere isn\u2019t 10 more internets to draw on.\u201d\u00a0<\/p>\n<p>In fact, a 2024 analysis by the non-profit research institute Epoch AI estimated that at some point from <a href=\"https:\/\/epoch.ai\/blog\/will-we-run-out-of-data-limits-of-llm-scaling-based-on-human-generated-data\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">2028 to 2032<\/a>, we may exhaust the supply of high-quality human text data to train on.<\/p>\n<p>According to Elon Musk, who founded his own AI company, xAI, in 2023, that point may already have been reached. \u201cThe cumulative sum of human knowledge has been exhausted in AI training. That happened basically last year,\u201d Musk said in an interview in January that was livestreamed on his social media platform X.\u00a0<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"1200\" height=\"800\" src=\"https:\/\/www.newsbeep.com\/au\/wp-content\/uploads\/2025\/12\/1766978600_967_elon-musk.jpg\" alt=\"Elon Musk.\" class=\"wp-image-209555\"\/>Elon Musk, the world&#8217;s richest man, believes AI companies have &#8220;exhausted&#8221; useful data for training AI. 
&#8211; Photo credit: Getty<br \/>\nThe cost of an AI revolution<\/p>\n<p>All this talk of scale makes it sound like we understand how these systems truly work. We don\u2019t. We know what their architecture looks like and how to make them perform tasks, but when it comes to the computations they\u2019re doing internally to generate outputs, they\u2019re essentially black boxes.\u00a0<\/p>\n<p>\u201cWe know how to build them, but we don&#8217;t know how to predict exactly what they&#8217;ll do,\u201d Marcus says. \u201cFundamentally, the whole idea of using a black box where you just pour data in, like you would pour cranberries into a grinder, and expect cognition to come out of it, I think, is just a bad idea to start with.\u201d\u00a0<\/p>\n<p>According to <a href=\"https:\/\/aaai.org\/about-aaai\/presidential-panel-on-the-future-of-ai-research\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">a survey<\/a> by the Association for the Advancement of Artificial Intelligence, Marcus\u2019 views are the consensus, with a comfortable majority of AI researchers agreeing that simply scaling current approaches won\u2019t yield AGI.<\/p>\n<p>Now, the economic underpinnings of this approach are beginning to show strain. The push for ever-larger models has an astronomical price tag. As early as last year, Anthropic CEO Dario Amodei <a href=\"https:\/\/www.youtube.com\/watch?v=xm6jNMSFT7g\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">predicted<\/a> models could soon cost $10bn (\u00a37.6bn) or more to train.\u00a0<\/p>\n<p>Training and using AI also takes a heavy environmental toll. 
While exact figures are difficult to ascertain, a <a href=\"https:\/\/arxiv.org\/abs\/2104.10350\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">2021 preprint study<\/a> by researchers from Google and the University of California, Berkeley, estimated that the training process alone for GPT-3 consumed 1,287 megawatt hours of electricity \u2013 enough to power 120 US homes for a year. That model was orders of magnitude smaller than those in use today; GPT-4, for example, was estimated to have needed <a href=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S1364032125008329#:~:text=A%20systematic%20review%20of%20electricity,more%20than%2040%20times\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">40 times that amount of power<\/a>.\u00a0<\/p>\n<p>Using these models is costly, too. According to <a href=\"https:\/\/www.goldmansachs.com\/insights\/articles\/AI-poised-to-drive-160-increase-in-power-demand\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Goldman Sachs<\/a>, a ChatGPT query needs nearly 10 times as much power as a typical Google search, and data centre power demand is projected to grow 160 per cent by 2030.\u00a0<\/p>\n<p>These hidden costs \u2013 the electric bills, the water for cooling servers, the supply chain for GPUs \u2013 are the less glamorous forces propelling (and potentially unravelling) the AI bubble.<\/p>\n<p>Is this really a bubble?<\/p>\n<p>This arms race has been a boon for chipmakers: Nvidia\u2019s revenue surged from just over $20bn (\u00a316bn) in 2022 to almost $130bn (\u00a3104bn) in the 12 months prior to August 2025. Its latest quarterly results and outlook were also positive, restoring at least some faith in the $4.5tr (\u00a33.6tr) company.<\/p>\n<p>And herein lies a curiosity, and a potential reason why it may not be time to stash your cash under the mattress just yet. 
Because, despite the eye-watering cost of these systems, money is being made.\u00a0<\/p>\n<p>A recent <a href=\"https:\/\/www.goldmansachs.com\/insights\/goldman-sachs-research\/why-we-are-not-in-a-bubble-yet\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">bulletin<\/a>, also from Goldman Sachs, cautiously touted that \u201cwe are not in a bubble\u2026 yet,\u201d and the reason for this is that companies like Alphabet (Google\u2019s parent company), Nvidia and Microsoft \u2013 all of which are at the core of the AI boom \u2013 are making hundreds of billions of dollars.\u00a0<\/p>\n<p>These are not the flimsy Pets.coms of the dot-com bubble in the late 1990s. They\u2019re financial behemoths with money to invest in costly data centres and model training.\u00a0<\/p>\n<p>So while it might be true that numerous AI start-ups will go belly-up as the scaling train runs out of steam, the mega-companies propping up the global economy could hold firm.\u00a0<\/p>\n<p>There is also another way for these companies to stay profitable. Even AI companies that are losing money hand over fist could harness a treasure trove of lucrative data if they can capture enough market share.<\/p>\n<p>Marcus calls this a move towards \u201csurveillance capitalism\u201d \u2013 the idea that our personal data becomes a raw material to be mined and sold. Think about your social media feeding you targeted adverts and selling your data elsewhere \u2013 the same techniques could be a cash cow for the AI industry.\u00a0<\/p>\n<p>He adds, \u201cI think they\u2019re definitely thinking about targeted ads and so forth. 
For personal data, they don\u2019t have to solve the grand problems of artificial intelligence, they just have to get people to type stuff in \u2013 and they\u2019re already doing that.\u201d\u00a0<\/p>\n<p>A new way forward<\/p>\n<p>If scaling current models won\u2019t get us to the kind of transformational AI that many had predicted, what will?\u00a0<\/p>\n<p>One option, Marcus argues, is a return to an older idea that has been quietly waiting in the wings: neuro-symbolic AI. For the past half-century, AI research has largely split into two camps: those building neural networks and those developing symbolic systems.<\/p>\n<p>Symbolic systems, as the name implies, manipulate symbols with formal logic.<\/p>\n<p>\u201cIt&#8217;s called symbol manipulation because you have symbols that stand for things, like in algebra,\u201d Marcus explains. \u201cClassical computer programming is almost entirely made up of stuff like that, and neural networks don\u2019t do that very well.\u201d\u00a0<\/p>\n<p>He continues: \u201cThe classical stuff is really good at, for example, representing databases and ontologies. Like a robin is a bird, a bird is an animal, and concluding therefore that a robin is an animal. Classical AI techniques are perfect at that stuff. They never hallucinate.\u201d<\/p>\n<p>By combining the clear, rule-based logic of older AI with the pattern-spotting power of neural networks, Marcus thinks researchers could get much closer to true general intelligence. These hybrid systems would sidestep the rigid limits of traditional software while also reducing the errors and made-up answers that plague today\u2019s models.<\/p>\n<p>Some companies are already experimenting with this approach, most notably Google DeepMind.\u00a0<\/p>\n<p>Its AlphaFold2 system, which can accurately predict the 3D structure of proteins from their amino-acid sequence, has been widely hailed as one of the most important scientific breakthroughs of recent years. 
Crucially, it blends neural networks with elements of symbolic manipulation.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"1200\" height=\"800\" src=\"https:\/\/www.newsbeep.com\/au\/wp-content\/uploads\/2025\/12\/protein-structrue.jpg\" alt=\"Molecular biology, conceptual illustration.\" class=\"wp-image-209558\"\/>Being able to accurately predict the 3D structure of proteins from amino acid sequences has rapidly accelerated drug discovery and our understanding of disease &#8211; Photo credit: Getty<\/p>\n<p>It is perhaps not surprising, then, that AlphaFold2 earned its creators a share of the 2024 Nobel Prize in Chemistry \u2013 a win Marcus has called \u201cthe first Nobel Prize for Neurosymbolic AI\u201d. This wasn\u2019t an AI system making discoveries unaided, but it was a major validation of the approach.<\/p>\n<p>A neuro-symbolic strategy won\u2019t, Marcus says, deliver AGI outright \u2013 but it could represent a significant leap forward.<\/p>\n<p>And despite his pessimism about the current state of the field, he remains cautiously optimistic about what comes next.<\/p>\n<p>\u201cWill we have artificial general intelligence by 2027? I can say with absolute certainty, or nearly absolute certainty, no, we won&#8217;t.\u201d<\/p>\n<p>But, he adds, \u201cI absolutely think a better artificial intelligence is possible.<\/p>\n<p>\u201cThe tragedy of this era is that we&#8217;re spending so much money on one bet. That one bet, trillions of dollars, is that scaling, adding more data and adding more compute will bring us to artificial general intelligence. I think there&#8217;s actually lots of evidence against that at this point.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"A chill seems to be setting in over Wall Street. 
Tech billionaire Peter Thiel\u2019s hedge fund recently sold&hellip;\n","protected":false},"author":2,"featured_media":377340,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[256,254,255,64,63,105],"class_list":{"0":"post-377339","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-au","12":"tag-australia","13":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/377339","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/comments?post=377339"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/posts\/377339\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media\/377340"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/media?parent=377339"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/categories?post=377339"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/au\/wp-json\/wp\/v2\/tags?post=377339"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}