{"id":360563,"date":"2026-04-02T16:56:16","date_gmt":"2026-04-02T16:56:16","guid":{"rendered":"https:\/\/www.newsbeep.com\/nz\/360563\/"},"modified":"2026-04-02T16:56:16","modified_gmt":"2026-04-02T16:56:16","slug":"microsoft-launches-3-new-ai-models-in-direct-shot-at-openai-and-google","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/nz\/360563\/","title":{"rendered":"Microsoft launches 3 new AI models in direct shot at OpenAI and Google"},"content":{"rendered":"<p><a href=\"https:\/\/www.microsoft.com\/en-us\" rel=\"nofollow noopener\" target=\"_blank\">Microsoft<\/a> on Thursday launched <a href=\"https:\/\/microsoft.ai\/news\/today-were-announcing-3-new-world-class-mai-models-available-in-foundry\/\" rel=\"nofollow noopener\" target=\"_blank\">three new foundational AI models<\/a> it built entirely in-house \u2014 a state-of-the-art speech transcription system, a voice generation engine, and an upgraded image creator \u2014 marking the most concrete evidence yet that the $3 trillion software giant intends to compete directly with <a href=\"https:\/\/openai.com\/\" rel=\"nofollow noopener\" target=\"_blank\">OpenAI<\/a>, <a href=\"https:\/\/www.google.com\/\" rel=\"nofollow noopener\" target=\"_blank\">Google<\/a>, and other frontier labs on model development, not just distribution.<\/p>\n<p>The trio of models \u2014 <a href=\"https:\/\/microsoft.ai\/news\/state-of-the-art-speech-recognition-with-mai-transcribe-1\/\" rel=\"nofollow noopener\" target=\"_blank\">MAI-Transcribe-1<\/a>, <a href=\"https:\/\/microsoft.ai\/news\/today-were-announcing-3-new-world-class-mai-models-available-in-foundry\/\" rel=\"nofollow noopener\" target=\"_blank\">MAI-Voice-1<\/a>, and <a href=\"https:\/\/msi-playground.microsoft.com\/chat\" rel=\"nofollow noopener\" target=\"_blank\">MAI-Image-2<\/a> \u2014 are available immediately through <a href=\"https:\/\/azure.microsoft.com\/en-us\/products\/ai-foundry\" rel=\"nofollow noopener\" target=\"_blank\">Microsoft 
Foundry<\/a> and a new <a href=\"https:\/\/msi-playground.microsoft.com\/chat\" rel=\"nofollow noopener\" target=\"_blank\">MAI Playground<\/a>. They span three of the most commercially valuable modalities in enterprise AI: converting speech to text, generating realistic human voice, and creating images. Together, they represent the opening salvo from Microsoft&#8217;s <a href=\"https:\/\/microsoft.ai\/\" rel=\"nofollow noopener\" target=\"_blank\">superintelligence team<\/a>, which Microsoft AI CEO Mustafa Suleyman formed just six months ago to pursue what he calls &#8220;<a href=\"https:\/\/www.mitsloanme.com\/article\/microsoft-moves-toward-ai-self-sufficiency-amid-evolving-openai-ties\/\" rel=\"nofollow noopener\" target=\"_blank\">AI self-sufficiency<\/a>.&#8221;<\/p>\n<p>&#8220;I&#8217;m very excited that we&#8217;ve now got the first models out, which are the very best in the world for transcription,&#8221; Suleyman told VentureBeat in an interview ahead of the public announcement. &#8220;Not only that, we&#8217;re able to deliver the model with half the GPUs of the state-of-the-art competition.&#8221;<\/p>\n<p>The announcement lands at a precarious moment for Microsoft. The company&#8217;s stock just closed its <a href=\"https:\/\/www.cnbc.com\/2026\/03\/31\/microsofts-stock-closes-worst-quarter-since-2008-financial-crisis.html\" rel=\"nofollow noopener\" target=\"_blank\">worst quarter since the 2008 financial crisis<\/a>, as investors increasingly demand proof that hundreds of billions of dollars in AI infrastructure spending will translate into revenue. 
These models \u2014 priced aggressively and positioned to reduce Microsoft&#8217;s own cost of goods sold \u2014 are Suleyman&#8217;s first answer to that pressure.<\/p>\n<p>Microsoft&#8217;s new transcription model claims best-in-class accuracy across 25 languages<\/p>\n<p><a href=\"https:\/\/microsoft.ai\/news\/state-of-the-art-speech-recognition-with-mai-transcribe-1\/\" rel=\"nofollow noopener\" target=\"_blank\">MAI-Transcribe-1<\/a> is the headline release. The speech-to-text model achieves the lowest average Word Error Rate on the <a href=\"https:\/\/arxiv.org\/abs\/2205.12446\" rel=\"nofollow noopener\" target=\"_blank\">FLEURS benchmark<\/a> \u2014 the industry-standard multilingual test \u2014 across the top 25 languages by Microsoft product usage, averaging 3.8% WER. According to Microsoft&#8217;s benchmarks, it beats OpenAI&#8217;s <a href=\"https:\/\/huggingface.co\/openai\/whisper-large-v3\" rel=\"nofollow noopener\" target=\"_blank\">Whisper-large-v3<\/a> on all 25 languages, Google&#8217;s <a href=\"https:\/\/blog.google\/innovation-and-ai\/models-and-research\/gemini-models\/gemini-3-1-flash-lite\/\" rel=\"nofollow noopener\" target=\"_blank\">Gemini 3.1 Flash Lite<\/a> on 22 of 25, and ElevenLabs&#8217; <a href=\"https:\/\/elevenlabs.io\/blog\/introducing-scribe-v2\" rel=\"nofollow noopener\" target=\"_blank\">Scribe v2<\/a> and OpenAI&#8217;s <a href=\"https:\/\/developers.openai.com\/api\/docs\/models\/gpt-4o-transcribe\" rel=\"nofollow noopener\" target=\"_blank\">GPT-Transcribe<\/a> on 15 of 25 each.<\/p>\n<p>The model uses a transformer-based text decoder with a bi-directional audio encoder. It accepts MP3, WAV, and FLAC files up to 200MB, and Microsoft says its batch transcription speed is 2.5 times faster than the existing Microsoft Azure Fast offering. 
Diarization, contextual biasing, and streaming are listed as &#8220;coming soon.&#8221; Microsoft is already testing <a href=\"https:\/\/microsoft.ai\/news\/state-of-the-art-speech-recognition-with-mai-transcribe-1\/\" rel=\"nofollow noopener\" target=\"_blank\">MAI-Transcribe-1<\/a> inside <a href=\"https:\/\/www.microsoft.com\/en-us\/microsoft-copilot\/for-individuals\/do-more-with-ai\/general-ai\/what-is-copilot-voice?form=MA13PW\" rel=\"nofollow noopener\" target=\"_blank\">Copilot&#8217;s Voice mode<\/a> and <a href=\"https:\/\/www.microsoft.com\/en-us\/microsoft-teams\/group-chat-software\" rel=\"nofollow noopener\" target=\"_blank\">Microsoft Teams<\/a> for conversation transcription \u2014 a detail that underscores how quickly the company intends to replace third-party or older internal models with its own.<\/p>\n<p>Alongside it, <a href=\"https:\/\/microsoft.ai\/news\/today-were-announcing-3-new-world-class-mai-models-available-in-foundry\/\" rel=\"nofollow noopener\" target=\"_blank\">MAI-Voice-1<\/a> is Microsoft&#8217;s text-to-speech model, capable of generating 60 seconds of natural-sounding audio in a single second. The model preserves speaker identity across long-form content and now supports custom voice creation from just a few seconds of audio through Microsoft Foundry. Microsoft is pricing it at $22 per 1 million characters. <a href=\"https:\/\/msi-playground.microsoft.com\/chat\" rel=\"nofollow noopener\" target=\"_blank\">MAI-Image-2<\/a>, meanwhile, debuted as a top-three model family on the <a href=\"http:\/\/arena.ai\" rel=\"nofollow noopener\" target=\"_blank\">Arena.ai leaderboard<\/a> and now delivers at least 2x faster generation times on Foundry and Copilot compared to its predecessor. 
Microsoft is rolling it out across <a href=\"https:\/\/www.bing.com\/\" rel=\"nofollow noopener\" target=\"_blank\">Bing<\/a> and <a href=\"https:\/\/www.microsoft.com\/en-us\/microsoft-365\/powerpoint\" rel=\"nofollow noopener\" target=\"_blank\">PowerPoint<\/a>, pricing it at $5 per 1 million tokens for text input and $33 per 1 million tokens for image output. <a href=\"https:\/\/www.wpp.com\/en-us\" rel=\"nofollow noopener\" target=\"_blank\">WPP<\/a>, one of the world&#8217;s largest advertising holding companies, is among the first enterprise partners building with MAI-Image-2 at scale.<\/p>\n<p>The contract renegotiation with OpenAI that made Microsoft&#8217;s model ambitions possible<\/p>\n<p>To understand why these models matter, you have to understand the contractual tectonic shift that made them possible. Until October 2025, Microsoft was <a href=\"https:\/\/www.wired.com\/story\/openai-five-levels-agi-paper-microsoft-negotiations\/\" rel=\"nofollow noopener\" target=\"_blank\">contractually prohibited<\/a> from independently pursuing artificial general intelligence. The original deal with OpenAI, signed in 2019, gave <a href=\"https:\/\/news.microsoft.com\/source\/2019\/07\/22\/openai-forms-exclusive-computing-partnership-with-microsoft-to-build-new-azure-ai-supercomputing-technologies\/\" rel=\"nofollow noopener\" target=\"_blank\">Microsoft a license to OpenAI&#8217;s models<\/a> in exchange for building the cloud infrastructure OpenAI needed. But when OpenAI sought to expand its compute footprint beyond Microsoft \u2014 striking deals with SoftBank and others \u2014 Microsoft renegotiated. 
As Suleyman explained in a December 2025 interview with <a href=\"https:\/\/www.bloomberg.com\/features\/2025-mustafa-suleyman-weekend-interview\/\" rel=\"nofollow noopener\" target=\"_blank\">Bloomberg<\/a>, the revised agreement meant that &#8220;up until a few weeks ago, Microsoft was not allowed \u2014 by contract \u2014 to pursue artificial general intelligence or superintelligence independently.&#8221; The new terms freed Microsoft to build its own frontier models while retaining license rights to everything OpenAI builds through 2032.<\/p>\n<p>Suleyman described the dynamic to VentureBeat in characteristically blunt terms. &#8220;Back in September of last year, we renegotiated the contract with OpenAI, and that enabled us to independently pursue our own superintelligence,&#8221; he said. &#8220;Since then, we&#8217;ve been convening the compute and the team and buying up the data that we need.&#8221;<\/p>\n<p>He was quick to emphasize that the <a href=\"https:\/\/blogs.microsoft.com\/blog\/2025\/10\/28\/the-next-chapter-of-the-microsoft-openai-partnership\/\" rel=\"nofollow noopener\" target=\"_blank\">OpenAI partnership remains intact<\/a>. &#8220;Nothing&#8217;s changing with the OpenAI partnership. We will be in partnership with them at least until 2032 and hopefully a lot longer,&#8221; Suleyman said. &#8220;They have been a phenomenal partner to us.&#8221; He also highlighted that Microsoft provides access to Anthropic&#8217;s <a href=\"https:\/\/claude.ai\/\" rel=\"nofollow noopener\" target=\"_blank\">Claude<\/a> through its <a href=\"https:\/\/learn.microsoft.com\/en-us\/rest\/api\/aifoundry\/\" rel=\"nofollow noopener\" target=\"_blank\">Foundry API<\/a>, framing the company as &#8220;a platform of platforms.&#8221; But the subtext is unmistakable: Microsoft is building the capability to stand on its own. 
In March, as <a href=\"https:\/\/www.businessinsider.com\/microsoft-combines-copilot-teams-and-mustafa-suleyman-superintelligence-memos-2026-3\" rel=\"nofollow noopener\" target=\"_blank\">Business Insider first reported<\/a>, Suleyman wrote in an internal memo that his goal is to &#8220;focus all my energy on our Superintelligence efforts and be able to deliver world class models for Microsoft over the next 5 years.&#8221; <a href=\"https:\/\/www.cnbc.com\/2026\/03\/17\/microsoft-copilot-ai-suleyman.html\" rel=\"nofollow noopener\" target=\"_blank\">CNBC reported<\/a> that the structural shift freed Suleyman from day-to-day Copilot product responsibilities, with former Snap executive Jacob Andreou taking over as EVP of the combined consumer and commercial Copilot experience.<\/p>\n<p>How teams of fewer than 10 engineers built models that rival Big Tech&#8217;s best<\/p>\n<p>Perhaps the most striking detail Suleyman shared with VentureBeat is how small the teams behind these models actually are. &#8220;The audio model was built by 10 people, and the vast majority of the speed, efficiency and accuracy gains come from the model architecture and the data that we have used,&#8221; Suleyman said. &#8220;My philosophy has always been that we need fewer people who are more empowered. So we operate an extremely flat structure.&#8221; He added: &#8220;Our image team, equally, is less than 10 people. So this is all about model and data innovation, which has delivered state of the art performance.&#8221;<\/p>\n<p>This matters for two reasons. First, it challenges the prevailing industry narrative that frontier AI development requires thousands of researchers and billions in headcount costs. 
<a href=\"https:\/\/www.meta.com\/\" rel=\"nofollow noopener\" target=\"_blank\">Meta<\/a>, by contrast, has pursued what Suleyman described in his Bloomberg interview as a strategy of &#8220;<a href=\"https:\/\/www.bloomberg.com\/features\/2025-mustafa-suleyman-weekend-interview\/\" rel=\"nofollow noopener\" target=\"_blank\">hiring a lot of individuals, rather than maybe creating a team<\/a>&#8221; \u2014 including reported compensation packages of $100 million to $200 million for top researchers. Second, small teams producing state-of-the-art results dramatically improve the economics. If Microsoft can build best-in-class transcription with 10 engineers and half the GPUs of competitors, the margin structure of its AI business looks fundamentally different from companies burning through cash to achieve similar benchmarks.<\/p>\n<p>The lean-team philosophy also echoes Suleyman&#8217;s broader views on how AI is already reshaping the work of building AI itself. When asked by VentureBeat how his own team works, Suleyman described an environment that resembles a startup trading floor more than a traditional Microsoft engineering org. &#8220;There are groups of people around round tables, circular tables, not traditional desks, on laptops instead of big screens,&#8221; he said. &#8220;They&#8217;re basically vibe coding, side by side all day, morning till night, in rooms of 50 or 60 people.&#8221;<\/p>\n<p>Why Suleyman&#8217;s &#8220;humanist AI&#8221; pitch is aimed squarely at enterprise buyers<\/p>\n<p>Suleyman has been steadily building a philosophical brand around Microsoft&#8217;s AI efforts that he calls &#8220;<a href=\"https:\/\/microsoft.ai\/\" rel=\"nofollow noopener\" target=\"_blank\">humanist AI<\/a>&#8221; \u2014 a term that appeared prominently in the blog post he authored for the launch and that he elaborated on in our interview. 
&#8220;I think that the motivation of a humanist super intelligence is to create something that is truly in service of humanity,&#8221; he told VentureBeat. &#8220;Humans will remain in control at the top of the food chain, and they will be always aligned to human interests.&#8221;<\/p>\n<p>The framing serves multiple purposes. It differentiates Microsoft from the more acceleration-oriented rhetoric coming from <a href=\"https:\/\/openai.com\/\" rel=\"nofollow noopener\" target=\"_blank\">OpenAI<\/a> and <a href=\"https:\/\/www.meta.ai\/\" rel=\"nofollow noopener\" target=\"_blank\">Meta<\/a>. It resonates with enterprise buyers who need governance, compliance, and safety assurances before deploying AI in regulated industries. And it provides a narrative hedge: if something goes wrong in the broader AI ecosystem, Microsoft can point to its stated commitment to human control. In his December Bloomberg interview, Suleyman went further, describing containment and alignment as &#8220;<a href=\"https:\/\/www.bloomberg.com\/features\/2025-mustafa-suleyman-weekend-interview\/\" rel=\"nofollow noopener\" target=\"_blank\">red lines<\/a>&#8221; and arguing that no one should release a superintelligence tool until they are &#8220;confident it can be controlled.&#8221;<\/p>\n<p>Suleyman also stressed data provenance as a competitive advantage, describing a conversation with CEO Satya Nadella about developing &#8220;a clean lineage of models where the data is extremely clean.&#8221; He drew an implicit contrast with open-source alternatives, noting that &#8220;many of the open-source models have been trained on data in, let&#8217;s say, inappropriate ways. 
And there are potentially security issues with that.&#8221; For enterprise customers evaluating AI vendors amid a thicket of copyright lawsuits across the industry, that is a meaningful commercial argument \u2014 if Microsoft can credibly claim that its training data was acquired through properly licensed channels, it reduces the legal and reputational risk of deploying these models in production.<\/p>\n<p>Microsoft&#8217;s aggressive pricing puts pressure on Amazon, Google, and the AI startup ecosystem<\/p>\n<p>Today\u2019s launch positions Microsoft on three competitive fronts simultaneously. <a href=\"https:\/\/microsoft.ai\/news\/state-of-the-art-speech-recognition-with-mai-transcribe-1\/\" rel=\"nofollow noopener\" target=\"_blank\">MAI-Transcribe-1 <\/a>directly targets the transcription workloads that OpenAI&#8217;s <a href=\"https:\/\/openai.com\/index\/whisper\/\" rel=\"nofollow noopener\" target=\"_blank\">Whisper models<\/a> have dominated in the open-source community, with Microsoft claiming superior accuracy on all 25 benchmarked languages. The FLEURS results also show it winning against Google&#8217;s Gemini 3.1 Flash Lite on 22 of 25 languages \u2014 a direct challenge as Google aggressively pushes Gemini across its own product suite. 
And <a href=\"https:\/\/microsoft.ai\/news\/today-were-announcing-3-new-world-class-mai-models-available-in-foundry\/\" rel=\"nofollow noopener\" target=\"_blank\">MAI-Voice-1<\/a>&#8217;s ability to clone voices from seconds of audio and generate speech at 60x real-time puts it in competition with <a href=\"https:\/\/elevenlabs.io\/\" rel=\"nofollow noopener\" target=\"_blank\">ElevenLabs<\/a>, <a href=\"https:\/\/www.resemble.ai\/\" rel=\"nofollow noopener\" target=\"_blank\">Resemble AI<\/a>, and the growing ecosystem of voice AI startups, with Microsoft&#8217;s distribution advantage \u2014 any Foundry developer can now access these capabilities through the same API they use for GPT-4 and Claude \u2014 acting as a powerful moat.<\/p>\n<p>Suleyman framed the competitive position confidently: &#8220;We&#8217;re now a top three lab just under OpenAI and Gemini,&#8221; he told VentureBeat. The pricing strategy \u2014 <a href=\"https:\/\/microsoft.ai\/news\/today-were-announcing-3-new-world-class-mai-models-available-in-foundry\/\" rel=\"nofollow noopener\" target=\"_blank\">MAI-Voice-1<\/a> at $22 per million characters, <a href=\"https:\/\/msi-playground.microsoft.com\/chat\" rel=\"nofollow noopener\" target=\"_blank\">MAI-Image-2<\/a> at $5 per million input tokens \u2014 reflects a deliberate decision to compete on cost. &#8220;We&#8217;re pricing them to be the very best of any hyperscaler. So they will be the cheapest of any of the hyperscalers out there, Amazon. And obviously Google,&#8221; Suleyman said. &#8220;And that&#8217;s a very conscious decision.&#8221;<\/p>\n<p>This makes strategic sense for Microsoft, which can amortize model development costs across its enormous installed base of enterprise customers. But it also speaks to the question investors have been asking with increasing urgency: when does AI spending start generating returns? 
<a href=\"https:\/\/www.cnbc.com\/2026\/03\/31\/microsofts-stock-closes-worst-quarter-since-2008-financial-crisis.html\" rel=\"nofollow noopener\" target=\"_blank\">Microsoft&#8217;s stock has fallen roughly 17% year-to-date<\/a>, according to CNBC, part of a broader selloff in software stocks. By building models that run on half the GPUs of competitors, Microsoft reduces its own infrastructure costs for internal products \u2014 <a href=\"http:\/\/teams.microsoft.com\/v2\/\" rel=\"nofollow noopener\" target=\"_blank\">Teams<\/a>, <a href=\"https:\/\/copilot.microsoft.com\/\" rel=\"nofollow noopener\" target=\"_blank\">Copilot<\/a>, <a href=\"https:\/\/www.bing.com\/\" rel=\"nofollow noopener\" target=\"_blank\">Bing<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/microsoft-365\/powerpoint\" rel=\"nofollow noopener\" target=\"_blank\">PowerPoint<\/a> \u2014 while offering developers pricing designed to undercut the rest of the market. In his March memo, <a href=\"https:\/\/blogs.microsoft.com\/blog\/2026\/03\/17\/announcing-copilot-leadership-update\/\" rel=\"nofollow noopener\" target=\"_blank\">Suleyman wrote<\/a> that his models would &#8220;enable us to deliver the COGS efficiencies necessary to be able to serve AI workloads at the immense scale required in the coming years.&#8221; These three models are the first tangible delivery on that promise.<\/p>\n<p>Suleyman says a frontier large language model is coming \u2014 and Microsoft plans to be &#8220;completely independent&#8221;<\/p>\n<p>Suleyman made clear that transcription, voice, and image generation are just the beginning. When asked whether Microsoft would build a large language model to compete directly with GPT at the frontier level, he was unequivocal. &#8220;We absolutely are going to be delivering state of the art models across all modalities,&#8221; he said. 
&#8220;Our mission is to make sure that if Microsoft ever needs it, we will be able to provide state of the art at the best efficiency, the cheapest price, and be completely independent.&#8221;<\/p>\n<p>He described a multi-year roadmap to &#8220;set up the GPU clusters at the appropriate scale,&#8221; noting that the superintelligence team was formally stood up only in October 2025. Suleyman spoke to VentureBeat from Miami, where the full team was convening for one of its regular week-long in-person sessions. He described Nadella flying in for the gathering to lay out &#8220;the roadmap of everything that we need to achieve for our AI self-sufficiency mission over the next 2, 3, 4 years, and all the compute roadmap that that would involve.&#8221;<\/p>\n<p>Building a competitive frontier LLM, of course, is a different order of magnitude in complexity, data requirements, and compute cost from what Microsoft demonstrated Thursday. The models launched today are specialized \u2014 they handle audio and images, not the general reasoning and text generation that underpin products like ChatGPT or Copilot&#8217;s core intelligence. Suleyman has the organizational mandate, Nadella&#8217;s public backing, and the contractual freedom. What he doesn&#8217;t yet have is a track record at Microsoft of delivering on the hardest problem in AI.<\/p>\n<p>But consider what he does have: three models that are best-in-class or near it in their respective domains, built by teams smaller than most seed-stage startups, running on half the industry-standard GPU footprint, and priced below every major cloud competitor. Two years ago, Suleyman proposed in MIT Technology Review what he called the &#8220;Modern Turing Test&#8221; \u2014 not whether AI could fool a human in conversation, but whether it could go out into the world and accomplish real economic tasks with minimal oversight. On Thursday, his own models took a step toward that vision. 
The question now is whether Microsoft&#8217;s superintelligence team can repeat the trick at the scale that actually matters \u2014 and whether they can do it before the market&#8217;s patience runs out.<\/p>\n","protected":false},"excerpt":{"rendered":"Microsoft on Thursday launched three new foundational AI models it built entirely in-house \u2014 a state-of-the-art speech transcription&hellip;\n","protected":false},"author":2,"featured_media":360564,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[365,363,364,111,139,69,145],"class_list":{"0":"post-360563","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-new-zealand","12":"tag-newzealand","13":"tag-nz","14":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/posts\/360563","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/comments?post=360563"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/posts\/360563\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/media\/360564"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/media?parent=360563"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\/v2\/categories?post=360563"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/nz\/wp-json\/wp\
/v2\/tags?post=360563"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}