{"id":189124,"date":"2025-10-09T22:44:11","date_gmt":"2025-10-09T22:44:11","guid":{"rendered":"https:\/\/www.newsbeep.com\/uk\/189124\/"},"modified":"2025-10-09T22:44:11","modified_gmt":"2025-10-09T22:44:11","slug":"use-amazon-sagemaker-hyperpod-and-anyscale-for-next-generation-distributed-computing","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/uk\/189124\/","title":{"rendered":"Use Amazon SageMaker HyperPod and Anyscale for next-generation distributed computing"},"content":{"rendered":"\n<p>This post was written with Dominic Catalano from Anyscale.<\/p>\n<p>Organizations building and deploying large-scale AI models often face critical infrastructure challenges that can directly impact their bottom line: unstable training clusters that fail mid-job, inefficient resource utilization driving up costs, and complex distributed computing frameworks requiring specialized expertise. These factors can lead to unused GPU hours, delayed projects, and frustrated data science teams. This post demonstrates how you can address these challenges by providing a resilient, efficient infrastructure for distributed AI workloads.<\/p>\n<p><a href=\"https:\/\/aws.amazon.com\/sagemaker-ai\/hyperpod\/\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">Amazon SageMaker HyperPod<\/a> is a purpose-built persistent generative AI infrastructure optimized for machine learning (ML) workloads. It provides robust infrastructure for large-scale ML workloads with high-performance hardware, so organizations can build heterogeneous clusters using tens to thousands of GPU accelerators. With nodes optimally co-located on a single spine, SageMaker HyperPod reduces networking overhead for distributed training. It maintains operational stability through continuous monitoring of node health, automatically swapping faulty nodes with healthy ones and resuming training from the most recently saved checkpoint, all of which can help save up to 40% of training time. 
For advanced ML users, SageMaker HyperPod allows SSH access to the nodes in the cluster, enabling deep infrastructure control, and allows access to SageMaker tooling, including Amazon SageMaker Studio, MLflow, and SageMaker distributed training libraries, along with support for various open-source training libraries and frameworks. SageMaker Flexible Training Plans complement this by enabling GPU capacity reservation up to 8 weeks in advance for durations up to 6 months.<\/p>\n<p>The <a href=\"https:\/\/www.anyscale.com\/product\/platform\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">Anyscale platform<\/a> integrates seamlessly with SageMaker HyperPod when using <a href=\"https:\/\/aws.amazon.com\/eks\/\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">Amazon Elastic Kubernetes Service<\/a> (Amazon EKS) as the cluster orchestrator. <a href=\"https:\/\/www.ray.io\/\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">Ray<\/a> is the leading AI compute engine, offering Python-based distributed computing capabilities to address AI workloads ranging from multimodal AI, data processing, model training, and model serving. Anyscale unlocks the power of Ray with comprehensive tooling for developer agility, critical fault tolerance, and an optimized version called <a href=\"https:\/\/www.anyscale.com\/product\/platform\/rayturbo\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">RayTurbo<\/a>, designed to deliver leading cost-efficiency. Through a unified control plane, organizations benefit from simplified management of complex distributed AI use cases with fine-grained control across hardware.<\/p>\n<p>The combined solution provides extensive monitoring through <a href=\"https:\/\/docs.aws.amazon.com\/sagemaker\/latest\/dg\/sagemaker-hyperpod-eks-cluster-observability.html\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">SageMaker HyperPod real-time dashboards<\/a> tracking node health, GPU utilization, and network traffic. 
Integration with <a href=\"https:\/\/docs.aws.amazon.com\/AmazonCloudWatch\/latest\/monitoring\/ContainerInsights.html\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">Amazon CloudWatch Container Insights<\/a>, <a href=\"http:\/\/aws.amazon.com\/prometheus\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">Amazon Managed Service for Prometheus<\/a>, and <a href=\"https:\/\/aws.amazon.com\/grafana\/\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">Amazon Managed Grafana<\/a> delivers deep visibility into cluster performance, complemented by <a href=\"https:\/\/docs.anyscale.com\/monitoring\/metrics\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">Anyscale\u2019s monitoring framework<\/a>, which provides built-in metrics for monitoring Ray clusters and the workloads that run on them.<\/p>\n<p>This post demonstrates how to integrate the Anyscale platform with SageMaker HyperPod. This combination can deliver tangible business outcomes: reduced time-to-market for AI initiatives, lower total cost of ownership through optimized resource utilization, and increased data science productivity by minimizing infrastructure management overhead. 
It is ideal for Amazon EKS and Kubernetes-focused organizations, teams with large-scale distributed training needs, and those invested in the <a href=\"https:\/\/www.anyscale.com\/blog\/understanding-the-ray-ecosystem-and-community\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">Ray ecosystem<\/a> or SageMaker.<\/p>\n<p>       Solution overview <\/p>\n<p>The following architecture diagram illustrates SageMaker HyperPod with Amazon EKS orchestration and Anyscale.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter size-large wp-image-117082\" style=\"margin: 10px 0px 10px 0px;border: 1px solid #CCCCCC\" src=\"https:\/\/www.newsbeep.com\/uk\/wp-content\/uploads\/2025\/10\/ml-19194-arch-diag-1024x570.png\" alt=\"End-to-end AWS Anyscale architecture depicting job submission, EKS pod orchestration, data access, and monitoring flow\" width=\"1024\" height=\"570\"\/><\/p>\n<p>The sequence of events in this architecture is as follows:<\/p>\n<p>        A user submits a job to the Anyscale Control Plane, which is the main user-facing endpoint.<br \/>\n        The Anyscale Control Plane communicates this job to the Anyscale Operator within the SageMaker HyperPod cluster in the SageMaker HyperPod virtual private cloud (VPC).<br \/>\n        The Anyscale Operator, upon receiving the job, initiates the process of creating the necessary pods by reaching out to the EKS control plane.<br \/>\n        The EKS control plane orchestrates creation of a Ray head pod and worker pods. 
These pods represent a Ray cluster, running on SageMaker HyperPod with Amazon EKS.<br \/>\n        The Anyscale Operator submits the job through the head pod, which serves as the primary coordinator for the distributed workload.<br \/>\n        The head pod distributes the workload across multiple worker pods, as shown in the hierarchical structure in the SageMaker HyperPod EKS cluster.<br \/>\n        Worker pods execute their assigned tasks, potentially accessing required data from the storage services \u2013 such as <a href=\"http:\/\/aws.amazon.com\/s3\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">Amazon Simple Storage Service<\/a> (Amazon S3), <a href=\"https:\/\/aws.amazon.com\/efs\/\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">Amazon Elastic File System<\/a> (Amazon EFS), or <a href=\"https:\/\/aws.amazon.com\/fsx\/lustre\/\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">Amazon FSx for Lustre<\/a> \u2013 in the user VPC.<br \/>\n        Throughout the job execution, metrics and logs are published to <a href=\"http:\/\/aws.amazon.com\/cloudwatch\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">Amazon CloudWatch<\/a> and Amazon Managed Service for Prometheus or Amazon Managed Grafana for observability.<br \/>\n        When the Ray job is complete, the job artifacts (final model weights, inference results, and so on) are saved to the designated storage service.<br \/>\n        Job results (status, metrics, logs) are sent through the Anyscale Operator back to the Anyscale Control Plane. 
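The head-and-worker distribution described above can be sketched in plain Python, using a multiprocessing pool as a stand-in for Ray's scheduler (an illustrative sketch only; a real Anyscale job distributes Ray tasks across head and worker pods):

```python
# Illustrative sketch of the head/worker pattern: a "head" splits a job
# into shards, "workers" process them in parallel, and the head
# aggregates the results. multiprocessing.Pool stands in for Ray's
# scheduler here; a real Anyscale job distributes Ray tasks across pods.
from multiprocessing import Pool


def worker_task(shard):
    # Each worker executes its assigned portion of the workload.
    return sum(x * x for x in shard)


def run_job(data, num_workers=4):
    # The head partitions the workload into one shard per worker.
    shards = [data[i::num_workers] for i in range(num_workers)]
    with Pool(num_workers) as pool:
        partial_results = pool.map(worker_task, shards)
    # The head aggregates partial results into the final job output.
    return sum(partial_results)


if __name__ == "__main__":
    print(run_job(list(range(10))))  # sum of squares 0..9 -> 285
```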
<\/p>\n<p>This flow shows distribution and execution of user-submitted jobs across the available computing resources, while maintaining monitoring and data accessibility throughout the process.<\/p>\n<p>       Prerequisites <\/p>\n<p>Before you begin, you must have the following resources:<\/p>\n<p>       Set up Anyscale Operator <\/p>\n<p>Complete the following steps to set up the Anyscale Operator:<\/p>\n<p>        In your workspace, download the <a href=\"https:\/\/github.com\/aws-samples\/aws-do-ray\/tree\/main\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">aws-do-ray<\/a> repository: <\/p>\n<p>          git clone https:\/\/github.com\/aws-samples\/aws-do-ray.git<br \/>\ncd aws-do-ray\/Container-Root\/ray\/anyscale <\/p>\n<p>This repository has the commands needed to deploy the Anyscale Operator on a SageMaker HyperPod cluster. The <a href=\"https:\/\/github.com\/aws-samples\/aws-do-ray\/tree\/main\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">aws-do-ray<\/a> project aims to simplify the deployment and scaling of distributed Python applications using Ray on Amazon EKS or SageMaker HyperPod. The aws-do-ray container shell is equipped with intuitive action scripts and comes preconfigured with convenient shortcuts, which reduce typing and increase productivity. You can optionally use these features by building and opening a bash shell in the container with the instructions in the aws-do-ray README, or you can continue with the following steps.<\/p>\n<p>        If you continue with these steps, make sure your environment is properly set up: <\/p>\n<p>        Verify your connection to the HyperPod cluster: <\/p>\n<p>          Obtain the name of the EKS cluster on the SageMaker HyperPod console. 
In your cluster details, you will see your EKS cluster orchestrator.<img decoding=\"async\" loading=\"lazy\" class=\"aligncenter size-large wp-image-117081\" style=\"margin: 10px 0px 10px 0px;border: 1px solid #CCCCCC\" src=\"https:\/\/www.newsbeep.com\/uk\/wp-content\/uploads\/2025\/10\/ml-19194-cluster-console-1024x290.png\" alt=\"Active ml-cluster-eks details interface showing configuration, orchestrator settings, and management options\" width=\"1024\" height=\"290\"\/><br \/>\n          Update kubeconfig to connect to the EKS cluster: <\/p>\n<p>            aws eks update-kubeconfig --region  --name my-eks-cluster<\/p>\n<p>kubectl get nodes -L node.kubernetes.io\/instance-type -L sagemaker.amazonaws.com\/node-health-status -L sagemaker.amazonaws.com\/deep-health-check-status <\/p>\n<p>The following screenshot shows an example output.<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"aligncenter size-large wp-image-117080\" style=\"margin: 10px 0px 10px 0px;border: 1px solid #CCCCCC\" src=\"https:\/\/www.newsbeep.com\/uk\/wp-content\/uploads\/2025\/10\/ml-19194-example-output-1-1024x54.png\" alt=\"Terminal view of Kubernetes nodes health check showing two ml.g5 instances with status and health details\" width=\"1024\" height=\"54\"\/><\/p>\n<p>If the output indicates InProgress instead of Passed, wait for the deep health checks to finish.<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"aligncenter size-large wp-image-117079\" style=\"margin: 10px 0px 10px 0px;border: 1px solid #CCCCCC\" src=\"https:\/\/www.newsbeep.com\/uk\/wp-content\/uploads\/2025\/10\/ml-19194-example-output-2-1024x52.png\" alt=\"Terminal view of Kubernetes nodes health check showing two ml.g5 instances with differing scheduling statuses\" width=\"1024\" height=\"52\"\/><\/p>\n<p>        Review the env_vars file. Update the variable AWS_EKS_HYPERPOD_CLUSTER. 
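For reference, an env_vars fragment might look like the following. This is a hypothetical sketch: only AWS_EKS_HYPERPOD_CLUSTER, AWS_REGION, and ANYSCALE_CLOUD_NAME are named in this post, and the values shown are placeholders; check the file in the aws-do-ray repository for the actual variable set.

```shell
# Hypothetical env_vars fragment -- values are placeholders; consult the
# env_vars file in the aws-do-ray repository for the actual variables.
export AWS_EKS_HYPERPOD_CLUSTER=my-eks-cluster   # your SageMaker HyperPod EKS cluster name
export AWS_REGION=us-west-2                      # region hosting the cluster
export ANYSCALE_CLOUD_NAME=my-anyscale-cloud     # name used when registering the Anyscale Cloud
```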
You can leave the values as default or make desired changes.<br \/>\n        Deploy your requirements: <\/p>\n<p>          Execute:<br \/>\n.\/1.deploy-requirements.sh <\/p>\n<p>This creates the anyscale namespace, installs Anyscale dependencies, configures login to your Anyscale account (this step will prompt you for additional verification as shown in the following screenshot), adds the <a href=\"https:\/\/anyscale.github.io\/helm-charts\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">anyscale helm chart<\/a>, installs the <a href=\"https:\/\/github.com\/kubernetes\/ingress-nginx\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">ingress-nginx<\/a> controller, and finally labels and taints SageMaker HyperPod nodes for the Anyscale worker pods.<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"aligncenter size-large wp-image-117078\" style=\"margin: 10px 0px 10px 0px;border: 1px solid #CCCCCC\" src=\"https:\/\/www.newsbeep.com\/uk\/wp-content\/uploads\/2025\/10\/ml-19194-additional-verification-1024x179.jpeg\" alt=\"Terminal showing Python environment setup with comprehensive package installation log and Anyscale login instructions\" width=\"1024\" height=\"179\"\/><\/p>\n<p>        Create an EFS file system: <\/p>\n<p>          Execute:<\/p>\n<p>.\/2.create-efs.sh <\/p>\n<p>Amazon EFS serves as the <a href=\"https:\/\/docs.anyscale.com\/configuration\/storage\/\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">shared cluster storage<\/a> for the Anyscale pods.<br \/>At the time of writing, Amazon EFS and S3FS are the supported file system options when using Anyscale and SageMaker HyperPod setups with Ray on AWS. Although FSx for Lustre is not supported with this setup, you can use it with KubeRay on SageMaker HyperPod EKS.<\/p>\n<p>        Register an Anyscale Cloud: <\/p>\n<p>          Execute:<\/p>\n<p>.\/3.register-cloud.sh <\/p>\n<p>This registers a self-hosted Anyscale Cloud into your SageMaker HyperPod cluster. 
By default, it uses the value of ANYSCALE_CLOUD_NAME in the env_vars file. You can modify this field as needed. At this point, you will be able to see your registered cloud on the <a href=\"https:\/\/console.anyscale.com\/\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">Anyscale console<\/a>.<\/p>\n<p>        Deploy the Kubernetes Anyscale Operator: <\/p>\n<p>          Execute:<\/p>\n<p>.\/4.deploy-anyscale.sh <\/p>\n<p>This command installs the Anyscale Operator in the anyscale namespace. The Operator will start posting health checks to the Anyscale Control Plane.<\/p>\n<p>To see the Anyscale Operator pod, run the following command:<br \/>\nkubectl get pods -n anyscale<\/p>\n<p>       Submit training job <\/p>\n<p>This section walks through a simple training job submission. The example implements distributed training of a neural network for Fashion MNIST classification using the Ray Train framework on SageMaker HyperPod with Amazon EKS orchestration, demonstrating how to use the AWS managed ML infrastructure combined with Ray\u2019s distributed computing capabilities for scalable model training. Complete the following steps:<\/p>\n<p>        Navigate to the jobs directory. This contains folders for available example jobs you can run. For this walkthrough, go to the dt-pytorch directory containing the training job. <\/p>\n<p>        Configure the required environment variables: <\/p>\n<p>          AWS_ACCESS_KEY_ID<br \/>\nAWS_SECRET_ACCESS_KEY<br \/>\nAWS_REGION<br \/>\nANYSCALE_CLOUD_NAME <\/p>\n<p>        Create <a href=\"https:\/\/docs.anyscale.com\/configuration\/compute\/overview\/\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">Anyscale compute configuration<\/a>:<br \/>.\/1.create-compute-config.sh<br \/>\n        Submit the training job:<br \/>.\/2.submit-dt-pytorch.sh<br \/>\nThis uses the job configuration specified in job_config.yaml. 
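As an illustration only, a minimal job config might look like the following. The field names here are assumptions drawn from the Anyscale job API, and the values are hypothetical; the job_config.yaml shipped in the dt-pytorch directory is the authoritative version.

```yaml
# Hypothetical minimal Anyscale job config -- field names assumed from the
# Anyscale job API; consult the repository's job_config.yaml for the real one.
name: dt-pytorch-fashion-mnist
entrypoint: python dt_pytorch.py          # illustrative entrypoint script name
working_dir: .
compute_config: my-hyperpod-compute-config # the compute config created in the previous step
max_retries: 1
```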
For more information on the job config, refer to <a href=\"https:\/\/docs.anyscale.com\/reference\/job-api#jobconfig\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">JobConfig<\/a>.<br \/>\n        Monitor the deployment. You will see the newly created head and worker pods in the anyscale namespace.<br \/>kubectl get pods -n anyscale<br \/>\n        View the job status and logs on the Anyscale console to monitor your submitted job\u2019s progress and output.<br \/><img decoding=\"async\" loading=\"lazy\" class=\"aligncenter size-large wp-image-117077\" style=\"margin: 10px 0px 10px 0px;border: 1px solid #CCCCCC\" src=\"https:\/\/www.newsbeep.com\/uk\/wp-content\/uploads\/2025\/10\/ml-19194-training-logs-1024x557.png\" alt=\"Ray distributed training output displaying worker\/driver logs, checkpoints, metrics, and configuration details for ML model training\" width=\"1024\" height=\"557\"\/> <\/p>\n<p>       Clean up <\/p>\n<p>To clean up your Anyscale cloud, run the following command:<\/p>\n<p>        cd ..\/..<br \/>\n.\/5.remove-anyscale.sh <\/p>\n<p>To delete your SageMaker HyperPod cluster and associated resources, delete the CloudFormation stack, if that is how you created the cluster and its resources.<\/p>\n<p>       Conclusion <\/p>\n<p>This post demonstrated how to set up and deploy the Anyscale Operator on SageMaker HyperPod using Amazon EKS for orchestration. SageMaker HyperPod and Anyscale RayTurbo provide a highly efficient, resilient solution for large-scale distributed AI workloads: SageMaker HyperPod delivers robust, automated infrastructure management and fault recovery for GPU clusters, and RayTurbo accelerates distributed computing and optimizes resource usage with no code changes required. 
By combining the high-throughput, fault-tolerant environment of SageMaker HyperPod with RayTurbo\u2019s faster data processing and smarter scheduling, organizations can train and serve models at scale with improved reliability and significant cost savings, making this stack ideal for demanding tasks like large language model pre-training and batch inference.<\/p>\n<p>For more examples of using SageMaker HyperPod, refer to the <a href=\"https:\/\/catalog.workshops.aws\/sagemaker-hyperpod-eks\/en-US\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">Amazon EKS Support in Amazon SageMaker HyperPod workshop<\/a> and the <a href=\"https:\/\/docs.aws.amazon.com\/sagemaker\/latest\/dg\/sagemaker-hyperpod.html\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">Amazon SageMaker HyperPod Developer Guide<\/a>. For information on how customers are using RayTurbo, refer to <a href=\"https:\/\/www.anyscale.com\/product\/platform\/rayturbo\" target=\"_blank\" rel=\"noopener noreferrer nofollow\">RayTurbo<\/a>.<\/p>\n<p>\u00a0<\/p>\n<p>       About the authors <\/p>\n<p style=\"clear: both\"><img decoding=\"async\" loading=\"lazy\" class=\"size-full wp-image-117076 alignleft\" src=\"https:\/\/www.newsbeep.com\/uk\/wp-content\/uploads\/2025\/10\/ml-19194-sindhura.jpeg\" alt=\"\" width=\"100\" height=\"133\"\/><a href=\"https:\/\/www.linkedin.com\/in\/sindhura-palakodety-a6724416\/\" rel=\"nofollow noopener\" target=\"_blank\">Sindhura Palakodety<\/a> is a Senior Solutions Architect at AWS and Single-Threaded Leader (STL) for ISV Generative AI, where she is dedicated to empowering customers in developing enterprise-scale, Well-Architected solutions. 
She specializes in generative AI and data analytics domains, helping organizations use innovative technologies for transformative business outcomes.<\/p>\n<p style=\"clear: both\"><img decoding=\"async\" loading=\"lazy\" class=\"wp-image-117097 size-full alignleft\" src=\"https:\/\/www.newsbeep.com\/uk\/wp-content\/uploads\/2025\/10\/ml-19194-mark-1.jpeg\" alt=\"\" width=\"100\" height=\"133\"\/><a href=\"https:\/\/www.linkedin.com\/in\/mark-vinciguerra\/\" rel=\"nofollow noopener\" target=\"_blank\">Mark Vinciguerra<\/a> is an Associate Specialist Solutions Architect at AWS based in New York. He focuses on generative AI training and inference, with the goal of helping customers architect, optimize, and scale their workloads across various AWS services. Prior to AWS, he went to Boston University and graduated with a degree in Computer Engineering.<\/p>\n<p style=\"clear: both\"><img decoding=\"async\" loading=\"lazy\" class=\"wp-image-117098 size-full alignleft\" src=\"https:\/\/www.newsbeep.com\/uk\/wp-content\/uploads\/2025\/10\/ml-19194-flo-1.jpeg\" alt=\"\" width=\"100\" height=\"133\"\/><a href=\"http:\/\/www.linkedin.com\/in\/flogauter\/\" rel=\"nofollow noopener\" target=\"_blank\">Florian Gauter<\/a> is a Worldwide Specialist Solutions Architect at AWS, based in Hamburg, Germany. He specializes in AI\/ML and generative AI solutions, helping customers optimize and scale their AI\/ML workloads on AWS. With a background as a Data Scientist, Florian brings deep technical expertise to help organizations design and implement sophisticated ML solutions. 
He works closely with customers worldwide to transform their AI initiatives and maximize the value of their ML investments on AWS.<\/p>\n<p style=\"clear: both\"><img decoding=\"async\" loading=\"lazy\" class=\"alignleft wp-image-117073 size-full\" src=\"https:\/\/www.newsbeep.com\/uk\/wp-content\/uploads\/2025\/10\/ml-19194-alex.jpeg\" alt=\"\" width=\"100\" height=\"133\"\/><a href=\"https:\/\/www.linkedin.com\/in\/alex-iankoulski\/\" rel=\"nofollow noopener\" target=\"_blank\">Alex Iankoulski<\/a> is a Principal Solutions Architect in the Worldwide Specialist Organization at AWS. He focuses on orchestration of AI\/ML workloads using containers. Alex is the author of the <a href=\"https:\/\/bit.ly\/do-framework\" rel=\"nofollow noopener\" target=\"_blank\">do-framework<\/a> and a <a href=\"https:\/\/www.docker.com\/captains\/alex-iankoulski\/\" rel=\"nofollow noopener\" target=\"_blank\">Docker captain<\/a> who loves applying container technologies to accelerate the pace of innovation while solving the world\u2019s biggest challenges. Over the past 10 years, Alex has worked on helping customers do more on AWS, democratizing AI and ML, combating climate change, and making travel safer, healthcare better, and energy smarter.<\/p>\n<p style=\"clear: both\"><img decoding=\"async\" loading=\"lazy\" class=\"alignleft wp-image-117072 size-full\" src=\"https:\/\/www.newsbeep.com\/uk\/wp-content\/uploads\/2025\/10\/ml-19194-anoop.jpeg\" alt=\"\" width=\"100\" height=\"133\"\/><a href=\"https:\/\/www.linkedin.com\/in\/anoop-saha\/\" rel=\"nofollow noopener\" target=\"_blank\">Anoop Saha<\/a> is a Senior GTM Specialist at AWS focusing on generative AI model training and inference. He is partnering with top foundation model builders, strategic customers, and AWS service teams to enable distributed training and inference at scale on AWS and lead joint GTM motions. 
Before AWS, Anoop has held several leadership roles at startups and large corporations, primarily focusing on silicon and system architecture of AI infrastructure.<\/p>\n<p style=\"clear: both\"><img decoding=\"async\" loading=\"lazy\" class=\"alignleft wp-image-117071 size-thumbnail\" src=\"https:\/\/www.newsbeep.com\/uk\/wp-content\/uploads\/2025\/10\/ml-19194-dominic-100x103.png\" alt=\"\" width=\"100\" height=\"103\"\/><a href=\"https:\/\/www.linkedin.com\/in\/dominic-catalano-81ab6976\/\" rel=\"nofollow noopener\" target=\"_blank\">Dominic Catalano<\/a> is a Group Product Manager at Anyscale, where he leads product development across AI\/ML infrastructure, developer productivity, and enterprise security. His work focuses on distributed systems, Kubernetes, and helping teams run AI workloads at scale.<\/p>\n","protected":false},"excerpt":{"rendered":"This post was written with Dominic Catalano from Anyscale. Organizations building and deploying large-scale AI models often face&hellip;\n","protected":false},"author":2,"featured_media":189125,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[21],"tags":[4323,86,56,54,55],"class_list":{"0":"post-189124","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-computing","8":"tag-computing","9":"tag-technology","10":"tag-uk","11":"tag-united-kingdom","12":"tag-unitedkingdom"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts\/189124","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/comments?post=189124"}],"version-history":[{"count":0
,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/posts\/189124\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/media\/189125"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/media?parent=189124"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/categories?post=189124"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/uk\/wp-json\/wp\/v2\/tags?post=189124"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}