In this week’s real-time analytics news: Amazon Web Services (AWS) announced new capabilities in SageMaker AI.
Keeping pace with news and developments in the real-time analytics and AI market can be a daunting task. Fortunately, we have you covered with a summary of the items our staff comes across each week. And if you prefer it in your inbox, sign up here!
AWS announced new capabilities in SageMaker AI to accelerate how customers build and train AI models. The new capabilities include:
SageMaker HyperPod observability provides real-time visibility into model development tasks and compute resources, helping customers bring models to market faster by reducing the time to troubleshoot performance issues from days to minutes.
Customers can now easily deploy models from SageMaker JumpStart, as well as fine-tuned custom models, on SageMaker HyperPod for fast, scalable inference.
With new remote connections to SageMaker AI, developers and data scientists can quickly and easily connect to SageMaker AI from their local IDE, maintaining access to the custom tools and familiar workflows that help them work most efficiently.
Real-time analytics news in brief
Cerebras Systems announced new partnerships and integrations with Hugging Face, DataRobot, and Docker. These collaborations dramatically increase the accessibility and impact of Cerebras’ AI inference, enabling a new generation of performant, interactive, and intelligent agentic AI applications. For example, Hugging Face’s SmolAgents, now powered by Cerebras inference and deployed with Gradio on Hugging Face Spaces, can deliver near-instant responses with dramatically improved interactivity. DataRobot’s Syftr, integrated with Cerebras’ AI inference performance, delivers a toolchain for production-grade agentic apps. And with Docker Compose and Cerebras, developers can spin up powerful, multi-agent AI stacks in seconds.
Bitwarden announced the launch of a new Model Context Protocol (MCP) server, enabling secure integration between AI agents and credential workflows. The Bitwarden MCP server operates on a user’s local machine and allows AI assistants to access, generate, retrieve, and manage credentials while preserving zero-knowledge, end-to-end encryption through a local-first architecture.
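For context on how such an integration works: MCP clients and servers exchange JSON-RPC 2.0 messages, with tool invocations carried by the protocol’s `tools/call` method. The sketch below builds such a request in Python; the tool name `generate_password` and its arguments are hypothetical illustrations, not Bitwarden’s actual tool schema.

```python
import json

def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 request for the MCP `tools/call` method.

    The tool name and argument names passed in are illustrative only;
    a real MCP server advertises its actual tools via `tools/list`.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Hypothetical example: ask a credential-management MCP server for a new password.
request = make_tool_call(1, "generate_password", {"length": 24, "symbols": True})
print(request)
```

Because the server runs locally, messages like this stay on the user’s machine, which is what preserves the local-first, zero-knowledge design described above.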
Cognizant announced the launch of Cognizant Agent Foundry, an offering designed to help enterprises design, deploy, and orchestrate autonomous AI agents at scale. Cognizant Agent Foundry supports adaptive operations, real-time decision-making, and personalized customer experiences, empowering organizations to embed agentic capabilities across workflows.
Denodo announced Denodo DeepQuery, a capability now in private preview and generally available soon, enabling generative AI (GenAI) to go beyond retrieving facts to investigating, synthesizing, and explaining its reasoning. Denodo also announced the availability of Model Context Protocol (MCP) support as part of the Denodo AI SDK.
Domino Data Lab announced the launch of its Vibe Modeling offering. The solution lets data scientists describe their analytical intent and desired outcomes, with AI accelerating the time-consuming experimental phases of model building. The solution is available via Domino’s GitHub repository.
Graphwise announced the availability of GraphDB 11. This latest release makes it easier to integrate with multiple Large Language Models (LLMs) and enables AI applications to deliver more accurate and contextually relevant results. With Model Context Protocol (MCP) support, GraphDB 11 offers swift integration of data in agentic AI ecosystems and enables AI platforms like Microsoft Copilot Studio to tap directly into enterprise knowledge.
Hydrolix announced support for AWS Elemental MediaLive, MediaPackage, and MediaTailor, as well as client-side analytics from Datazoom. The new integrations provide companies with real-time and historical insights into video streaming performance and advertising delivery. The AWS Elemental and Datazoom integrations complement existing integrations with AWS CloudFront and AWS WAF.
Kong released AI Gateway 3.11, expanding its AI infrastructure tool with new capabilities to help organizations grow, secure, and scale GenAI and agent-based systems. The new release includes more than 10 out-of-the-box GenAI capabilities designed to help teams build scalable, multimodal AI agents while cutting token costs and strengthening guardrails.
Liquid AI announced the launch of its next-generation Liquid Foundation Models (LFM2). Unlike traditional transformer-based models, LFM2 is composed of structured, adaptive operators that allow for more efficient training, faster inference, and better generalization, especially in long-context or resource-constrained scenarios. Additionally, Liquid AI open-sourced LFM2; its weights can be downloaded from Hugging Face and are also available through the Liquid Playground for testing.
OpenText introduced MyAviator, a secure, personal digital worker built for the enterprise. MyAviator enables individuals to securely interact with their own documents, extract insights, and generate content, all within the trusted OpenText ecosystem. The solution leverages agentic AI and the company’s suite of Aviator solutions to drive diverse automation use cases.
SambaNova announced SambaManaged, an inference-optimized data center product offering deployable in just 90 days, versus the typical 18 to 24 months. Designed for rapid deployment, this modular product enables existing data centers to immediately stand up AI inference services with minimal infrastructure modification.
WEKA unveiled NeuralMesh Axon, an advanced storage system that leverages a fusion architecture designed to address the fundamental challenges of running exascale AI applications and workloads. NeuralMesh Axon seamlessly fuses with GPU servers and AI factories to streamline deployments, reduce costs, and significantly enhance AI workload responsiveness and performance.
Partnerships, collaborations, and more
Oracle and Amazon Web Services announced the general availability of Oracle Database@AWS. Customers can now run Oracle Exadata Database Service and Oracle Autonomous Database on dedicated infrastructure on Oracle Cloud Infrastructure (OCI) within AWS. Oracle Database@AWS is available in the AWS U.S. East (N. Virginia) and U.S. West (Oregon) Regions, with plans to expand availability to 20 additional AWS Regions around the world.
Additionally, customers can easily migrate their Oracle Database workloads to Oracle Database@AWS running on OCI in AWS while taking advantage of Oracle Real Application Clusters (RAC) and the latest Oracle Database 23ai with embedded AI Vector capabilities. Oracle Database@AWS includes zero-ETL (extract, transform, and load) integration, which simplifies data integration between enterprise Oracle Database services and AWS Analytics services, eliminating the need to build and manage complex data pipelines.
Anaconda announced a partnership with Prefix.dev to deliver significant performance improvements for conda package creation while maintaining the trusted conda-build experience that enterprises rely on. The enhanced conda-build will leverage Rust-based technology from rattler-build to enable faster package building while ensuring compatibility with existing conda environments and workflows.
DDN announced that Google Cloud Managed Lustre, a fully managed, high-performance parallel file system service, powered by DDN’s EXAScaler technology, is now generally available. Designed to accelerate the most demanding workloads in AI, HPC, and data-intensive enterprise environments, Google Cloud Managed Lustre brings the power of Lustre natively into the Google Cloud ecosystem.
DuploCloud announced a strategic collaboration agreement (SCA) with Amazon Web Services (AWS) to bring a new set of automated DevOps solutions to AWS customers. With this agreement, DuploCloud and AWS will co-develop go-to-market initiatives and technical integrations aimed at enabling startups, generative AI innovators, and regulated industries to launch their products and services faster, while meeting strict compliance frameworks.
PingCAP announced an expanded collaboration with Microsoft to accelerate the adoption of modern data infrastructure across the Microsoft Azure ecosystem. This collaboration brings together PingCAP’s expertise in distributed transactional and analytical systems with Microsoft’s global cloud platform to help organizations build scalable, real-time, and AI-ready applications on Azure.
If your company has real-time analytics news, send your announcements to [email protected].
In case you missed it, here are our most recent previous weekly real-time analytics news roundups: