Mirantis, a Kubernetes-native infrastructure provider for AI, announced additional capabilities for k0rdent AI. The new capabilities extend the platform beyond infrastructure management, enabling enterprises, neoclouds, and GPU cloud operators to monetize their AI infrastructure investments.
The new k0rdent AI Model Registry and k0rdent AI Inference Mesh enable organizations to securely host, manage, route, and instrument AI models and inference services across federated computing resources. Together, the two new products enable organizations to transform their raw GPU infrastructure into a managed, revenue-generating AI platform.
Mirantis also introduced the k0rdent AI inference runtime, which is designed to maximize tokens per GPU second to improve infrastructure efficiency and utilization.
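Tokens per GPU second is a throughput metric normalized by the GPU resources consumed. The announcement does not specify how k0rdent AI computes it; a minimal sketch of the underlying arithmetic, with illustrative function and parameter names, might look like:

```python
def tokens_per_gpu_second(total_tokens: int, wall_seconds: float, gpu_count: int) -> float:
    """Throughput normalized by GPU resources consumed.

    GPU-seconds = wall-clock seconds * number of GPUs serving the workload,
    so the metric is total tokens generated divided by GPU-seconds.
    """
    if wall_seconds <= 0 or gpu_count <= 0:
        raise ValueError("wall_seconds and gpu_count must be positive")
    return total_tokens / (wall_seconds * gpu_count)

# Example: 120,000 tokens generated over 60 seconds on 4 GPUs
rate = tokens_per_gpu_second(120_000, 60.0, 4)  # 500.0 tokens per GPU-second
```

Raising this number means either generating more tokens on the same hardware or serving the same load with fewer GPUs, which is the efficiency lever the inference runtime targets.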
“As organizations move AI projects from experimentation to production, infrastructure teams increasingly face operational and governance challenges around model distribution, inference visibility, compliance enforcement, and GPU economics,” said Kevin Kamil, vice president of product development at Mirantis. “Enterprises and GPU operators are largely forced to piece together fragile workflows and disconnected tools to operationalize AI. Models cannot be treated like containers because they have their own governance, sovereignty, compliance, and lifecycle requirements. The capabilities we deliver today are validated and benchmarked for our users.”
k0rdent AI Model Registry
The k0rdent AI Model Registry is optimized for AI model storage and distribution workflows. It provides a secure, OCI-native registry for managing large language models (LLMs), fine-tuned variants, quantized builds, and related AI artifacts across distributed infrastructure.
The registry reduces the operational complexity of securely distributing AI models.
k0rdent AI Inference Mesh
k0rdent AI Inference Mesh routes, measures, audits, and enforces policies on all inference requests across models, regions, clusters, and providers, giving operators visibility into where AI requests are sent, what they cost, and where compliance gaps exist.
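Conceptually, an inference mesh of this kind sits in front of model endpoints as a policy-aware router. The following is a toy sketch of that idea, not the k0rdent AI API; the class names, policy shape, and cost model are all illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class Endpoint:
    """An inference backend: where it runs and what it charges."""
    name: str
    region: str
    cost_per_1k_tokens: float

@dataclass
class InferenceRouter:
    """Toy policy-aware router: restricts requests to allowed regions
    (e.g. a data-sovereignty policy), picks the cheapest compliant
    endpoint, and records every routing decision for auditing."""
    endpoints: list
    allowed_regions: set
    audit_log: list = field(default_factory=list)

    def route(self, request_id: str) -> Endpoint:
        candidates = [e for e in self.endpoints if e.region in self.allowed_regions]
        if not candidates:
            raise RuntimeError("policy violation: no endpoint in an allowed region")
        choice = min(candidates, key=lambda e: e.cost_per_1k_tokens)
        self.audit_log.append((request_id, choice.name, choice.region))
        return choice

router = InferenceRouter(
    endpoints=[
        Endpoint("us-a", "us-east", 0.50),
        Endpoint("eu-a", "eu-west", 0.40),
    ],
    allowed_regions={"eu-west"},  # sovereignty policy: EU endpoints only
)
target = router.route("req-1")  # routes to "eu-a" and appends an audit entry
```

A production mesh would layer in authentication, per-request metering, and cross-cluster service discovery, but the core shape, policy filter, routing decision, audit trail, is the same.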
The new products are built on Mirantis’ k0rdent AI platform, which focuses on Kubernetes-native AI infrastructure across bare metal, virtual machines, managed Kubernetes, and sovereign clouds.
