Cloudian launches Object Storage AI Platform with Corporate LLM

Cloudian has launched the HyperScale AI Data Platform, an on-premises S3-based storage and artificial intelligence (AI) infrastructure bundle aimed at businesses that need quick answers from corporate information.

The offer combines Cloudian object storage with NVIDIA RTX Pro 6000 Blackwell graphics processing units (GPUs) in a retrieval-augmented generation (RAG) architecture, bringing large language model (LLM) capabilities to the mass of untapped enterprise data.

The target use case is natural language querying of corporate data, so that employees can get quick answers from the information their organization holds: company procedures, data useful for marketing and product development, or past codebases, for example. Cloudian emphasizes that the product works completely on-premises and can be air-gapped to ensure the security of an organization's data.
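The RAG pattern described here can be sketched in a few lines: retrieve the documents most relevant to a query, then assemble them into a prompt for the LLM. Everything below is an illustrative stand-in, not Cloudian's API; the toy word-overlap scorer takes the place of the neural embeddings and vector database a real system would use.

```python
# Minimal RAG sketch: retrieve relevant documents, then build an
# LLM prompt from them. All names here are hypothetical placeholders.

def score(query: str, doc: str) -> int:
    # Toy relevance score: count of shared words. A real system would
    # compare neural embeddings stored in a vector database instead.
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    # Return the k highest-scoring documents for this query.
    return sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    # A real deployment would send this prompt to the on-prem LLM.
    context = "\n".join(retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Expense claims must be filed within 30 days.",
    "The marketing launch plan targets the third quarter.",
    "The legacy codebase still depends on Python 2.7 modules.",
]
print(build_prompt("When must expense claims be filed?", docs))
```

Because the LLM only sees retrieved context at query time, the underlying corpus can grow without any model retraining, which is the property the article highlights.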

The platform consists of three nodes of on-premises S3 object storage, connected using S3 over RDMA (Remote Direct Memory Access), developed by NVIDIA, which allows for quick connections between storage nodes. RDMA was originally designed to move data directly from one server's memory to another's for high-throughput, low-latency operations while avoiding drawing on central processing unit (CPU) resources.

S3 over RDMA takes advantage of this approach to reduce latency by bypassing the TCP/IP stack. It is intended to address bottlenecks that can occur between storage nodes during AI processing, which are an important constraint on AI performance.

Sitting on top of this, at the heart of the platform, is a billion-scale vector database. Vector databases have emerged as a core component as AI has come to the forefront. When data is ingested into an AI system, its characteristics are encoded as arrays of numbers, known as embeddings. These values can then be compared mathematically to measure similarity in meaning and context.

Cloudian's HyperScale AI Data Platform allows new information to be ingested without the need to reprocess the entire corpus. The architecture also supports image and structured data, as well as the text the product primarily targets.
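The two ideas above, embedding similarity and retraining-free ingestion, can be illustrated with a toy in-memory vector index. This is a sketch of the general technique only, not Cloudian's implementation: adding new data is just appending another vector, and search ranks stored vectors by cosine similarity to the query.

```python
import math

class VectorIndex:
    """Toy in-memory vector index using brute-force cosine search.
    New vectors are simply appended -- no retraining or rebuild needed."""

    def __init__(self):
        self.items = []  # list of (item_id, vector) pairs

    def add(self, item_id, vector):
        # Ingesting new data is a plain append; existing entries untouched.
        self.items.append((item_id, vector))

    @staticmethod
    def _cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    def search(self, query, k=1):
        # Rank every stored vector by similarity to the query.
        scored = [(self._cosine(query, v), i) for i, v in self.items]
        scored.sort(reverse=True)
        return [i for _, i in scored[:k]]

idx = VectorIndex()
idx.add("doc-a", [1.0, 0.0, 0.0])
idx.add("doc-b", [0.0, 1.0, 0.0])
print(idx.search([0.9, 0.1, 0.0]))  # doc-a is the nearest neighbour

# New information arrives: just another append, no reprocessing.
idx.add("doc-c", [0.95, 0.05, 0.0])
print(idx.search([0.9, 0.1, 0.0]))  # doc-c now ranks first
```

A production system at "billion-vector" scale would replace the brute-force scan with an approximate nearest-neighbour index, but the ingest-without-retraining property is the same.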

Cloudian is one of many suppliers that offer enterprise object store products. In fact, analyst house GigaOm's 2025 object storage radar lists 22 object storage players. Several of these have some kind of play in object storage platforms targeting AI use cases, and Cloudian ranks among the most innovative of them.

The space of products combining object storage with RAG and vector database features also includes specialists such as Scality and MinIO, and general storage players such as Pure Storage and NetApp.

Cloudian's object storage sits under its HyperStore family. It is S3-native, but also offers SMB and NFS file access. HyperStore nodes come in spinning-disk HDD models as well as all-flash options with TLC NVMe drives.

Cloudian's HyperScale AI Data Platform uses the Llama 3.2-3B-Instruct LLM. Its four NVIDIA GPUs share the different phases of the workload: LLM inference, vector database operations, reranking and relevance scoring, and vector embedding, among other functions.

As with popular consumer LLM services, users get an easy-to-use graphical user interface that allows them to ask questions in natural language and then refine the results.

Target use cases include enterprise knowledge mining, secure document intelligence, video content analytics, and building data lineage and audit trails for compliance and governance purposes.


