Runware raises $13 million seed round to help customers achieve up to 10x cost savings in AI media generation



Runware Funding News - UK-based Runware secures $13 million in funding

Runware, a performance- and price-focused AI-as-a-Service provider, has announced $13 million in funding led by global software investor Insight Partners, with participation from existing investors a16z speedrun, Begin Capital and Zero Prime. The funding will be used to extend Runware's capabilities beyond image and video generation to all media workflows, including audio, LLMs, and 3D. To date, more than 4 billion visual assets have been generated with Runware's inference engine, and over 100,000 developers have adopted it less than a year after its release. The platform hosts more than 400,000 AI models and powers media inference for 250 million end users through customers such as Quora, NightCafe, OpenArt, Focal and more.

Runware runs its AI media generation API on its own Sonic Inference Engine®, which integrates custom-designed hardware and bespoke software to achieve greater cost-effectiveness and higher generation speeds. As intensive workloads such as video generation grow in popularity and GPU costs burn through budgets, consumer AI apps are increasingly looking to cut costs. Specialized providers like Runware offer all-media generation with up to 10x cost savings on implementation and inference. Beyond inference savings, Runware's API unifies all model providers behind a common data standard, cutting the time engineering teams spend adding new models to minutes through simple parameter changes.
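The "common data standard" claim can be sketched as a request payload where switching model providers is a one-parameter change. This is a minimal illustrative mock, not Runware's documented schema: the field names (`taskType`, `positivePrompt`, `model`, etc.) and model identifiers are assumptions chosen to show the idea, and no network call is made.

```python
import json
import uuid

def build_task(model: str, prompt: str, width: int = 1024, height: int = 1024) -> dict:
    """Build one inference task in a unified schema (hypothetical field names).

    Because every provider is addressed through the same payload shape,
    swapping providers reduces to changing the `model` parameter.
    """
    return {
        "taskType": "imageInference",
        "taskUUID": str(uuid.uuid4()),  # per-request identifier
        "positivePrompt": prompt,
        "model": model,                 # e.g. provider A's model vs. provider B's
        "width": width,
        "height": height,
    }

# Same request shape for two different (placeholder) model providers:
task_a = build_task("provider-a:model@1", "a lighthouse at dusk")
task_b = build_task("provider-b:model@2", "a lighthouse at dusk")

# Ignoring the per-request UUID, only the `model` field differs.
diff = {k for k in task_a if task_a[k] != task_b[k] and k != "taskUUID"}
print(json.dumps(sorted(diff)))
```

The design point is that integration effort stays constant as new models launch: the client never learns a new endpoint or payload shape, only a new `model` value.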


All media generation in one API: images, video, audio, LLMs

Following the recent round, Runware is investing heavily in extending its inference engine and APIs to all AI media workloads. The company already integrates image and video models from Black Forest Labs, OpenAI, Ideogram, ByteDance, Kling, MiniMax Hailuo, Google Veo, PixVerse, Vidu, and Alibaba's Wan and Qwen, and is actively expanding to audio and LLM models. A fully featured media generator or content creation tool can now be built on Runware's APIs in minutes. Its model hub currently hosts more than 400,000 AI generation models.

By supporting all media generation with a single inference engine, Runware removes complexity from AI integration. Its API can replace dozens or hundreds of individual model integrations, along with the need for large internal infrastructure, ML teams, and six-figure R&D budgets. Product teams can now ship AI media features without that setup overhead. Across media and model types, Runware aims to be the fastest, cheapest and most flexible API for all AI workloads.

“As more and more models launch, developers face dozens or even hundreds of endpoints to integrate and maintain. By serving models from our own inference PODs rather than relying on existing cloud platforms, we can offer inference costs up to 90% lower than any cloud provider.” — Flaviu Radulescu, founder of Runware

How Runware can reduce generation costs by up to 90%

Runware's ability to optimize at the hardware level builds on Flaviu Radulescu's 20 years of experience constructing bare-metal data clusters for clients like Vodafone, Booking.com and Transport for London. Runware designs and builds its own custom GPU and network hardware, packaged in proprietary inference PODs optimized for rapid deployment and cost-effective use of renewable energy. This vertically integrated design reduces inference costs by up to 90%.

“Runware is a hidden gem that all serious AI applications should consider. It offers extremely competitive pricing, consistently strong performance, and responsive and helpful customer support across the top models.” — OpenArt CEO Coco Mao

“At the core of Runware's advantage is its dedicated Sonic Inference Engine®. While others often rely on commodity cloud infrastructure, Runware has built its own workload-specific infrastructure, letting it control latency, throughput and cost at a fundamental level. That technical edge can make Runware a performance leader in AI media.” — George Mathew, Managing Director at Insight Partners. Mathew will join Runware's board of directors as part of the round.


Unlocking developer flexibility

Runware offers a cost and performance edge without compromising quality or flexibility, thanks to its custom Sonic Inference Engine® and developer API. Built for configurable workflows, developers can mix and match models from day one and integrate them into existing or new pipelines. Features previously limited to image generation, such as batch processing, parallel inference, ComfyUI support, and ControlNet or LoRA editing, have now been extended to video.

“We chose Runware as our primary inference partner for its API pricing and flexibility. NightCafe users are avid AI explorers: they want to try every model, hyperparameter, LoRA and other option, and Runware lets us offer that at less than a fifth of the cost of other providers.” — Angus Russell, founder of NightCafe

“We moved to Runware on a high-traffic day. Their API was easy to integrate and handled the sudden load very smoothly. The combination of quality, speed and price was by far the best in the market, and they have been a great partner as we scaled.” — Robert Cunningham, co-founder of Focal



