Hugging Face commits $10 million to free shared GPUs

Hugging Face, one of the biggest names in machine learning, is putting $10 million into free shared GPUs to help developers build new AI technologies. The goal is to help small developers, academics, and startups push back against the centralization of AI advances.

“We are fortunate to be in a position to invest in our community,” Clem Delangue, CEO of Hugging Face, told The Verge. Delangue said the investment was possible because Hugging Face is “profitable or close to profitable” and recently raised $235 million in funding, valuing the company at $4.5 billion.

Delangue is concerned about whether AI startups will be able to compete with the tech giants. The most important advances in artificial intelligence, such as GPT-4, the algorithms behind Google Search, and Tesla's Full Self-Driving system, remain hidden within the confines of big technology companies. Not only are these companies financially incentivized to keep their models proprietary, but they also have billions of dollars of computational resources at their disposal, multiplying their gains and giving them a leg up that can make it impossible for startups to catch up.


Hugging Face aims to make cutting-edge AI technology accessible to everyone, not just the tech giants. The Verge spoke with Delangue at Google I/O, the tech giant's flagship conference, where Google executives announced a number of AI features for their products and even a family of open source models called Gemma. For Delangue, that approach is not the future he envisions.

“If you go down the open source path, you're moving towards a world where most companies, most organizations, most nonprofits, policymakers, and regulators can actually do AI. A more decentralized approach would be a better world, in my opinion,” Delangue said.

How it works

Access to computing poses a major challenge to building large language models, often favoring companies like OpenAI and Anthropic, which secure deals with cloud providers for large amounts of computing resources. Hugging Face aims to level the playing field by donating shared GPUs to the community through a new program called ZeroGPU.

A shared GPU can be accessed by multiple users or applications simultaneously, eliminating the need for each to have its own dedicated GPU. ZeroGPU is available through Hugging Face's Spaces, a hosting platform for publishing apps, where more than 300,000 AI demos have been created so far on CPUs or paid GPUs, the company said.


Access to the shared GPUs is determined by usage: if some GPU capacity is not being actively utilized, it is made available to someone else. This makes them cost-effective, energy-efficient, and well suited to community-wide use. ZeroGPU runs on Nvidia A100 GPUs, which offer about half the compute speed of the popular and pricier H100s.
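The usage-based idea — idle capacity flows to whoever asks for it next — can be illustrated with a toy allocator. This is a simplified sketch for intuition only; the class and method names are hypothetical and this is not Hugging Face's actual ZeroGPU scheduler.

```python
from dataclasses import dataclass, field

@dataclass
class SharedGPUPool:
    """Toy model of usage-based GPU sharing: slots not actively
    held by one user are available to anyone else. Illustrative
    only -- not Hugging Face's real scheduler."""
    total_slots: int
    in_use: dict = field(default_factory=dict)  # user -> slots held

    def available(self) -> int:
        # Idle capacity is whatever nobody currently holds.
        return self.total_slots - sum(self.in_use.values())

    def acquire(self, user: str, slots: int = 1) -> bool:
        # Grant the request only if enough idle capacity exists now.
        if slots <= self.available():
            self.in_use[user] = self.in_use.get(user, 0) + slots
            return True
        return False

    def release(self, user: str) -> None:
        # Returning slots immediately frees them for other users.
        self.in_use.pop(user, None)

pool = SharedGPUPool(total_slots=4)
assert pool.acquire("demo-a", 3)       # demo-a takes 3 of 4 slots
assert not pool.acquire("demo-b", 2)   # only 1 idle slot, so denied
pool.release("demo-a")                 # demo-a finishes its request
assert pool.acquire("demo-b", 2)       # freed capacity is reusable
```

The contrast with dedicated GPUs is the point: in a reserved-instance model, demo-a's three slots would sit billed but idle after its request finished, instead of being handed to demo-b.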

“It's very difficult to get enough GPUs from the major cloud providers, and the way to get them is to commit to very large numbers over a long period of time, which creates a high barrier to entry,” Delangue said.

Typically, companies commit to cloud providers such as Amazon Web Services for a year or more to secure GPU resources. This arrangement disadvantages small businesses, indie developers, and academics, who build on a smaller scale and cannot predict whether their projects will gain traction; they pay for the GPUs regardless of how much they use them.

“Knowing the number of GPUs and the budget needed is also a forecasting nightmare,” Delangue says.

Open source AI is catching up

As AI advances rapidly behind closed doors, Hugging Face's goal is to help people build more AI technology openly.

“If a few organizations become too dominant, it will be difficult to fight against them later on,” Delangue says.

Andrew Reed, a machine learning engineer at Hugging Face, built an app that visualizes the progress of proprietary and open source LLMs over time, as scored by the LMSYS Chatbot Arena, showing the gap between the two steadily closing.

In the year since Meta released the first version of its open source AI model Llama, more than 35,000 variations of it have been shared on Hugging Face, ranging from quantized and merged models to specialized models for biology and Mandarin.

“AI should not be in the hands of a few. With this commitment to open source developers, we look forward to seeing what everyone will build next, in a spirit of collaboration and transparency,” Delangue said in a press release.
