Hugging Face’s Merve Noyan explains how agents can now train models and extend the capabilities of the open agent ecosystem. Noyan highlights the Hugging Face Hub capabilities available to agents, such as searching for models and datasets, running jobs, and querying Spaces from an LLM-powered agent.
Hugging Face: Agents can now train models with new skills
Visual TL;DR

[Diagram: the agent trains a model via the Hugging Face Hub; Hub integration provides skills, local model serving, and model discovery.]
Agents train models: Agents can now train models with new skills
Hugging Face Hub: A central repository for models, datasets, and applications
Hub integration: Agents leverage the Hub to search for models and datasets and to run jobs.
Leverage skills: Skills enable agents to perform advanced training tasks.
Local model serving: Agents can interact with locally served models for privacy and efficiency.
Enhanced AI Workflow: A more sophisticated workflow for AI development
Model discovery: The agent can discover models based on benchmarks.
Open Agent Ecosystem and Hugging Face Hub Integration
Noyan explains that the Hugging Face Hub serves as a central repository for machine learning models, datasets, and applications, fostering a collaborative environment. The platform hosts a vast collection of models and datasets and lets developers share and discover resources.
Integrating agents with the Hugging Face Hub enables more sophisticated workflows. Agents can now leverage Hugging Face’s infrastructure to perform tasks such as selecting models based on benchmark results, fine-tuning models on specific datasets, and hosting agent traces for analysis.
Using skills to train models
A key aspect of this progress is the introduction of “skills” available to agents. These skills let agents interact programmatically with the Hugging Face ecosystem. For example, the Hugging Face CLI skill allows agents to search for models, manage datasets, launch Spaces, and run jobs directly.
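The talk doesn’t show the skill’s internals, but a minimal sketch of the kind of programmatic wrapper such a skill could expose, using the huggingface_hub client library, might look like this (the search_models helper and its parameters are hypothetical):

```python
from huggingface_hub import HfApi

api = HfApi()

def search_models(query: str, limit: int = 5) -> list[dict]:
    """Hypothetical agent skill: search the Hub for models matching a query,
    sorted by download count (descending)."""
    models = api.list_models(search=query, sort="downloads", direction=-1, limit=limit)
    return [{"id": m.id, "downloads": m.downloads, "likes": m.likes} for m in models]

# Example call an agent might make:
print(search_models("ocr"))
```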
Noyan shows how the benchmarks and leaderboards available on the Hugging Face Hub can steer agents toward the best model for a specific task, such as OCR on French documents. The agent can also automatically retrieve the necessary information and suggest the best configuration.
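As a toy illustration of that discovery step, here is a hedged sketch of a Hub query an agent could run. Note the assumptions: “image-to-text” is a real Hub pipeline tag used here as a proxy for OCR, the “fr” language tag narrows to French, and download counts stand in for the benchmark and leaderboard signals the talk actually relies on:

```python
from huggingface_hub import HfApi

api = HfApi()
# Candidate OCR models for French documents, ranked by popularity as a
# rough proxy; the talk's agent consults benchmarks and leaderboards instead.
candidates = api.list_models(
    filter=["image-to-text", "fr"],
    sort="downloads",
    direction=-1,
    limit=5,
)
for model in candidates:
    print(model.id, model.downloads)
```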
Serving models locally and agent interaction
The presentation also touches on the ability to serve LLMs locally, which offers greater flexibility and control. Tools like llama.cpp, and the agents that use them, can be integrated with the Hugging Face Hub, allowing users to run models on their own infrastructure. This is especially useful for privacy-sensitive applications and for performance optimization.
Noyan shows how to configure agents to use local LLM endpoints, enabling a seamless workflow for training and inference without relying solely on cloud-based services. The Hub’s model repositories also provide detailed information about hardware compatibility and recommended configurations for various models.
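The talk doesn’t pin down a specific client setup. One common pattern, sketched here under the assumption that llama.cpp’s llama-server is already running locally (it exposes an OpenAI-compatible API, by default on port 8080), is to point a standard client at the local endpoint:

```python
from openai import OpenAI

# Assumes a llama.cpp server is running at http://localhost:8080.
client = OpenAI(
    base_url="http://localhost:8080/v1",
    api_key="not-needed",  # local servers typically ignore the API key
)

response = client.chat.completions.create(
    model="local",  # llama.cpp serves one loaded model; the name is a placeholder
    messages=[{"role": "user", "content": "Which OCR model should I fine-tune?"}],
)
print(response.choices[0].message.content)
```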
Practical skills: training and discovery
Noyan illustrates these concepts with practical examples, including a demonstration of training a model remotely on Hugging Face infrastructure. Following the user’s prompts, the agent identifies a suitable OCR model, retrieves its benchmark performance, and starts the training process.
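The talk doesn’t specify the training stack behind that step. As a hedged sketch of the general shape of such a job, here is a minimal supervised fine-tuning script using TRL’s SFTTrainer; the model and dataset ids are placeholders, not names from the talk, and a real OCR model would need an image-to-text recipe rather than plain SFT:

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder ids, not from the talk.
dataset = load_dataset("some-org/french-ocr-data", split="train")

trainer = SFTTrainer(
    model="some-org/ocr-base-model",
    train_dataset=dataset,
    args=SFTConfig(output_dir="ocr-finetuned", push_to_hub=True),
)
trainer.train()
```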
The presentation also highlights skills that enable agents to perform actions such as building demos with Gradio and exploring datasets with Hugging Face Datasets. These skills are designed to integrate easily with various agent frameworks to extend their functionality.
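The generated demo code isn’t shown in the talk; a minimal Gradio sketch of the kind of OCR demo an agent might build (the model id is again a placeholder) could look like this:

```python
import gradio as gr
from transformers import pipeline

# Placeholder model id, not from the talk.
ocr = pipeline("image-to-text", model="some-org/ocr-finetuned")

def transcribe(image):
    # The image-to-text pipeline returns a list of {"generated_text": ...} dicts.
    return ocr(image)[0]["generated_text"]

demo = gr.Interface(fn=transcribe, inputs=gr.Image(type="pil"), outputs="text")
demo.launch()
```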
Conclusion
The advances Noyan discusses highlight Hugging Face’s commitment to building a robust and accessible ecosystem for AI development. By enabling agents to train models and interact with the Hugging Face Hub, the platform gives developers more flexibility and efficiency when building and deploying AI applications.