Editor's Note: Emilia will lead an editorial roundtable on this topic at VB Transform this month. Sign up now.
Orchestration frameworks for AI services serve multiple purposes for enterprises: they define how applications and agents flow together, and they let administrators manage those workflows and agents and audit their systems.
As businesses begin to scale their AI services and move them into production, they need to ensure their agents run as expected by building a manageable, trackable, auditable and robust pipeline. Without these controls, organizations may not know what is happening inside their AI systems, discovering issues only after something goes wrong or the system falls out of regulatory compliance.
Kevin Kiley, president of enterprise orchestration company Airia, told VentureBeat in an interview that these frameworks need to include auditability and traceability.
“It's important to have that observability and be able to go back to the audit log and show what information was provided at what point,” Kiley said. “You need to know whether it was a bad actor, an internal employee who didn't realize they were sharing information, or a hallucination. You need a record of that.”
Ideally, robustness and audit trails should be built into AI systems very early on. Understanding the potential risks of a new AI application or agent, and ensuring it continues to meet standards before deployment, helps alleviate concerns about putting AI into production.
Not every organization, however, designed its systems with traceability and auditability in mind from the start. Many AI pilot programs began life as experiments, launched without an orchestration layer or an audit trail.
The big question facing businesses now is how to manage all their agents and applications, keep their pipelines robust and, if something goes wrong, have enough monitoring in place to know exactly what happened.
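The kind of record Kiley describes can start as something as simple as a wrapper that logs every agent call with who made it, what went in and what came out. A minimal sketch, with all names illustrative rather than drawn from any particular framework:

```python
import json
import time
import uuid
from datetime import datetime, timezone

AUDIT_LOG = []  # in production this would be an append-only store, not a list

def audited_call(agent_name, user, prompt, agent_fn):
    """Run an agent and record who asked for what, when, and what came back."""
    record = {
        "trace_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent": agent_name,
        "user": user,
        "input": prompt,
    }
    start = time.perf_counter()
    record["output"] = agent_fn(prompt)  # the agent itself is pluggable
    record["latency_ms"] = round((time.perf_counter() - start) * 1000, 2)
    AUDIT_LOG.append(record)
    return record["output"]

# Example: a stand-in "agent" that just echoes its input.
answer = audited_call("summarizer", "alice", "Summarize Q3 report",
                      lambda p: f"Summary of: {p}")
print(json.dumps(AUDIT_LOG[-1], indent=2))
```

With a record like this, an investigation can distinguish the cases Kiley lists: the `user` field points at who shared the information, while `input` and `output` show whether the content originated with the person or the model.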
Choose the right method
Before building any AI application, however, experts say organizations need to take stock of their data. If a company knows which data its AI systems can access and which data it used to fine-tune its models, it has a baseline against which to compare long-term performance.
“Can I run some of these AI systems and verify that they are actually performing properly?” Yrieix Garnier, vice president of products at Datadog, told VentureBeat in an interview. “That is really hard to do. You need to understand that you have a good reference system against which to validate an AI solution.”
Once an organization has identified and located its data, it needs to version the dataset (essentially assigning it a timestamp or version number) so that experiments are reproducible and changes to the model can be understood. These datasets and models, the applications or agents that use those specific models, the authorized users and baseline runtime numbers can all be loaded into an orchestration or observability platform.
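Dataset versioning of the kind described above can be sketched with a content hash plus a timestamp, so any change to the records produces a new version. The function and field names here are illustrative, not taken from any specific platform:

```python
import hashlib
import json
from datetime import datetime, timezone

def version_dataset(records, name):
    """Assign a reproducible version to a dataset.

    The version is a hash of the canonicalized content: the same records
    always produce the same version, and any edit produces a new one, so
    a run pinned to a version can be traced back to the exact data it saw.
    """
    canonical = json.dumps(records, sort_keys=True).encode("utf-8")
    return {
        "name": name,
        "version": hashlib.sha256(canonical).hexdigest()[:12],
        "created_at": datetime.now(timezone.utc).isoformat(),
        "num_records": len(records),
    }

baseline = version_dataset([{"q": "refund policy?", "a": "30 days"}],
                           "support-evals")
print(baseline["name"], baseline["version"])
```

Pinning model runs to a version record like this is what makes an experiment reproducible: if long-term performance drifts, the team can check whether the data changed or the model did.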
Just as when choosing a foundation model to build on, orchestration teams need to consider transparency and openness. While some closed-source orchestration systems offer many advantages, more open-source platforms can offer benefits that some companies value, such as increased visibility into the system's decision-making.
Open-source platforms such as MLflow, LangChain and Grafana provide granular, flexible instrumentation and monitoring for agents and models. Enterprises can choose to build their AI pipeline on a single end-to-end platform, such as Datadog, or stitch together a range of interconnected tools from providers such as AWS.
Another consideration for enterprises is plugging in a system that maps agent and application responses to compliance tools or responsible-AI policies. Both AWS and Microsoft offer services that track AI tools and how closely they adhere to the guardrails and other policies users set up.
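At its core, mapping responses to policies means screening each output against named rules, so the audit trail records which rule was triggered rather than just that something was blocked. A simplified sketch with hypothetical policies; it is not modeled on AWS's or Microsoft's actual APIs:

```python
import re

# Illustrative policies; a real deployment would pull these from a
# compliance or guardrail service rather than hard-coding them.
POLICIES = [
    ("no_ssn", re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "possible SSN in response"),
    ("no_api_key", re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"), "possible API key in response"),
]

def screen_response(agent_response):
    """Return (allowed, violations) for an agent's response.

    Each violation names the policy that fired, so reviewers can see
    exactly which rule a response broke.
    """
    violations = [
        {"policy": name, "reason": reason}
        for name, pattern, reason in POLICIES
        if pattern.search(agent_response)
    ]
    return (len(violations) == 0, violations)

ok, found = screen_response("Customer SSN is 123-45-6789.")
print(ok, found)  # blocked: the no_ssn policy matches
```

Commercial guardrail services add semantic checks on top of pattern rules, but the shape of the integration, screen each response and log the named policy hits, is the same.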
Kiley said one consideration for companies building these reliable pipelines is choosing more transparent systems. For Kiley, not being able to see how an AI system works simply isn't workable.
“Regardless of the use case or the industry, there are situations where you need that flexibility, and closed systems won't work,” he said. “There are providers with great tools, but I don't know how they're reaching these decisions.”
Join the conversation with VB Transform
From June 24 to 25 at VB Transform 2025 in San Francisco, we will lead an editorial roundtable titled “Best practices for building an agentic AI orchestration framework.” We'd love for you to join the conversation. Sign up now.
