Today, we’re releasing Workflows in public preview. Workflows is the orchestration layer for enterprise AI, delivering the durability, observability, and fault tolerance you need to reliably move AI-powered processes from proof of concept to production. Organizations like ASML, ABANCA, CMA-CGM, France Travail, La Banque Postale, and Moeve are already using Workflows in production to automate important processes.

Enterprise teams now have access to capable models. What they lack is a way to run them reliably in production. We see this in every industry we work with, and the failure modes are consistent: pipelines that run in notebooks but fail silently in production with no trace, long-running processes that can’t survive a network timeout, multi-step operations that need human approval mid-execution but have no mechanism to pause and resume, and systems with no way to verify that they still behave as expected after deployment.
Building the capabilities to address these challenges is complex and can take companies months. The orchestration layer has to be pieced together from scratch, and the components it connects (inference, agents, connectors, and observability) come from different tools, each with its own interface and format.
Because Workflows is part of Studio, the orchestration layer and the components it orchestrates are built to work together. Once a business process is identified, a developer creates the workflow in Python. Any workflow can be published to Le Chat, so anyone in your organization can trigger it, and every step is tracked and auditable in Studio. By bringing all of this together, Workflows lets organizations go from identifying a use case to running it in production in a matter of days.
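As an illustration of what "a workflow in Python" can look like, here is a deliberately minimal sketch: a workflow modeled as an ordered list of step functions. The names below (`Workflow`, `step`, `run`) are hypothetical stand-ins, not the actual Studio SDK.

```python
# Illustrative sketch only: a workflow as an ordered chain of Python
# step functions. These names are hypothetical, not the Studio SDK.
from typing import Any, Callable

class Workflow:
    def __init__(self, name: str) -> None:
        self.name = name
        self.steps: list[Callable[[Any], Any]] = []

    def step(self, fn: Callable[[Any], Any]) -> Callable[[Any], Any]:
        """Register a function as the next step in the workflow."""
        self.steps.append(fn)
        return fn

    def run(self, payload: Any) -> Any:
        """Execute each step in order, feeding each output forward."""
        for fn in self.steps:
            payload = fn(payload)
        return payload

wf = Workflow("invoice-review")

@wf.step
def extract(doc: str) -> dict:
    # Split the raw document into fields (toy extraction).
    return {"doc": doc, "fields": doc.split()}

@wf.step
def validate(data: dict) -> dict:
    data["valid"] = len(data["fields"]) > 0
    return data

result = wf.run("INV-001 2024-01-31 EUR 1200")
```

The point of the shape, not the names: each step is ordinary Python, and the orchestration layer is what adds durability and tracking around it.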
[Image: workflow graph]
Workflows deployed in the real world
As mentioned earlier, Mistral AI customers are already using Workflows to automate business processes in production. The examples below show how durability, observability, and human approval work in practice.
Cargo release automation.
Moving goods around the world runs on administrative procedure. Releasing a single shipment may involve customs declarations, hazardous materials classification, safety inspections, and regulatory checks across multiple jurisdictions. Missing a step can leave cargo stuck at port and expose the shipper to non-compliance.
The operational requirements for this use case: the system must tolerate intermittent timeouts, pause mid-execution for human review, and report exactly where and why a failure occurred.
Workflows lets customers automate this end to end. The workflow validates all incoming shipping documents against customs regulations, checks for anomalies, flags those that require human approval, and waits for that approval before releasing the shipment. In a workflow, the human approval step is one line of code: wait_for_input(). The workflow pauses and waits as long as needed without consuming compute, notifies reviewers, and resumes exactly where it left off. Studio records the complete run history.
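The pause-and-resume behavior can be modeled in plain Python with a generator that suspends at the approval point. This is a simplified conceptual sketch, not the Workflows SDK; the real `wait_for_input()` persists state durably rather than holding a generator in memory.

```python
# Simplified model of a human-in-the-loop pause: the generator suspends
# at the approval step and resumes with the reviewer's decision.
# Conceptual illustration only, not the Workflows SDK.

def release_workflow(shipment: dict):
    # Step 1: validate documents against (stubbed) customs rules.
    shipment["validated"] = all(doc.endswith(".pdf") for doc in shipment["docs"])
    # Step 2: hazardous cargo is flagged for human review.
    if shipment.get("hazmat"):
        decision = yield "waiting_for_approval"   # workflow pauses here
        shipment["approved"] = decision == "approve"
    else:
        shipment["approved"] = shipment["validated"]
    return shipment

run = release_workflow({"docs": ["manifest.pdf"], "hazmat": True})
status = next(run)            # advances to the pause point
try:
    run.send("approve")       # reviewer responds; workflow resumes
except StopIteration as done:
    final = done.value        # the completed shipment record
```

The production version of this pattern additionally survives process restarts, which is exactly what durable execution adds over an in-memory generator.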
Document compliance check.
KYC reviews are manual, iterative, and time-consuming. Onboarding a single customer may require extracting identity documents, checking against sanctions lists and PEP databases, cross-referencing regulatory requirements across jurisdictions, and creating structured risk assessments with supporting evidence. Doing this manually would take hours of analysis time for each case.
The operational requirements here are speed and auditability. A system that automates this process needs to be fast, and it needs to document each step and the reasoning behind it to meet regulatory requirements.
With workflows, the entire review process takes just a few minutes, and with native support for OpenTelemetry, Studio presents each step as a structured timeline that allows you to drill down to any level of detail, right down to a specific trace.
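The idea behind a step-level trace can be sketched with the standard library alone: each step is recorded as a span with a name, nesting depth, and duration, which is the raw material a timeline view is built from. This is a toy stand-in, not OpenTelemetry or Studio's actual tracing.

```python
# Minimal stdlib sketch of span-style step tracing (not OpenTelemetry
# itself): each step records a name, nesting depth, and duration.
import time
from contextlib import contextmanager

TIMELINE: list[dict] = []
_depth = 0

@contextmanager
def span(name: str):
    global _depth
    entry = {"name": name, "depth": _depth}
    start = time.perf_counter()
    _depth += 1
    try:
        yield
    finally:
        _depth -= 1
        entry["duration_s"] = time.perf_counter() - start
        TIMELINE.append(entry)

with span("kyc_review"):
    with span("extract_identity"):
        time.sleep(0.01)
    with span("sanctions_check"):
        time.sleep(0.01)

# TIMELINE now holds one record per step; child spans complete first.
```

A real OpenTelemetry setup exports spans like these to a collector, and Studio renders them as the drill-down timeline described above.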
Customer support triage.
Our support team handles a high volume of tickets: refund requests, technical issues, billing disputes, and account escalations. Prompt, consistent routing to the right team can make or break time to resolution.
The operational requirement here is modifiability. Automatic routing will occasionally misclassify a ticket. When that happens, your team needs to see why the ticket was routed that way and fix it without retraining a model.
With workflows, incoming tickets are analyzed, categorized by purpose and urgency, and automatically routed to the appropriate downstream process. Each routing decision can be viewed and tracked in Studio. If the classification is incorrect, the team corrects it at the workflow level.
Why use Workflows?
- Durable execution. Workflows track state at every step. If a process fails, it resumes where it left off, so developers can focus on writing business logic instead of recovery logic.
- Observability. All branches, retries, and state changes are recorded in Studio. If you need to review a decision months later, you’ll see a complete timeline of how that decision was reached.
- Human in the loop. One line of code pauses a workflow for approval. The reviewer responds via Le Chat, a webhook, or a connected surface, and the workflow resumes where it left off.
- Studio-native. Workflows use the same agents and connectors as the rest of Studio; no separate integration work is required to connect them.
- Enterprise readiness. Workspaces within Studio separate teams and projects, and role-based access control (RBAC) enforces these rules consistently.
- Built for developers and business teams. Engineers write workflows as code. Business teams run them from Le Chat.
- Deployment flexibility. The control plane runs on Mistral. Workers and data processing run wherever your critical services are hosted: cloud, on-premises, or hybrid.
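The durable-execution bullet above can be sketched in a few lines: checkpoint state after every completed step, and on rerun, skip work that already finished. This is the concept only, with an in-memory dict standing in for durable storage, not the actual engine.

```python
# Sketch of durable execution (concept only, not the actual engine):
# state is checkpointed after each step, so a rerun skips completed
# work and resumes where the previous run failed.
CHECKPOINTS: dict[str, dict] = {}    # stands in for durable storage

def run_durable(run_id: str, steps: list) -> dict:
    state = CHECKPOINTS.setdefault(run_id, {"done": [], "data": {}})
    for name, fn in steps:
        if name in state["done"]:
            continue                  # completed in an earlier attempt
        state["data"][name] = fn(state["data"])
        state["done"].append(name)    # checkpoint before moving on
    return state["data"]

attempts = {"n": 0}
def flaky(_):
    attempts["n"] += 1
    if attempts["n"] == 1:
        raise RuntimeError("transient network timeout")
    return "ok"

steps = [("fetch", lambda d: "doc"), ("process", flaky)]
try:
    run_durable("run-1", steps)       # first attempt fails mid-run
except RuntimeError:
    pass
result = run_durable("run-1", steps)  # resumes at "process", not "fetch"
```

Note that the second call never re-executes `fetch`: that is the property that lets developers skip writing recovery logic.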
Under the hood
Workflows is built on Temporal’s durable execution engine, the same infrastructure that powers orchestration at Netflix, Stripe, and Salesforce. We extended it for AI-specific workloads, adding streaming, payload processing, multi-tenancy, and observability that the core engine doesn’t offer out of the box.
The deployment model separates the control plane from the data plane. Mistral hosts the orchestration infrastructure (the Temporal cluster, the Workflows API, and Studio). You deploy workers into your own Kubernetes environment with a separate Helm chart and connect them back to the central cluster with secure credentials, so your data and business logic never leave your boundary.
The Mistral SDK handles retry policies, tracing, timeouts, rate limits, and human-in-the-loop functionality through decorators and one-line configuration, so all you write is the business logic itself.
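To show the decorator pattern being described (the pattern only; `with_retries` below is a generic sketch, not the Mistral SDK’s actual API), here is a retry policy attached to a step with one line of configuration:

```python
# Generic illustration of configuring retries via a decorator.
# This is NOT the Mistral SDK's API, just the pattern it describes.
import functools

def with_retries(max_attempts: int = 3):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            last_error = None
            for _ in range(max_attempts):
                try:
                    return fn(*args, **kwargs)
                except Exception as exc:   # real code would narrow this
                    last_error = exc
            raise last_error
        return wrapper
    return decorator

calls = {"n": 0}

@with_retries(max_attempts=3)
def unstable_step() -> str:
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("upstream not ready")
    return "done"

output = unstable_step()   # succeeds on the third attempt
```

The body of `unstable_step` contains only business logic; the retry policy lives entirely in the decorator line, which is the point the paragraph makes.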
Let’s get started
The Python SDK is how developers create and run workflows. v3.0 is now available and can be installed with a single command:
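The announcement does not reproduce the command at this point in the text; assuming the SDK is the `mistralai` package on PyPI, installation would look like:

```shell
# Assumes the SDK is published to PyPI as "mistralai".
pip install mistralai
```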
