Nissan recently extended its partnership with AI company Monolith for another three years to leverage AI-driven engineering in vehicle development. By training machine learning models on more than 90 years of historical test data, including simulations, engineers at Nissan’s UK technical center aim to more accurately predict physical test results and reduce the need for physical prototypes. This approach has already reduced physical testing of bolted joints by 17%, and Nissan expects to cut development test time in half for future European cars. ATTI speaks with Sam Emmeny-Smith, Head of Automotive, Defense and Motorsport at Monolith, about the company’s next steps in working with Nissan to advance AI-powered vehicle testing.
How does Monolith’s AI platform integrate into Nissan’s existing vehicle development workflow, and what data infrastructure is required to support it?
Monolith was introduced into Nissan’s development workflow through a collaborative onboarding effort, rather than a top-down technology drop-in. Engineers from both teams reviewed available test and simulation datasets, including historical data from different departments that had not previously been used together.
The team worked with Nissan engineers to identify the most important variables and established an approach that allows existing test data to be input directly into the platform. This supported structured test prioritization: validation programs run more efficiently when high-risk, high-value tests are brought forward and low-impact tests are reduced or eliminated.
Engineers adopted the platform because it provided transparent predictive tools. Built-in explanations show why a model behaves the way it does, making it easier to integrate AI results into daily validation decisions.
From an IT perspective, the integration was easy. Nissan already had a well-organized database and secure storage, so the main task was not to rebuild the infrastructure, but to ensure controlled data access. This approach fits well with the broader Re:Nissan plan, which emphasizes a leaner development process supported by more effective use of existing engineering knowledge.
What types of machine learning models are used to predict test results, and how do engineers verify that these AI-generated results accurately reflect real-world vehicle performance?
Monolith uses a variety of supervised and unsupervised machine learning approaches suitable for engineering regression problems, including random forest regressors, neural network-based models, and other structured techniques developed from eight years of engineering projects. The platform does not depend on a single model type. Instead, our team works with engineers to select the model structure that best fits the behavior of the system being studied. This expertise is built into the platform so users can apply the same proven methods directly to their test data.
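Monolith's actual model implementations are not public, so as a rough illustration of the surrogate-model idea, the sketch below fits a minimal k-nearest-neighbour regressor (a hypothetical stand-in, not Monolith's method) to invented torque/preload test data and predicts an untested operating point instead of running a new test.

```python
# Minimal k-nearest-neighbour surrogate regressor (illustrative only;
# not Monolith's implementation). All numbers are invented
# torque (Nm) -> joint preload (kN) pairs.

def knn_predict(train_x, train_y, query, k=3):
    """Predict by averaging the k training outputs closest to the query."""
    ranked = sorted(zip(train_x, train_y), key=lambda p: abs(p[0] - query))
    nearest = ranked[:k]
    return sum(y for _, y in nearest) / k

# Hypothetical historical test data: applied torque vs. measured preload.
torques = [20, 30, 40, 50, 60, 70]
preloads = [11.0, 16.5, 22.0, 27.0, 31.5, 35.0]

# Estimate preload at an untested torque from neighbouring test points.
estimate = knn_predict(torques, preloads, query=45, k=3)
print(round(estimate, 2))  # -> 21.83
```

Real engineering surrogates use richer models, but the workflow is the same: learn from historical campaigns, then query the model where no physical test exists.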
Validation is handled through standard and rigorous engineering practices. Portions of past test campaigns are held back for model checking so engineers can compare predictions to known results before using the model for new test plans. For large programs, teams often run dedicated validation exercises and review metrics such as confusion matrices to ensure that false positives and false negatives are within engineering tolerances.
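The hold-back check described above can be sketched in a few lines: hold out part of a pass/fail test campaign, compare model predictions against the known outcomes, and tally a confusion matrix to see whether false positives and false negatives fall within tolerance. The data and labels below are invented for illustration, not from Nissan's programs.

```python
# Hold-out validation of a pass/fail predictor against known test
# results, with a hand-tallied confusion matrix (illustrative data).

def confusion_matrix(actual, predicted):
    """Return (true_pos, false_pos, false_neg, true_neg) for pass/fail labels."""
    tp = sum(1 for a, p in zip(actual, predicted) if a == "pass" and p == "pass")
    fp = sum(1 for a, p in zip(actual, predicted) if a == "fail" and p == "pass")
    fn = sum(1 for a, p in zip(actual, predicted) if a == "pass" and p == "fail")
    tn = sum(1 for a, p in zip(actual, predicted) if a == "fail" and p == "fail")
    return tp, fp, fn, tn

# Hypothetical held-back campaign: known outcomes vs. model predictions.
actual    = ["pass", "pass", "fail", "pass", "fail", "pass", "fail", "pass"]
predicted = ["pass", "pass", "pass", "pass", "fail", "fail", "fail", "pass"]

tp, fp, fn, tn = confusion_matrix(actual, predicted)
false_positive_rate = fp / (fp + tn)  # model said pass, part actually failed
false_negative_rate = fn / (fn + tp)  # model said fail, part actually passed
print(tp, fp, fn, tn)  # -> 4 1 1 2
```

The false-positive rate is the one engineers watch most closely here, since a predicted pass on a part that would actually fail is the costly error.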
The goal is not to treat the model as an abstract statistical tool, but as a practical surrogate to speed up decision-making while maintaining physical testing as the standard for real-world vehicle performance.
How does Monolith’s AI system integrate into Nissan’s existing CAE and test workflows, and what challenges does it present in scaling the technology across multiple vehicle platforms and global R&D centers?
Monolith is integrated into Nissan’s CAE and test workflows using simulation and physical test data that engineers are already generating. Teams connect existing outputs to the platform and build surrogate models that help prioritize tests without changing established processes.
Scaling is currently in progress. As different groups realize the benefits, we hope to apply machine learning to more components and validation programs to support Re:Nissan’s goals of reducing test time and accelerating development.

The challenge is not in creating one useful model, but in scaling the approach across multiple platforms and global R&D centers. In most engineering teams, someone can write a script that solves a particular problem, but it’s difficult to share it. Moving code to another site, adjusting data formats, and explaining workflows to new teams can quickly become a burden. These one-time tools rarely survive beyond their original group.
Monolith was built to remove this barrier. The platform handles backend data structures, reliability, and security, and provides built-in explainability so engineers can understand and trust the output. This allows teams to easily share analysis and results without having to maintain code or build infrastructure. This will allow Nissan engineers to focus on expanding AI solutions instead of building software.
Scaling AI across the world’s R&D centers presents both technical and organizational challenges. In this sense, ensuring data quality and consistency across diverse sources is just as important as aligning workflows and standards across regions. Monolith supports this with a flexible cloud-based architecture and robust governance framework that allows teams to securely share models and insights.
The partnership has already reduced physical testing by 17%. As AI adoption expands, what specific test areas or vehicle components are likely to see the greatest time savings?
Nissan and Monolith’s initial collaboration focused on testing bolted connections within vehicle chassis structures, a critical but time-consuming area of validation. Using Monolith’s AI, engineers identified the optimal torque range for each bolt and prioritized only the most informative tests. This alone produced a 17% reduction in physical testing, a clear demonstration of how AI can pinpoint the experiments that matter most.
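Monolith has not published how it ranks candidate tests, but a common way to pick "the most informative tests", sketched below under that assumption with invented bolt-torque data, is to train a small ensemble on resampled data and schedule the test settings where the ensemble disagrees most.

```python
# Rank candidate test points by bootstrap-ensemble disagreement
# (an illustrative active-testing heuristic, not Monolith's algorithm).
import random

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept for 1-D data."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def rank_by_disagreement(xs, ys, candidates, n_models=20, seed=0):
    """Rank candidates by the spread of predictions across bootstrap fits."""
    rng = random.Random(seed)
    models = []
    while len(models) < n_models:
        idx = [rng.randrange(len(xs)) for _ in xs]   # bootstrap resample
        sample_x = [xs[i] for i in idx]
        if len(set(sample_x)) < 2:                   # need two distinct x values
            continue
        models.append(linear_fit(sample_x, [ys[i] for i in idx]))

    def spread(c):
        preds = [m * c + b for m, b in models]
        return max(preds) - min(preds)

    return sorted(candidates, key=spread, reverse=True)

# Hypothetical data: dense coverage at low torque, sparse at high torque.
torques  = [20, 22, 25, 27, 30, 70]
preloads = [11.2, 12.1, 13.8, 14.9, 16.4, 35.5]

# Candidates far from existing data should rank as most informative.
print(rank_by_disagreement(torques, preloads, [25, 45, 60]))
```

The candidate at 60 Nm lands in the sparsely tested region, so the ensemble disagrees most there and it is scheduled first, while the well-covered 25 Nm point can be skipped or deferred.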
As the partnership expands, this technology will be applied to a wider range of components and systems throughout the design, test, and delivery process, especially where results are highly reproducible and correlate strongly with simulation data. These are areas where vast historical datasets already exist, providing the basis for accurate predictive models.
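A quick way to screen for the areas described above, where simulation correlates strongly with physical results, is to compute the correlation between paired simulated and measured values. The sketch below uses the Pearson coefficient on invented numbers; the threshold and data are illustrative assumptions, not Nissan's criteria.

```python
# Pearson correlation between simulated and physically measured results
# (invented numbers) as a screen for where simulation can stand in
# for physical testing.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical paired results for one component attribute.
simulated = [10.1, 12.0, 13.9, 16.2, 18.0]
measured  = [10.4, 11.8, 14.1, 15.9, 18.3]

r = pearson(simulated, measured)
# A high coefficient suggests this attribute is a good candidate
# for surrogate-model-driven test reduction.
print(round(r, 3))
```

In practice teams would also check reproducibility across repeated tests before trusting simulation-backed predictions for an attribute.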
By applying AI across its test portfolio, Nissan expects to cut physical test times for future European models by up to half. Bringing AI into broader workflows makes speed an invaluable benefit, while also allowing engineers to gain deeper insights earlier in the process, accelerating innovation while maintaining the exacting standards customers expect from Nissan vehicles.
How do you think AI will transform the vehicle development process over the next 10 years? Will it eventually enable a fully virtual test environment to replace most physical verification?
AI will change vehicle development over the next decade by reducing the amount of physical testing required, but it will not completely replace physical verification. Safety requirements, regulatory standards, and the need for traceable decisions mean that physical testing remains the gold standard. What AI does is move much of the learning into software, allowing engineers to reach final testing with far fewer prototypes and a far more focused validation plan.
The biggest gains come from optimizing long, expensive, bottleneck test programs. These are areas where AI is already delivering tangible results, such as double-digit reductions in testing effort, which directly shorten development schedules. Engineers have known about machine learning for years, but adoption has been slow because trust and accuracy are more important than novelty. Now that teams have seen reliable results, they are beginning to implement these methods across their programs.
Over the next decade, AI will become a cornerstone of vehicle development, fundamentally changing the way engineers explore, validate, and optimize designs.
Physical testing will always play an important role in final validation and safety assurance, but its volume will be significantly reduced as confidence in AI-based predictions increases. Engineers will increasingly rely on machine learning-powered digital twins to assess performance, durability, and efficiency under a wide range of real-world scenarios.
In the future, vehicle programs may move from concept to validation with far fewer prototypes and shorter lead times. Monolith’s vision is to use AI to direct human expertise where it has the most impact, making every test conducted matter. The result is a faster, smarter and more sustainable vehicle development process, fit for a future in which operational efficiency makes the difference to the bottom line.
