Engineers at Duke University are using artificial intelligence to do what scientists have pursued for centuries: turn complex real-world movements into simple rules that can be written down. The study was led by Boyuan Chen, director of the General Robotics Lab, and his team, including lead author and doctoral candidate Sam Moore. They reported their results in the academic journal npj Complexity.
Their new AI framework studies time-series data, measurements taken over long stretches of time, and generates compact equations that describe how a system changes. It targets the kinds of problems that occur everywhere, from weather patterns and electrical circuits to mechanical devices and biological signals. The goal is more than prediction; it is understanding.
“Scientific discoveries always depend on the ability to simplify complex processes,” said Chen, the Dickinson Family Assistant Professor of Mechanical Engineering and Materials Science at Duke University. “The raw data needed to understand complex systems is increasingly available, but we don't have the tools to translate that information into the simple rules that scientists rely on. Closing that gap is essential.”
Why complex systems still resist simple explanations
Since Isaac Newton's 1687 Principia, mechanics has provided a way to explain change. Over time, that work grew into dynamical systems theory, which tracks evolving “state variables.” These variables can describe more than moving objects; they capture changing behavior across engineering, climate science, neuroscience, physiology, and ecology.
However, many real-world systems resist this kind of description. Even when a system's behavior can be measured, the rules that drive it remain hard to identify. Nonlinear behavior exacerbates the problem, since small changes can lead to widely different results. High dimensionality adds further barriers: when a system contains many interacting states, it becomes difficult to interpret and can cause analysis tools to fail.
You can see the trade-offs even in familiar motions. The trajectory of a projectile is determined by launch velocity, angle, drag, wind, and temperature, yet a close approximation can be obtained from a simple equation using only the first two. Science often advances by finding such “good enough” reductions that discard detail without losing the core truth.
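As an illustration of that reduction, the drag-free textbook formula predicts range from launch speed and angle alone. A minimal sketch (the numbers are illustrative, not from the study):

```python
import math

def projectile_range(v0, angle_deg, g=9.81):
    """Ideal range of a projectile launched from flat ground,
    ignoring drag, wind, and temperature: R = v0**2 * sin(2*theta) / g."""
    theta = math.radians(angle_deg)
    return v0 ** 2 * math.sin(2 * theta) / g

# A 45-degree launch maximizes range for a given speed.
print(round(projectile_range(30.0, 45.0), 1))  # ~91.7 meters
```

The two-variable formula gets close to reality precisely because the neglected effects are small corrections in many everyday regimes.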
Upgrading 1930s mathematical ideas with modern AI
Duke's approach is based on an idea proposed by mathematician Bernard Koopman in 1931. Koopman showed that nonlinear systems can sometimes be expressed through linear models if they are described in appropriate coordinates. Linear models are attractive because they allow the use of tools such as spectral decomposition that can perform global analysis and reveal the modes and stability of the system.
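A classic toy example, often used in tutorials on Koopman theory (it is not taken from the Duke paper), shows how a change of coordinates can make a nonlinear system exactly linear:

```python
import numpy as np

# Textbook Koopman example: the nonlinear system
#   dx1/dt = mu * x1
#   dx2/dt = lam * (x2 - x1**2)
# becomes exactly linear in the lifted coordinates z = (x1, x2, x1**2).
mu, lam = -0.1, -1.0
K = np.array([[mu,  0.0,  0.0],
              [0.0, lam, -lam],
              [0.0, 0.0, 2 * mu]])  # dz/dt = K z

def nonlinear_flow(x, t, steps=10000):
    """Integrate the nonlinear ODE with small Euler steps."""
    x = np.array(x, dtype=float)
    dt = t / steps
    for _ in range(steps):
        dx = np.array([mu * x[0], lam * (x[1] - x[0] ** 2)])
        x = x + dt * dx
    return x

def linear_flow(z, t):
    """Evolve the lifted state with the matrix exponential exp(K t)."""
    vals, vecs = np.linalg.eig(K)
    expKt = vecs @ np.diag(np.exp(vals * t)) @ np.linalg.inv(vecs)
    return (expKt @ z).real

x0 = np.array([1.0, 0.5])
z0 = np.array([x0[0], x0[1], x0[0] ** 2])  # lift the initial condition

x_t = nonlinear_flow(x0, t=2.0)
z_t = linear_flow(z0, t=2.0)
print(np.allclose(z_t[:2], x_t, atol=1e-3))  # the linear model matches
```

Here a two-dimensional nonlinear system needs only a three-dimensional linear one, but as the article notes, for most systems the lifted space balloons far beyond that.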
“The problem is scale. Koopman-style modeling often forces you into a very large, even infinite, variable space,” Chen told The Brighter Side of News. “That reality has shaped long-standing methods in the field, including dynamic mode decomposition and extended DMD. Although such techniques are useful, their representations of nonlinear systems are often hugely bloated. Deep learning has also been used to find linear embeddings, but many approaches still end up with a latent space much larger than the original system.”
The paper points to some well-known benchmarks. Previous work has represented the two-dimensional Duffing system using embeddings that reach 100 dimensions, and sometimes far more. Similar inflation appears for the van der Pol oscillator. These large embeddings may work, but they can also add redundancy and increase the risk of spurious modes and overfitting.
How the new framework reduces the problem
The Duke framework tries to keep the linear representation as small as possible while still predicting well over long time horizons. It takes experimental time-series data and uses deep learning with physics-inspired constraints to discover a reduced set of hidden variables that capture the system's essential behavior.
In practice, the model learns a latent space, labeled ψ, whose dynamics behave like a linear system. The approach builds on time-delay embedding, which feeds the model a short window of past states so it can infer what happens next. The team also developed a mutual-information technique to help choose an effective time-delay length, since that choice strongly affects prediction error.
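A rough sketch of what delay embedding and mutual-information-based delay selection can look like; the function names and the first-local-minimum heuristic below are common illustrative choices, not the team's exact procedure:

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Stack a scalar series into delay vectors [x(t), x(t+tau), ...]."""
    n = len(x) - (dim - 1) * tau
    return np.stack([x[i * tau : i * tau + n] for i in range(dim)], axis=1)

def mutual_information(x, tau, bins=32):
    """Histogram estimate of I(x(t); x(t+tau)), in nats."""
    a, b = x[:-tau], x[tau:]
    pxy, _, _ = np.histogram2d(a, b, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])).sum())

# Hypothetical usage: pick the first local minimum of mutual information,
# a standard heuristic for choosing the delay length.
t = np.linspace(0, 40 * np.pi, 4000)
x = np.sin(t) + 0.05 * np.random.default_rng(0).normal(size=t.size)
mi = [mutual_information(x, tau) for tau in range(1, 80)]
best_tau = next(i + 1 for i in range(1, len(mi) - 1)
                if mi[i] < mi[i - 1] and mi[i] < mi[i + 1])
window = delay_embed(x, dim=3, tau=best_tau)
print(best_tau, window.shape)
```

The idea is that a good delay makes successive coordinates informative but not redundant, which is why the choice matters so much for prediction error.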
Training focuses on long-term accuracy. The researchers discounted the loss on future steps and then adjusted that discounting over time in a curriculum-like manner so the model would generalize beyond the training window. They also swept a range of latent dimensions and selected the smallest one that did not significantly hurt performance.
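One way such a discounted rollout loss with a curriculum over the discount factor could be sketched; the schedule and names here are assumptions for illustration, not the paper's exact recipe:

```python
import numpy as np

def discounted_rollout_loss(pred, target, gamma):
    """Mean squared error over a multi-step rollout, with step k
    down-weighted by gamma**k so early steps dominate when gamma < 1."""
    horizon = pred.shape[0]
    weights = gamma ** np.arange(horizon)
    per_step = ((pred - target) ** 2).mean(axis=tuple(range(1, pred.ndim)))
    return float((weights * per_step).sum() / weights.sum())

# Curriculum idea (illustrative schedule): start with a small gamma, then
# raise it toward 1 so the loss gradually emphasizes long-horizon accuracy.
rng = np.random.default_rng(1)
pred = rng.normal(size=(20, 3))            # 20-step rollout of a 3-d latent state
target = pred + 0.1 * rng.normal(size=pred.shape)
for epoch, gamma in enumerate([0.5, 0.8, 0.95, 1.0]):
    print(epoch, round(discounted_rollout_loss(pred, target, gamma), 5))
```

With gamma equal to 1 the loss reduces to the plain mean squared error over the whole rollout, which is the long-horizon objective the curriculum works toward.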
“What stands out is not just the accuracy, but the interpretability,” said Chen, who also holds positions in electrical engineering, computer engineering and computer science. “When linear models are compact, the process of scientific discovery can naturally connect to existing theories and methods that human scientists have developed over thousands of years. It's like connecting AI scientists and human scientists.”
Nine testbeds, from pendulums to neural circuits to weather models
To test the method, the team built nine datasets spanning simulated and experimental nonlinear systems. The lineup runs from simple to complex, and that matters: methods that only work on textbook motions aren't very useful.
A single pendulum provided the simplest case, with two measured variables and a stable stationary state. The van der Pol oscillator raised the difficulty with self-sustaining repeating cycles known as limit cycles. The Hodgkin-Huxley model, which explains how neurons generate action potentials, added four variables and strong nonlinearity. The Lorenz-96 system, used in weather-predictability studies, introduced a high-dimensional setting with both periodic and chaotic behavior.
This study also focused on multistability, where a system can settle into multiple long-term patterns. The Duffing oscillator was well described as a particle moving through a double-well landscape and served as an important example. Other testbeds included interacting magnetic systems, nested limit cycles, experimental magnetic pendulums, and double pendulums exhibiting chaotic behavior.
For many of these systems, the framework found reduced models more than 10 times smaller than previous machine learning approaches required, while producing reliable long-term predictions. For example, the researchers reported that three-dimensional and six-dimensional representations suffice to model the van der Pol and Duffing oscillators, respectively. In the high-dimensional case, the limit-cycle Lorenz-96 system was reduced from 40 states to 14 latent dimensions while maintaining strong predictive performance.
Finding “landmarks” that explain stability and change
Forecasting is only part of the payoff. The framework also aims to reveal structures of interest to dynamicists, such as attractors, the stable states or patterns a system tends to approach over time. These structures help determine whether a system is operating normally, drifting, or becoming unstable.
“For dynamicists, finding these structures is like finding landmarks in a new landscape,” Moore said. “Once you know where the stable points are, you can understand the rest of the system.”
The method supports spectral analysis of the learned linear system, extracting eigenvalues and eigenfunctions that describe the modes, frequencies, and damping rates. It also produces a learned stability tool, a neural Lyapunov function built from decaying modes. These functions can offer a practical way to assess global stability, an area where nonlinear systems often force researchers to settle for local answers.
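To make the spectral step concrete, here is a small example, using a synthetic operator rather than one learned by the framework, of reading frequencies and damping rates off the eigenvalues of a discrete-time linear model:

```python
import numpy as np

# Hypothetical discrete-time linear operator A advancing the latent state by dt.
# Its eigenvalues encode each mode: with s = log(lambda) / dt, the damping
# rate is Re(s) (negative means decay) and the frequency is Im(s) / (2*pi).
dt = 0.01
theta = 2 * np.pi * 1.5 * dt          # a 1.5 Hz oscillation per step
r = np.exp(-0.3 * dt)                 # decaying at rate 0.3 per second
A = r * np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])  # damped rotation

eigvals = np.linalg.eigvals(A)
s = np.log(eigvals) / dt
for si in s:
    print(f"damping rate {si.real:+.3f} /s, frequency {abs(si.imag) / (2 * np.pi):.3f} Hz")
# prints "damping rate -0.300 /s, frequency 1.500 Hz" for both conjugate modes
```

Because the learned model is linear, this one eigendecomposition characterizes the global behavior, which is exactly what nonlinear models usually cannot offer.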
The researchers also penalized unstable growth during training by suppressing eigenvalues with positive real parts. This choice is intended to keep the learned dynamics physically realistic rather than exploding in ways that fit the training data but fail in practice.
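A minimal sketch of one way such a penalty could be written, assuming a discrete-time learned operator; the paper's exact loss term may differ:

```python
import numpy as np

def instability_penalty(A, dt=0.01):
    """Sum of the positive real parts of the continuous-time exponents
    of a learned operator A: growing modes incur a cost, decaying modes none."""
    s = np.log(np.linalg.eigvals(A).astype(complex)) / dt
    return float(np.clip(s.real, 0.0, None).sum())

stable = 0.99 * np.eye(2)           # all modes decay -> zero penalty
unstable = np.diag([1.02, 0.95])    # one growing mode -> positive penalty
print(instability_penalty(stable), instability_penalty(unstable))
```

Added to the training loss, a term like this pushes the optimizer away from operators whose trajectories blow up over long rollouts.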
“This is not a replacement for physics,” Moore continued. “It's about expanding our ability to use data to reason when the physics is unknown, hidden, or cumbersome to write down.”
Practical implications of the research
This research points to an AI tool that does more than identify patterns. When models can discover compact, interpretable rules from messy measurements, they can test hypotheses faster and design better experiments. This is important in fields where the governing equations are incomplete or too difficult to derive, such as climate science, neuroscience, and some complex engineering systems.
The framework could also improve early warning and control. Steady states and drifts toward instability show up in real-world settings, from power grids and aircraft dynamics to biological rhythms. Reliable methods to identify attractors and assess stability from data can help researchers detect when a system is transitioning into a dangerous regime and help engineers decide how to intervene.
The team also believes the method's ability to guide what data to collect next could reduce costs when experiments are expensive. Over time, the approach could support “machine scientists” that help human researchers move from raw measurements to clear, testable rules.
