Heterogeneous catalysts are notoriously demanding to model computationally because of the sheer variety of possible reaction paths and intermediates. Now, thanks to a clever combination of programming and machine learning, researchers have achieved dramatic increases in simulation speed while making these resource-intensive calculations more energy efficient. The results, reported for reactions that convert carbon dioxide into fuels, may apply directly to other industrially relevant reactions, such as depolymerization and biomass valorization.
“It has the potential to unlock understanding that cannot be achieved with manual simulations and to accelerate discoveries by orders of magnitude,” said Núria López of the Institute of Chemical Research of Catalonia (ICIQ), who led the study. The framework facilitates the prediction of selectivity and reactivity in catalysis, particularly for the production of long-chain hydrocarbons from syngas, commonly known as the Fischer-Tropsch process. “Until now, computational studies have been limited by tedious manual monitoring of intermediates…and have also required hundreds of long-running calculations as alternative reaction pathways emerge,” she added. Beyond speed, the new program predicts properties such as selectivity, reaction rate, and yield, or “observables,” López explains. “This is directly comparable with experimental results and shows the potential of the program,” she said.
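The “observables” López mentions are typically obtained by feeding computed activation energies into a microkinetic model, which integrates the rates of every elementary step to yield quantities an experimentalist can measure. Below is a minimal sketch of that idea in Python; the two-step toy network, barriers, and temperature are invented for illustration and do not represent the authors' actual model.

```python
# Minimal microkinetic sketch: how computed barriers become "observables"
# such as rates and selectivity. The two-step toy network and all numbers
# here are invented for illustration; they are not the published model.
import numpy as np
from scipy.integrate import solve_ivp

KB_EV = 8.617e-5    # Boltzmann constant, eV/K
H_EV_S = 4.136e-15  # Planck constant, eV*s
T = 500.0           # temperature, K

def eyring(ea_ev):
    """Transition-state-theory rate constant from an activation energy."""
    return (KB_EV * T / H_EV_S) * np.exp(-ea_ev / (KB_EV * T))

# Toy network: an adsorbed intermediate converts to product P1 or P2.
k1 = eyring(1.10)   # barrier to P1, eV (assumed)
k2 = eyring(1.25)   # barrier to P2, eV (assumed)

def rhs(t, y):
    """Rate equations for [intermediate, P1, P2] coverages."""
    inter, p1, p2 = y
    return [-(k1 + k2) * inter, k1 * inter, k2 * inter]

sol = solve_ivp(rhs, (0.0, 1e-3), [1.0, 0.0, 0.0])
inter, p1, p2 = sol.y[:, -1]
print(f"selectivity to P1: {p1 / (p1 + p2):.2%}")
```

However small, the toy model shows the chain from electronic-structure output (barriers) to an experimentally comparable number (selectivity); the published framework automates this over networks of thousands of steps.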
Anastasia Alexandrova, a materials science and simulation expert at the University of California, Los Angeles, explains that in homogeneous and enzymatic catalysis, active sites are typically restricted to a small set of atoms. “In heterogeneous catalysis, the surfaces are large and often complex, so a variety of possibilities exist as binding sites for reagents,” she says. Furthermore, the surface of the catalyst is surprisingly dynamic: as the reaction progresses, a remodeling process occurs that, “under the influence of the reactants…creates many different microenvironments at every active site.” Scanning the potential routes and networks is “insurmountable in terms of the time and computer resources required, and error-prone” when performed manually, Alexandrova says. “This paper represents an important step in the right direction. [Studying] reactions…is now faster thanks to machine learning.”
What are AI, machine learning, and neural networks?
Artificial intelligence (AI) is an umbrella term that is often mistakenly used interchangeably with a variety of related but narrower techniques.
AI: The ability of a machine or computer program to perform tasks normally performed only by humans, such as reasoning, responding to feedback, and making decisions.
Generative AI: A newer variant of AI that detects patterns in training datasets and generates original text, images, and videos in response to user prompts. ChatGPT, Microsoft Copilot, Google Gemini, and, more recently, X’s Grok are all examples of chatbots that use generative AI.
Neural network: An interconnected array of artificial neurons, loosely modeled on a biological brain, that identifies, analyzes, and learns from statistical patterns in data.
Machine learning: A subset of AI that allows machines to learn from datasets and make predictions about new data without being explicitly programmed for each task (see the short example after this glossary). Machine-learning models generally perform better the more data they receive.
Deep learning: A type of machine learning that uses neural networks with many layers to analyze complex patterns in very large datasets. Deep-learning applications include speech recognition, image generation, and translation.
Large language model (LLM): A type of deep-learning model trained on vast amounts of text to understand and generate language. LLMs learn patterns in text by predicting the next word in a sequence; these models can now write prose, analyze text from the internet, and interact with users.
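To make the machine-learning entry above concrete, the snippet below fits a model to example data and then makes predictions on inputs it has never seen, with no task-specific rules written by the programmer. The dataset is random numbers, purely for demonstration.

```python
# Tiny illustration of "machine learning": fit a model to example data,
# then predict on inputs it has never seen. The data are random numbers,
# generated purely for demonstration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 3))                        # 200 samples, 3 features
y = X @ [1.5, -2.0, 0.5] + rng.normal(0, 0.1, 200)    # noisy target values

model = RandomForestRegressor(n_estimators=100).fit(X, y)  # learn from data
X_new = rng.uniform(size=(5, 3))                      # unseen inputs
print(model.predict(X_new))                           # predictions, no explicit rules
```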
“This breakthrough framework automatically maps and analyzes large, complex chemical reaction networks that were previously too large or too problematic to handle manually,” explains Ritesh Kumar, a science and artificial intelligence expert at the University of Chicago. The system “quickly and accurately predicts” the reactivity of surface catalysts “without requiring scientists to guess every step of the way,” he adds. “This actually replaces guesswork with intelligent automation, instantly estimating the energies and rates of thousands of steps.”
Where traditional density functional theory (DFT) workflows might map on the order of 500 elementary steps in a given allocation of processing time, the new approach accelerates the search by orders of magnitude, simulating 370,000 possible paths in a comparable amount of time. “The speed is impressive; it identifies critical reactions in a fraction of the time and cost,” says Kumar, who will soon join TCG Crest in India. And, perhaps most importantly, “there is none of the huge energy consumption typically required by supercomputers.” Beyond the sustainability benefits, automated algorithms could allow scientists to reserve accurate but slow computational tools for only the most critical calculations, Kumar explains. “In processes like Fischer-Tropsch, the number of potential paths explodes to hundreds of thousands…which would take centuries to compute using traditional techniques,” he says. Neural networks can now discover reaction pathways automatically and speed up the study of complex catalytic processes.
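One way to picture Kumar's point about reserving slow, accurate tools for critical calculations is a two-tier triage: a fast machine-learned surrogate scores every candidate step, and only the few steps that appear to control the rate are re-verified at DFT accuracy. The sketch below is schematic; `ml_energy` and `dft_energy` are hypothetical stand-ins for a trained neural-network potential and a DFT code, not the authors' implementation.

```python
# Schematic of ML-accelerated reaction-network screening: a cheap
# surrogate scores every candidate step; only the lowest-barrier
# (likely rate-controlling) candidates are re-verified with expensive
# DFT. ml_energy and dft_energy are hypothetical stand-ins for real
# calculators, returning fake but deterministic numbers.
import heapq
import random

def ml_energy(step_id: int) -> float:
    """Fast surrogate barrier estimate, eV (milliseconds per call)."""
    random.seed(step_id)                 # deterministic fake value
    return random.uniform(0.3, 2.5)

def dft_energy(step_id: int) -> float:
    """Accurate barrier, eV (hours per call in reality)."""
    random.seed(step_id)
    return random.uniform(0.3, 2.5) + random.gauss(0.0, 0.05)

candidate_steps = range(370_000)         # the kind of scale quoted above

# 1) Surrogate pass over the whole network: cheap.
scored = ((ml_energy(s), s) for s in candidate_steps)

# 2) Keep only the lowest-barrier steps likely to control the rate.
critical = heapq.nsmallest(50, scored)

# 3) Expensive, accurate pass on the short list only.
verified = {s: dft_energy(s) for _, s in critical}
print(f"DFT calls: {len(verified)} instead of {len(candidate_steps)}")
```

Under these assumptions, the expensive method runs 50 times instead of 370,000, which is the source of the orders-of-magnitude speedup described above.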
López explains that similarities in the simulation workflow across different catalysts could dramatically accelerate the algorithm's adoption in industry. After Fischer-Tropsch, the group has been able to model other complex reactions, such as biomass valorization and plastics recycling. “Our aim is to move into the demanding environment of industrial R&D and address aspects that are often overlooked in academia, such as code security, robustness, sustainability, and accessibility,” said first author Santiago Morandi. “This preliminary platform provides a potential bridge between theory and experiment, enabling rapid, data-driven optimization of chemical reactors.”
