Boffins at the US Department of Energy’s Sandia National Laboratories are developing inexpensive, power-efficient LED alternatives to lasers. One day, they released three AI assistants into the lab.
Five hours later, the bots had run more than 300 experiments and discovered a new approach to steering LED light that was four times better than the method the researchers had developed using their own wetware.
The research, detailed in a paper published in the journal Nature Communications, highlights how AI agents are changing the way scientists work.
“We are one of the leading examples of how self-driving labs can be established to support and extend human knowledge,” Sandia researcher Prasad Iyer said in a recent blog post.
The experiment builds on a 2023 paper in which Iyer and his team demonstrated how to control LED light, a capability that could be used in everything from self-driving cars to holographic projectors. The trick is finding the right combination of parameters to control the light in the desired way, a process the researchers expected to take years.
To speed up this process, Iyer, with the help of colleague Sirketh Desai, developed a series of research assistants equipped with artificial intelligence.
Unlike most AI agents, the lab’s tools do more than just make API calls to third-party models. Instead, the team developed three domain-specific models based on established machine learning algorithms.
“We don’t do any LLMs. There’s a lot of interest in this. A lot of people are playing with the idea, but I think it’s still in the exploratory stage,” Desai told El Reg.
As it turned out, the researchers didn’t need them. “We used a simpler model called a variational autoencoder (VAE). This model was established in 2013. It is one of the early generative models,” Desai said.
By sticking to domain-specific models built on more mature architectures, Sandia also avoided one of the biggest headaches with generative AI deployments: hallucinations (errors that occur when the AI makes something up).
“Hallucinations weren’t that big of an issue here because we’re building a generative model tailored to this very specific task,” Desai explained.
The first of these models utilized a VAE architecture, a type of model commonly used to generate images before the advent of diffusion models in 2015. The model preprocessed a laboratory dataset.
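The Sandia team's code isn't public here, but the mechanism a VAE relies on is simple to sketch: an encoder maps each input to the mean and log-variance of a latent Gaussian, a sample is drawn via the reparameterization trick, and a decoder maps it back. The dimensions and randomly initialized weights below are placeholders standing in for a trained network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes only -- the paper's actual latent dimension isn't given.
X_DIM, Z_DIM = 8, 2

# Random weights stand in for trained encoder/decoder parameters.
W_enc = rng.normal(size=(X_DIM, 2 * Z_DIM)) * 0.1
W_dec = rng.normal(size=(Z_DIM, X_DIM)) * 0.1

def encode(x):
    """Map an input to the mean and log-variance of a latent Gaussian."""
    h = x @ W_enc
    return h[:Z_DIM], h[Z_DIM:]          # mu, log_var

def reparameterize(mu, log_var):
    """Sample z = mu + sigma * eps, so training gradients can flow through mu and sigma."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def decode(z):
    """Map a latent sample back to the original parameter space."""
    return z @ W_dec

x = rng.normal(size=X_DIM)
mu, log_var = encode(x)
z = reparameterize(mu, log_var)
x_hat = decode(z)
```

Once trained, sampling fresh latent vectors and decoding them is what makes the model generative: it can propose new parameter combinations rather than just reproduce the dataset.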
The researchers then fed the output of that model into a second model that was connected directly to the optical equipment used to conduct the experiment.
This active learning model was built around a Bayesian optimization algorithm responsible for proposing experiments, running them, and analyzing the results. The process ran in a closed loop, with the model refined through repeated experiments.
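The article doesn't specify which surrogate or acquisition rule Sandia used, but the closed-loop pattern itself is standard and can be sketched with a toy one-parameter objective standing in for the optical measurement. This sketch uses a Gaussian-process surrogate and an upper-confidence-bound rule, both common but assumed choices:

```python
import numpy as np

rng = np.random.default_rng(1)

def instrument_response(x):
    """Hypothetical stand-in for the real optical measurement; peaks at x = 0.7."""
    return -(x - 0.7) ** 2

def rbf(a, b, ls=0.15):
    """Squared-exponential kernel between two 1-D point sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

X = list(rng.uniform(0, 1, 3))            # a few initial experiments
y = [instrument_response(x) for x in X]
grid = np.linspace(0, 1, 201)             # candidate parameter settings

for _ in range(20):                       # closed loop: model -> propose -> measure
    Xa, ya = np.array(X), np.array(y)
    K = rbf(Xa, Xa) + 1e-6 * np.eye(len(Xa))
    Ks = rbf(grid, Xa)
    mu = Ks @ np.linalg.solve(K, ya)                          # GP posterior mean
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    ucb = mu + 2.0 * np.sqrt(np.clip(var, 0, None))           # acquisition rule
    x_next = grid[np.argmax(ucb)]         # propose the most promising experiment
    X.append(x_next)
    y.append(instrument_response(x_next)) # "run" it and record the result

best = X[int(np.argmax(y))]
```

Each pass refits the surrogate to everything measured so far, so the loop trades off exploring uncertain regions against exploiting promising ones, which is what lets it cover hundreds of experiments unattended.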
However, it was not enough to know which combination of parameters would give the best results. The real science is in figuring out why that particular configuration works.
So the team added a third model to the loop that essentially acts as a fact checker. The researchers tasked this simple feedforward neural network with devising an equation for the data it generated and later validating the results.
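The fact-checking idea can be sketched generically, though the network details here are illustrative rather than Sandia's: fit a small feedforward net to the loop's data, then treat large disagreement between the fitted curve and measurements as a red flag. The target function, layer sizes, and learning rate below are all made up:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for the loop's experimental data (hypothetical form).
x = np.linspace(-1, 1, 64)[:, None]
y = x ** 2

# One-hidden-layer feedforward network, trained with plain gradient descent.
H = 16
W1, b1 = rng.normal(size=(1, H)) * 0.5, np.zeros(H)
W2, b2 = rng.normal(size=(H, 1)) * 0.5, np.zeros(1)

lr = 0.1
for _ in range(5000):
    h = np.tanh(x @ W1 + b1)              # forward pass
    pred = h @ W2 + b2
    err = pred - y
    gh = (err @ W2.T) * (1 - h ** 2)      # backpropagate squared-error gradients
    W2 -= lr * (h.T @ err) / len(x)
    b2 -= lr * err.mean(0)
    W1 -= lr * (x.T @ gh) / len(x)
    b1 -= lr * gh.mean(0)

# If this residual were large, the "fact checker" would flag the results.
mse = float(np.mean((np.tanh(x @ W1 + b1) @ W2 + b2 - y) ** 2))
```

A tight fit suggests the data follows a consistent functional form; a stubbornly high residual would signal that the optimizer's results don't hold together and need human scrutiny.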
And while many AI models are trained on hundreds of thousands of GPUs, the team was able to run all of this using relatively modest hardware in the form of a Lambda Labs workstation with three RTX A6000 graphics cards.
Combining these models not only sped up testing, but also surfaced approaches to LED beam steering that the researchers had not previously considered.
Although the research focused on applying AI to control the light emitted by LEDs, Desai believes the underlying approach could be applied to other materials design problems, such as alloys and printable electronics.
For other scientists interested in replicating this type of “self-driving lab,” Desai says it’s important to have equipment that is tightly integrated into the model framework.
“There has been progress and development, but we still have a long way to go in terms of allowing the tools we have in the lab, the physical tools, to interact with these models,” he said. “If you’re using equipment from 1975, you’re already in a difficult position to get started.”
Regarding the model itself, he emphasizes the importance of skepticism. “When working on more advanced architectures and machine learning, such as transformer-based LLMs, my advice is to be really skeptical about what it brings.” ®
