The advent of self-driving laboratories and automated experiments promises to increase productivity and the rate of chemical discovery beyond what humans could achieve alone. However, the black-box nature of AI means we don't know how or why deep learning systems make the decisions they do, making it difficult to understand how best to use AI to optimize scientific research, and difficult to know whether the results it produces are reliable.

A paper published in Nature in November 2023 reported the discovery of more than 40 novel materials using an AI-guided autonomous laboratory. But researchers quickly questioned the autonomous lab's results: a preprint followed in January, reporting problems with both the computational and the experimental work, and “systematic errors from start to finish”.

Robert Palgrave, a materials chemist at University College London and one of the authors of the critique, said that while AI has made “huge advances”, there is a slight tendency to feel that AI has to change everything “right now”, when really we shouldn't expect things to change overnight.


Milad Abolhassani, who heads a research group at North Carolina State University in the US that develops flow chemistry strategies using autonomous robotic experiments, says the “hype” around AI has become a little too dominant and it's time to pause. “We humans are good at envisioning what the future will be and what the possibilities are, but we need to take it one step at a time and make sure things are done right.”

Risks of relying on AI

For many, the appeal of AI comes from the need to improve productivity. “The productivity outcomes of AI are very attractive, such as faster literature review, faster experiment execution and faster data generation,” explains Lisa Messeri, an anthropologist at Yale University. “And that has to do with the institutional pressure to get research done so you can do all the other things you have to do to publish.”

Messeri says AI also offers an attractive prospect, the “promise of objectivity”: scientists are always seeking tools that they feel are robust and can limit human bias and interference. While AI may well deliver these benefits for some research, she warns that there are risks in relying on it too heavily, and stresses the importance of including diverse thinkers in the production of scientific knowledge. And of course, an AI model is only as good as the data on which it is trained.

Everyone is rushing to start doing the kind of science that lends itself to AI tools.

Molly Crockett, Princeton University

Messeri and her colleague Molly Crockett, a neuroscientist at Princeton University, co-authored an opinion piece on the topic in Nature, dividing the risks into three categories. All of them stem from “illusions of understanding”, a well-documented phenomenon in cognitive science related to our tendency to overestimate how well we understand something.

“The first risk arises when individual scientists use AI tools to solve problems. The impressive performance of AI tools can lead scientists to erroneously believe that they understand the world better than they actually do,” Crockett explains.

The latter two involve scientists as a collective inadvertently creating scientific “monocultures”. “If a monoculture contains only one type of crop, this is very efficient and productive, but it makes that crop much more vulnerable to disease and pests,” Crockett explains.

“We are concerned about two types of monoculture,” she continues. “The first is a monoculture of knowing. Many different approaches can be used to solve scientific problems, and AI is just one of them. But everyone is rushing to start doing the kind of science that lends itself to AI tools because of the productivity gains those tools promise… [and the] questions that are not well suited to AI tools get ignored.”

They are also concerned about the development of a monoculture of “knowers”, in which only AI tools are used instead of leveraging the knowledge of a whole team with disciplinary and cognitive diversity. “We know that when solving complex problems, having interdisciplinary teams is extremely beneficial,” Crockett says.

“It's great to have people from different backgrounds and with different skill sets on your team… in an era where human interaction is increasingly being shunned in favor of digital interaction… there may be a temptation to replace collaborators with AI tools, [but] this is a really dangerous practice, because without that expertise it becomes difficult to determine whether the output the AI returns is actually valid.”

What is the solution?

The question is how AI-powered tools such as self-driving labs can be tailored to address specific research questions. Abolhassani and his North Carolina State University colleague Amanda Volk recently defined seven performance metrics to help “unleash” the power of self-driving labs; he was shocked that these did not already exist in the published literature.

“These metrics are designed with the idea that we want the machine learning agent of the self-driving lab to be as powerful as possible, so that it can make more informed decisions,” he says. But, he adds, if the quality of the data a lab is trained on isn't high enough, the decisions it makes will be useless.
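In practice, a self-driving lab runs a closed loop: an agent proposes conditions, the robot executes the experiment, and the measured result informs the next proposal. The toy sketch below (hypothetical functions and numbers throughout; the article does not describe Abolhassani's actual algorithm) illustrates how noisy hardware propagates into poor decisions: increase the noise term in `run_experiment` and the loop converges to worse conditions.

```python
import random

# Toy stand-ins for real hardware: an actual platform would replace these
# with robot control and analytical instrument calls.
def run_experiment(temperature_c: float) -> float:
    """Run one (simulated) reaction and return the measured yield in %."""
    true_optimum = 85.0
    signal = 90 - 0.02 * (temperature_c - true_optimum) ** 2
    noise = random.gauss(0, 1.0)  # unreliable hardware -> larger noise -> worse data
    return signal + noise

def propose_next(history: list[tuple[float, float]]) -> float:
    """Naive decision policy: explore at random first, then refine near the best point."""
    if len(history) < 5:
        return random.uniform(25, 150)            # exploration phase
    best_t, _ = max(history, key=lambda h: h[1])  # best temperature so far
    return best_t + random.gauss(0, 5)            # local refinement

history: list[tuple[float, float]] = []
for _ in range(20):
    t = propose_next(history)
    y = run_experiment(t)
    history.append((t, y))  # each result feeds the next proposal

best_t, best_y = max(history, key=lambda h: h[1])
print(f"Best condition found: {best_t:.1f} deg C, yield {best_y:.1f}%")
```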

Many self-driving labs don't even mention the total amount of chemicals consumed per experiment.

Milad Abolhassani, North Carolina State University

The performance metrics they describe are: degree of autonomy, covering the level of influence humans have over the system; operational lifetime; throughput; experimental precision; material usage; accessible parameter space, representing the range of experimental parameters the platform can reach; and optimization efficiency, or overall system performance.
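As a concrete illustration, the seven metrics could be captured in a simple reporting record that every self-driving lab paper fills in. The sketch below is a minimal Python dataclass; the field names, units and example values are hypothetical assumptions, not a schema defined by Volk and Abolhassani.

```python
from dataclasses import dataclass

@dataclass
class SDLReport:
    """One benchmark record for a self-driving lab (SDL) campaign.

    Field names and units are illustrative, not taken from the paper.
    """
    degree_of_autonomy: str       # e.g. "human-in-the-loop" or "fully autonomous"
    lifetime_hours: float         # operation time before failure or refill
    throughput_per_hour: float    # experiments executed per hour
    precision_rsd_pct: float      # relative standard deviation of replicates
    material_use_ml: float        # total reagent volume consumed per experiment
    parameter_space: dict[str, tuple[float, float]]  # reachable range per variable
    experiments_to_optimum: int   # proxy for optimization efficiency

    def summary(self) -> str:
        return (f"{self.throughput_per_hour:.1f} expts/h for "
                f"{self.lifetime_hours:.0f} h unattended; "
                f"{self.material_use_ml:.2f} mL reagent/expt; "
                f"optimum in {self.experiments_to_optimum} experiments")

# A hypothetical flow-chemistry platform, for illustration only
report = SDLReport(
    degree_of_autonomy="fully autonomous",
    lifetime_hours=120,
    throughput_per_hour=6.0,
    precision_rsd_pct=2.5,
    material_use_ml=0.35,
    parameter_space={"temperature_C": (25, 120), "residence_time_s": (10, 600)},
    experiments_to_optimum=42,
)
print(report.summary())
```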

“When we did a literature search, we were surprised to find that 95% of papers on self-driving labs do not report how long the platform can operate before something fails [or] before you need to refill anything,” he explains. “I want to know how many experiments per hour, per day, a self-driving lab can run. How precisely are the experiments executed? How reliable is the data being produced?”

“Many self-driving labs don't even mention the total amount of chemicals [that was] consumed across every experiment and optimization they did,” he adds.

Abolhassani and Volk say that clearly reporting these metrics can guide research towards more “productive and promising” areas of the technology, and that without thorough evaluation of self-driving labs, the field will lack the information needed to guide future research.

However, optimizing the role AI can play in complex fields such as synthetic chemistry will take more than improved classification and large amounts of data. In a recent article in the Journal of the American Chemical Society, digital chemist Felix Strieth-Kalthoff, together with AI chemistry pioneers including Alán Aspuru-Guzik, Frank Glorius and Bartosz Grzybowski, argues that algorithm designers need to build close working relationships with synthetic chemists and draw on their expertise.

Such collaboration, they argue, would be mutually beneficial, allowing AI models to be developed for the synthetic problems of greatest interest to chemists while “transferring AI know-how to the synthesis community”.

Looking to the future

For Abolhassani, the success of autonomous chemistry experiments ultimately comes down to trust. “Autonomous experimentation is a tool that helps scientists… [but] for that, the hardware needs to be reproducible and reliable,” he explains.

It's essential for the community to grow the user base

Milad Abolhassani, North Carolina State University

And building this trust requires lowering barriers to entry to give more chemists the opportunity to use self-driving labs in their work. “It needs to be as intuitive as possible so that even chemists without autonomous experimentation expertise can interact with the self-driving laboratory,” he explains.

Additionally, the best self-driving labs are currently very expensive, so lower-cost options need to be developed that maintain reliability and reproducibility, he says. “It's essential for the community to grow the user base.”

“Once [self-driving labs] become a mainstream tool in chemistry, [they] will help digitize chemistry and materials science and provide access to high-quality experimental data… but the power of that expert data is only realized when it is reproducible, reliable and standardized so that everyone can use it.”

Messeri believes that AI is most useful when it is seen as augmenting humans rather than replacing them. For that, she says, the community needs to be more particular about when and where it is used. “I have great confidence that creative scientists can come up with a case for doing this responsibly and productively,” she adds.

Crockett suggests that scientists consider AI tools as another approach to data analysis, one that differs from the human mind. “As long as you respect that, you can strengthen your approach by including these tools as another diverse node in your network,” she says.

Importantly, Crockett says, this moment can also serve as a “wake-up call” about the institutional pressures that may be pushing scientists towards AI to “improve productivity without necessarily deepening understanding”. But this problem is much bigger than any individual, and it will require widespread institutional acceptance before a solution can be found.


