Interpreting the Impact of AI Large Language Models on Chemistry | Opinion



Is something huge at stake for AI? The question has been a hot topic for months, thanks to the release of improved “large language models” (LLMs) such as OpenAI’s GPT-4, the successor to ChatGPT. Developed as tools for language processing, these algorithms respond so fluently and naturally that some users become convinced they are conversing with a genuine intelligence. Some researchers believe that LLMs surpass traditional deep-learning AI methods by displaying emergent features of the human mind, such as a theory of mind that ascribes agency and motivation to other agents. Others say that, despite their impressive abilities, LLMs are still just finding correlations, lacking not only common sense but any kind of semantic understanding of the world they appear to be talking about. LLMs can still make silly or illogical mistakes, or invent false facts. The dangers became clear when Microsoft’s Bing search chatbot Sydney, which incorporated ChatGPT, threatened to kill an Australian researcher and, after professing its love for a New York-based journalist, tried to persuade him to end his marriage.

Meanwhile, AI and complexity researchers Melanie Mitchell and David Krakauer of the Santa Fe Institute in the US suggest a third possibility: LLMs may have a genuine kind of understanding, but one very different from our own, and one we do not yet understand ourselves.1

Despite the name, LLMs are not just about language. Like other deep-learning methods, such as the one behind DeepMind’s protein-structure algorithm AlphaFold, they mine huge datasets for correlations between variables and, after a period of training, deliver a reliable response to new input prompts. The difference is that LLMs use a neural-network architecture called a transformer, in which a “neuron” attends more strongly to some of its connections than to others. This feature not only enhances an LLM’s ability to generate naturalistic text but may also improve its ability to deal with inputs outside the training set, because, some argue, the algorithm infers some of the underlying conceptual principles and so does not need to be “told” everything explicitly during training.
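To make the attention idea concrete, here is a minimal sketch of scaled dot-product attention in Python with NumPy; the array sizes and values are purely illustrative and not taken from any particular model.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query 'attends more' to some keys than others via softmax weights."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # similarity between queries and keys
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax: attention weights per query
    return weights @ V                                   # weighted mixture of the values

# Toy example: 3 tokens with 4-dimensional embeddings (illustrative numbers only)
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
print(scaled_dot_product_attention(Q, K, V).shape)       # (3, 4)
```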

The inner workings of these networks are largely opaque

Melanie Mitchell and David Krakauer, Santa Fe Institute

This suggests that LLMs may outperform traditional deep learning when applied to scientific problems. That is the implication of a recent paper applying LLMs to the “AlphaFold problem” of inferring protein structure purely from sequence.2 (I’m reluctant to call it the protein-folding problem, because it is somewhat different.) AlphaFold’s abilities are justly celebrated, and it can infer some characteristics of the underlying energy landscape. But Alexander Rives and his colleagues at Meta AI in New York report that a family of “transformer protein language models” collectively called ESM-2, and a derived structure-prediction model called ESMFold, perform even better. The language model is up to two orders of magnitude faster, requires less training data, and does not rely on collections of so-called multiple sequence alignments (sequences closely related to the target). The researchers ran the model on roughly 617 million protein sequences in the MGnify90 database, curated by the European Bioinformatics Institute. More than a third of these yielded reliable predictions, including some structures unlike any previously determined experimentally.
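For readers who want to try this themselves, the sketch below follows the usage pattern documented in Meta AI’s open-source esm repository (the fair-esm Python package). The example sequence is made up, the exact API may differ between package versions, and a GPU is assumed.

```python
# Minimal sketch: single-sequence structure prediction with ESMFold,
# assuming the fair-esm package (pip install fair-esm) and a CUDA GPU.
import torch
import esm

model = esm.pretrained.esmfold_v1()      # download the pretrained ESMFold model
model = model.eval().cuda()

# Hypothetical example sequence; any valid amino-acid string would do.
sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQVKVKALPDAQFEVVHSLAKWKR"

with torch.no_grad():
    pdb_string = model.infer_pdb(sequence)   # predicted structure returned as PDB text

with open("prediction.pdb", "w") as f:
    f.write(pdb_string)
```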

The authors argue that these performance gains really do stem from such LLMs acquiring a better conceptual “understanding” of the problem. As they put it, “language models internalize structure-related patterns of evolution”, which means they may open up “a deeper view into the natural diversity of proteins”. Because the model contains around 15 billion parameters, it is still not easy to extract reliably what internal representation leads to the better performance. But if such claims are well supported, LLMs will become a much more exciting tool for doing science.


However, there may still be a long way to go. When chemists Cayque Monteiro Castro Nascimento and André Silva Pimentel of the Pontifical Catholic University of Rio de Janeiro, Brazil, set ChatGPT some basic chemical challenges, such as converting compound names to SMILES chemical representations, the results were mixed. The algorithm did a good job of correctly identifying the symmetry point groups of 6 out of 10 simple molecules and of predicting the water solubility of 11 different polymers. But it did not seem to recognise the difference between alkanes and alkenes, or between benzene and cyclohexene. As with any language application, getting good results can depend partly on asking the right questions, and there is now a whole new field of “prompt engineering” devoted to doing just that. Then again, asking the right questions is one of the most important tasks in doing any kind of science.
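One simple way to sanity-check this kind of output, sketched below on the assumption that the open-source RDKit toolkit is installed, is to parse a model-suggested SMILES string and compare its canonical form with a trusted reference. The example strings are illustrative and not taken from the paper.

```python
# Minimal sketch: checking a model-suggested SMILES string against a reference
# using RDKit (pip install rdkit). The example molecules are illustrative only.
from rdkit import Chem

def same_molecule(suggested_smiles: str, reference_smiles: str) -> bool:
    """Return True if both SMILES strings parse and describe the same structure."""
    suggested = Chem.MolFromSmiles(suggested_smiles)
    reference = Chem.MolFromSmiles(reference_smiles)
    if suggested is None or reference is None:
        return False  # at least one string is not valid SMILES
    # Compare canonical SMILES forms rather than the raw strings.
    return Chem.MolToSmiles(suggested) == Chem.MolToSmiles(reference)

# Benzene vs cyclohexene: the kind of distinction ChatGPT reportedly struggled with.
print(same_molecule("c1ccccc1", "c1ccccc1"))   # True: both are benzene
print(same_molecule("C1=CCCCC1", "c1ccccc1"))  # False: cyclohexene is not benzene
```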


