Next-generation medicines will be powered by machine learning

Machine Learning

By Dr. Grant Wishart and Guido Lanza

In 1950, Alan Turing discussed the mathematical possibilities of artificial intelligence (AI) in his paper “Computing Machinery and Intelligence.” A truly revolutionary paper, it was ahead of its time: pre-1950 computers had no ability to store commands, making learning impossible. But as technology advanced, scientists were able to work toward proofs of concept for AI.

In 1956 the RAND Corporation (Research AND Development) introduced Logic Theorist, a program designed to mimic human problem-solving skills. Nearly 70 years later, AI has continued to evolve and overcome important challenges. Today, AI benefits from a data-filled world, novel technologies, and dramatically increased processing power.

AI today is far more than the science-fiction lore of humanoid robots. In drug development, chemists and machine learning (ML) systems can work together to solve complex drug discovery problems. ML systems do not replace established medicinal chemistry processes; rather, they enable medicinal chemists to work at a scale far beyond what traditional approaches allow, letting them search a much larger universe of potential compounds.

Medicinal chemists relying on traditional drug design processes must sacrifice creativity to deal with practicalities such as complex syntheses, low-throughput processes, and tight budgets. A medicinal chemist must first generate many good ideas and then prioritize them; only a few can actually be tested.

Grant Wishart, Charles River Laboratories

By expanding the idea space and virtualizing testing using ML models, chemists can evaluate orders of magnitude more compounds. These models therefore give chemists a better chance of identifying structures with the multiple (often competing) desirable properties that make a compound a viable drug.

For example, a drug must have high affinity for its target of interest while also possessing specific permeability, metabolic stability, molecular weight, and plasma protein binding properties. At the same time, it may bind similar off-targets that cause toxicity, and in some cases that off-target binding must be minimized. Given a target profile, an ML model can extract the key features needed to achieve it, and can learn from existing data on drug successes and failures to converge on candidates more quickly.
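The balancing act described above is often framed as multi-objective scoring. The sketch below is purely illustrative, not Logica's actual method: the property names, target windows, and the geometric-mean combination are all assumptions chosen to show how competing objectives can be collapsed into a single rankable score.

```python
# Hypothetical multi-objective compound scoring; property names, windows,
# and the aggregation scheme are illustrative assumptions only.

def desirability(value, low, high):
    """1.0 inside the desired [low, high] window, decaying linearly outside."""
    if low <= value <= high:
        return 1.0
    width = high - low
    dist = (low - value) if value < low else (value - high)
    return max(0.0, 1.0 - dist / width)

def score_compound(props):
    """Combine competing objectives into one score via a geometric mean,
    so a compound that badly fails any single objective scores near zero."""
    targets = {
        "affinity_pki":   (7.0, 12.0),   # potency at the intended target
        "off_target_pki": (0.0, 5.0),    # keep off-target binding low
        "mol_weight":     (200.0, 500.0),
        "permeability":   (0.5, 1.0),    # normalized assay readout
    }
    score = 1.0
    for name, (low, high) in targets.items():
        score *= desirability(props[name], low, high)
    return score ** (1.0 / len(targets))

candidate = {"affinity_pki": 8.2, "off_target_pki": 4.1,
             "mol_weight": 412.0, "permeability": 0.7}
print(score_compound(candidate))  # every property in its window, so 1.0
```

The geometric mean is one common choice here because it punishes any single failed objective harder than a simple average would.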

The exciting part is that such methods have the potential to provide new chemistries, optimize these chemistries much more efficiently, and improve medicinal chemists’ ability to produce safe and effective medicines. Ultimately, this is not a case of AI replacing medicinal chemists, but a case of AI empowering medicinal chemists and helping them adopt more efficient processes.

Classic Discovery Upgraded

Although the traditional drug discovery process has grown in scale, it has remained fundamentally unchanged for decades. Most projects still rely on brute-force screening followed by workflows driven by chemical intuition, with computation playing a mainly supporting role. For a given target, early discovery groups use their knowledge and experience to design biological assays suitable for high-throughput screening, with the goal of providing starting points for future compounds. Given these initial hits, a subsequent trial-and-error process refines the compound set until, in successful cases, one or more series of compounds can move forward.

It may seem obvious, but AI can ingest and analyze far more information than humans could on their own. The benefits, however, go far beyond mere information gathering. Data can be abstracted, and AI can begin deciphering patterns from billions of compounds: certain chemical features, for example, may be associated with binding affinity or with catalysis of certain reactions. Currently available AI tools enable chemists to evaluate chemical spaces thousands of times larger than was previously possible.

This allows researchers to move through each iteration of the design-make-test-analyze (DMTA) cycle more quickly, reducing both the number of iterations and the number of compounds per iteration, ultimately resulting in a more efficient optimization trajectory.
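The DMTA loop can be sketched in a few lines. In this toy version, everything is synthetic: the "assay" is a made-up function, and the "model" step is just ranking new ideas by proximity to the best compound measured so far. It is only meant to show the shape of the loop, in which cheap in-silico design narrows what actually reaches the lab each cycle.

```python
# Toy sketch of the design-make-test-analyze (DMTA) loop; the assay and
# the surrogate "model" are synthetic stand-ins, not a real pipeline.
import random

random.seed(0)

def assay(x):
    """Hidden lab measurement: synthetic potency peaking at x = 0.7."""
    return 1.0 - (x - 0.7) ** 2

def dmta(cycles=5, designs_per_cycle=20, tested_per_cycle=3):
    tested = []  # (compound, measured potency), accumulated across cycles
    for _ in range(cycles):
        # DESIGN: enumerate many virtual candidates cheaply
        ideas = [random.random() for _ in range(designs_per_cycle)]
        # ANALYZE (model): rank ideas by proximity to the best compound so far
        if tested:
            best_x = max(tested, key=lambda t: t[1])[0]
            ideas.sort(key=lambda x: abs(x - best_x))
        # MAKE + TEST: only the top few ideas reach the lab each cycle
        for x in ideas[:tested_per_cycle]:
            tested.append((x, assay(x)))
    return max(tested, key=lambda t: t[1])

best_x, best_potency = dmta()
```

The point of the sketch is the ratio: twenty designs per cycle, three syntheses; the model absorbs the rest of the triage.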

Guido Lanza, Valo Health

Over the past two decades, ML tools have become increasingly mainstream, enhancing our ability to profit from the vast amounts of public and private data generated by the scientists who came before us. Of course, it is impossible for a chemist to keep in mind all the known, publicly accessible chemistry papers at once. It is similarly impractical for a chemist to remember all the data she personally generated for one program, or to trade off multiple competing objectives in a lead optimization program. Machine learning, however, can do this with relative ease, benefiting from decades of work to digitize public-domain medicinal chemistry data.

These public datasets come in a variety of formats. Some are professional, well-characterized datasets; others come from the published literature or from patents. They vary in their degree of labeling, standardization, and usefulness, but all can ultimately be incorporated into new AI tools.

Humans and machines come together to create new landscapes of discovery

Achieving integrated human-AI drug discovery requires a change in thinking. Teams need to know how to generate data that powers AI models, and how to turn those models into real compounds in the lab. This territory is still new, and each team can decide how best to combine AI with traditional chemistry and human intelligence to increase efficiency. As many activities as possible (synthesis, testing, even the entire DMTA cycle) can be virtualized on the computer, allowing work at a much larger scale.

As an example, consider high-throughput screening. Whereas typical screens run thousands to hundreds of thousands of compounds against a given target, AI-driven virtual screens can search billions or trillions of rational, drug-like compounds. The key question becomes, “How can we best scan and score billions of compounds?”
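One answer to the scan-and-score question is to stream the library rather than hold it in memory, keeping only a fixed-size set of top scorers as compounds flow past. The sketch below is illustrative: the library and the scoring model are synthetic stand-ins, and the compound identifiers are invented, but the streaming top-k pattern itself is how very large virtual screens are commonly kept tractable.

```python
# Illustrative streaming screen over a huge virtual library, retaining only
# the top scorers; the library and scoring model are synthetic stand-ins.
import heapq
import random

random.seed(1)

def virtual_library(n):
    """Yield synthetic compounds one at a time; a real library of billions
    would likewise be generated on the fly, never held in memory."""
    for i in range(n):
        yield f"CPD-{i}", random.random()  # (identifier, feature value)

def cheap_score(feature):
    """Stand-in for a fast ML scoring model (e.g., predicted affinity)."""
    return 1.0 - abs(feature - 0.5)

def screen(library, top_k=5):
    heap = []  # min-heap of (score, id): weakest kept compound sits on top
    for cid, feat in library:
        s = cheap_score(feat)
        if len(heap) < top_k:
            heapq.heappush(heap, (s, cid))
        elif s > heap[0][0]:
            heapq.heapreplace(heap, (s, cid))  # evict the weakest hit
    return sorted(heap, reverse=True)

hits = screen(virtual_library(100_000), top_k=5)
```

Memory stays proportional to `top_k`, not to library size, which is what makes billion-compound scans feasible on ordinary hardware once the per-compound score is cheap enough.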

This human-AI partnership is central to drug discovery services like Logica, which combines Charles River Laboratories’ chemistry research capabilities with Valo Health’s AI technology. Logica leverages the best of both worlds to enhance early discovery, and even allows a fundamental rethinking of how the entire drug discovery process is conducted.

To understand how, consider the following example. A partner proposes a target it intends to attack with a new drug candidate. If a large amount of data is already available on the target, an initial AI model can be generated to predict binding; a high-throughput screen may not be required, significantly shortening project timelines. If little data is available, seed data for model building can be generated using high-throughput screening or DNA-encoded library approaches. Novel compounds are then proposed, analyzed, and tested, with automated and human chemistry generating more target-specific data for further refinement.

This iterative process tightly couples prediction with experimental validation, ultimately leading to the selection of advanceable leads. These leads are further refined to ensure they meet additional criteria supporting in vivo studies and, ultimately, IND-enabling studies. The AI algorithm’s goal here is to generate a localized model: that is, a model tuned to predict the properties of a specific chemical series with great accuracy. AI-enhanced optimization accelerates preclinical candidate development by enabling fewer DMTA cycle iterations and better decisions about which compounds to advance.

It is important to note that, for this process to work, different levels of intentionality must be built into laboratory data generation. Are compounds being made and tested to build ML models, or are models being leveraged to find solutions in the discovery process?

For example, a program might focus early on generating data from as many diverse scaffolds as possible to build initial ML models. Later, those models may be used to select compounds, not only optimizing within a chemical series but also yielding patentable lead series capable of testing biological hypotheses in the laboratory.
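The early, model-building phase described above is essentially a diversity-selection problem. The sketch below shows one common heuristic for it, greedy max-min selection, on toy bit-vector "fingerprints"; the fingerprints, the distance metric, and the pool sizes are all illustrative assumptions rather than any specific production workflow.

```python
# Sketch of diversity-first compound selection for initial model building;
# fingerprints and the distance metric are toy stand-ins.
import random

random.seed(2)

def distance(a, b):
    """Toy dissimilarity between two compound fingerprints (bit tuples)."""
    return sum(x != y for x, y in zip(a, b))

def pick_diverse(pool, k):
    """Greedy max-min selection: each pick is the compound farthest
    from everything already chosen, spreading coverage across scaffolds."""
    chosen = [pool[0]]
    while len(chosen) < k:
        chosen.append(max(pool,
                          key=lambda c: min(distance(c, s) for s in chosen)))
    return chosen

# A pool of 200 random 16-bit fingerprints stands in for a screening deck.
pool = [tuple(random.randint(0, 1) for _ in range(16)) for _ in range(200)]
training_set = pick_diverse(pool, k=8)
```

Once initial models exist, selection flips from maximizing spread to exploiting predicted scores, which is exactly the shift in intentionality the text describes.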

With each iteration of the DMTA cycle, the model learns by incorporating new data, building on existing knowledge, and gradually filling in gaps. This level of data orientation represents a set of capabilities that will enable teams of drug hunters from both Charles River and Valo Health to pursue candidates more effectively.

An example of the benefits of this approach is a study conducted by Logica to find inhibitors of a protein tyrosine kinase, a target that carries significant but typical challenges with respect to both selectivity and a crowded intellectual property landscape. Using a strategy of applying high-throughput screening, DNA-encoded libraries, and AI approaches in parallel, the team identified hit matter from all approaches across multiple chemotypes. Subsequent AI-guided chemical optimization cycles very quickly identified novel, advanced chemotypes suitable for further optimization.

Ultimately, what does this mean for the role of AI in drug discovery? What is the point of all this effort? The goal isn’t just to save time, money, or resources, or even just get drugs to patients faster.

This integration of data, AI, and human ingenuity can change the risk profile of entering a small-molecule program. Small-molecule discovery is currently fraught with daunting uncertainty, where a single negative experimental result can set teams back by months or years. With this approach, we enter a whole new phase of discovery with quantifiable risk estimates and a true understanding of how likely we are to succeed. While these approaches are just the beginning, they show that combining AI and data generation will irreversibly change the way drug design problems are prosecuted.

Dr. Grant Wishart is Senior Director of Small Molecule Drug Discovery at Charles River Laboratories and is responsible for Logica. Guido Lanza is Vice President of Integrated Research and General Manager of Logica at Valo Health.
