By Jonathan Montagu, CEO of HotSpot Therapeutics, and Ramy Farid, CEO of Schrödinger, as part of the From The Trenches feature of LifeSciVC.
No, your next doctor will not be a robot. Nor will an algorithm cure cancer. However, AI technology will absolutely play a key role in solving some of the biggest healthcare challenges we face today. In fact, it is already doing so, revolutionizing drug discovery, clinical research, and several other aspects of pharmaceutical R&D.
The recently published AI Index Annual Report from Stanford University shows that AI investment in drug design and discovery increased significantly, to more than $13.8 billion in 2020, 4.5 times the 2019 total. Still, those working in and watching this fast-moving space often have a hard time separating the fledgling breakthroughs from those AI applications that are, quite frankly, dabbling in hype.
Unfortunately, we’ve had our fair share of the latter. From the spectacular rise and fall of IBM’s Watson Health initiative to the inclusion of AI on Gartner’s 2021 Hype Cycle at the “Peak of Inflated Expectations,” healthcare has been littered with AI-themed pitches that have proven too good to be true. Even some recently claimed AI breakthroughs, such as Exscientia’s Phase 1 trial for a drug designed using AI or Recursion Pharmaceuticals’ high-profile collaborations focused on AI-powered drug design, are really incremental advances around known chemical scaffolds. Truly generative AI, in which unprecedented molecules are created entirely through computer modeling, has yet to deliver high-profile breakthroughs; these may come with time, but as of today most AI advances have been met with overly enthusiastic claims. While AI is indeed revolutionizing key aspects of drug R&D for these innovators and others, the technology alone is not doing all of the heavy lifting. And it is not without limitations.
Avoiding the Problem of “Garbage In, Garbage Out”
How can healthcare companies looking to innovate in this space separate the hype from the hope? As the technology continues to develop quickly, we’re seeing clear trends emerge. Chief among these is the manner in which companies are dealing with the limitations of machine learning, namely incomplete data and a lack of complexity in the data sets used to train the models.
Incomplete Data: A machine learning tool is only as good as the data that goes into it. For example, it is perfectly feasible for a machine learning algorithm to mine every image of a cat on the internet (about a billion pictures) to accurately identify feline characteristics in other photos.
However, the vastness of chemical space in comparison to the “cat-iverse” presents real difficulties for machine learning models. Rough estimates indicate that 10⁶⁰ drug-like molecules are theoretically possible, a figure comparable to estimates of the number of atoms in the universe. It is therefore not possible to capture the vastness of chemical space with machine learning alone – fewer than one billion molecules have been synthesized and characterized by humans. The goal in drug discovery is to extrapolate away from the molecules already synthesized, but machine learning models can, by definition, only reliably predict molecules highly similar to those used to train the model. The reliability of the predictions quickly breaks down as the molecules become dissimilar.
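The idea in this paragraph, often called an "applicability domain," can be sketched in a few lines: a model's prediction is only trusted when a query molecule is sufficiently similar to something in the training set. The bit-set "fingerprints" and the similarity threshold below are invented purely for illustration and are not any production cheminformatics code.

```python
# Toy illustration of an ML "applicability domain": predictions are only
# trusted for molecules similar to the training set. Each fingerprint is a
# hand-made set of bits standing in for substructure features.

def tanimoto(fp_a: set, fp_b: set) -> float:
    """Tanimoto (Jaccard) similarity between two fingerprint bit sets."""
    if not fp_a and not fp_b:
        return 1.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)

def in_applicability_domain(query_fp: set, training_fps: list, threshold: float = 0.5) -> bool:
    """Trust a prediction only if the query resembles some training molecule."""
    return max(tanimoto(query_fp, fp) for fp in training_fps) >= threshold

# Hypothetical fingerprints for two known (synthesized) molecules.
training_set = [{1, 2, 3, 4}, {2, 3, 5, 6}]
similar_molecule = {1, 2, 3, 7}      # shares most bits with the training data
novel_scaffold = {10, 11, 12, 13}    # far outside the training distribution

print(in_applicability_domain(similar_molecule, training_set))  # True
print(in_applicability_domain(novel_scaffold, training_set))    # False
```

A real pipeline would use learned fingerprints and calibrated uncertainty rather than a hard threshold, but the failure mode is the same: the further the query drifts from the training data, the less the prediction can be trusted.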
Complexity: The predictive power of a given method depends not just on the size of the training sets, but also on the ability of the descriptors used to build the model to represent the full complexity of the endpoint of interest. Today, most machine learning methods used in drug discovery are based on the chemical structure of the molecule, or on the chemical structure combined with a static representation of the protein. In the real world, however, molecules bend, flex, and bind in unique ways that will not always be captured by that static representation, making it difficult for the machine learning model to accurately anticipate every possible permutation.
In this sense, machine learning is very literal. While it can ingest massive amounts of information and make forecasts based on that data, it cannot extrapolate beyond the data on which it was trained. Going back to our cat photo example, the model may recognize cats like those in its training set, but it will have a hard time reading the unique background context of each photo to predict that a cat will pounce on a mouse or climb a tree, unless that information was included in the photographs used to train it.
At Schrödinger, we’re tackling these two issues by leveraging a computational platform that evaluates the complete system comprising the molecule, the protein and the surrounding solvent based on fundamental laws of physics, which are invariant to the particular molecule being studied. Our calculations, which correlate highly with experimental results, provide a robust and complete dataset upon which to base locally trained machine learning models. By combining the high accuracy of physics-based modeling with the computational efficiency of locally trained machine learning models, we can accurately evaluate hundreds of billions of project-relevant molecules in a matter of weeks. This has allowed us to identify molecules that were eventually nominated as development candidates, as in the case of our MALT1 program, in as few as 10 months with fewer than 100 molecules synthesized in the lab.
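The workflow described above can be sketched as a three-step loop: run an expensive physics-based calculation on a modest sample, train a cheap locally fitted surrogate on those results, then use the surrogate to screen a far larger candidate pool. The one-dimensional "molecule," the scoring function, and the nearest-neighbour surrogate below are all invented for illustration and do not represent Schrödinger's actual platform.

```python
# Toy sketch of "physics in, fast surrogate out": label a small sample with
# a costly calculation, then screen a much larger pool with a cheap model.

def physics_score(x: float) -> float:
    """Stand-in for a costly physics-based calculation (hours per molecule)."""
    return -(x - 2.0) ** 2  # the ideal candidate sits at x = 2.0

# Step 1: expensive calculation on an evenly spaced training sample of 201 points.
train = [(i * 4.0 / 200, physics_score(i * 4.0 / 200)) for i in range(201)]

# Step 2: a cheap, locally trained surrogate -- here, nearest-neighbour
# lookup into the physics-labelled data (microseconds per molecule).
def surrogate(x: float) -> float:
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

# Step 3: screen a candidate pool ~500x larger than the training set and
# keep the top hit for (hypothetical) follow-up physics calculations.
candidates = [i * 4.0 / 100_000 for i in range(100_001)]
best = max(candidates, key=surrogate)
print(f"best candidate = {best:.2f}")  # lands near the true optimum at x = 2.0
```

The design point is the asymmetry of cost: the physics calculation is run only a couple hundred times, while the surrogate, trained on those results, evaluates every candidate, which is how a platform of this kind can triage billions of molecules in weeks.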
Similarly, at HotSpot Therapeutics, we are taking the guesswork out of identifying and drugging “natural hotspots,” the privileged pockets on proteins that act as endogenous on/off switches. To address the issues of insufficient data and complexity, we use proprietary data mining tools to augment the intuitions of our experienced team of drug hunters, allowing them to focus machine learning algorithms on specific biological and chemical ‘zip codes’ rather than boiling the ocean.
Our approach has identified over 1,500 well-credentialed protein targets, including our lead program for CBL-B, a highly sought-after immuno-oncology target. The 3D footprint of natural hotspots has been used to design a tailored collection of ~1 billion molecules, the largest and most diverse chemistry library directed at these allosteric pockets. Machine learning has been critical in transforming the output from this library into actionable chemical insight. As our molecules enter the clinic, we will further define key patient subsegments by leveraging both immune and genomic signatures in the context of a first-of-its-kind collaboration with Caris Life Sciences.
In both cases, we are leveraging the power of machine learning (coupled with physics and predictive analytics, respectively) to enable and accelerate discovery, but this requires a deep understanding of the problem at hand and a true appreciation for the power and limitations of machine learning.
Demystifying the Hype
As machine learning technologies evolve, it’s becoming easier and easier to pick the winners. These initiatives, which are already having a significant impact on drug R&D, are analytics engines that can scour longitudinal databases of hundreds of millions of real-world patient records to spot trends, anomalies, and red flags. They are integrations of machine learning technologies with advanced physics-based methods that allow impossibly large chemical space to be navigated with confidence.
Until now, much of the discourse around AI in healthcare has ignored those decidedly practical functions in favor of sensationalized depictions of cool new AI-based tech. While that’s a natural part of the evolution of any new technology, in healthcare, it has created a dangerous paradox in which many of the stakeholders who stand to benefit most from innovation have become the most skeptical of its hype.
It’s our job as scientists to be precise and honest about what machine learning really is, where it can have the biggest impact, what technologies it needs to be combined with, and where it simply isn’t the right tool for the problem.
Enabling Drug Discovery Teams to Succeed
At both of our companies, we recognize that computational modeling works best when we have comprehensive data sets and algorithms capable of describing the underlying complexity of the properties we are attempting to predict. But as predictive methods become more trusted to inform critical tasks and decisions, the need for human oversight is becoming evident. Of course, machine learning isn’t magic – it must rely upon well-curated, high-quality, representative data sets, and great care must be taken to ensure the technologies themselves can describe the complexity of the properties being predicted. Still, without human collaboration, these innovations could carry implicit biases of their own. A good example of where this was avoided is the application of MRI in breast imaging, where image recognition and deep learning technology were effectively combined with continual human involvement.
We are excited by the principle of human-PLUS-machine and the potential for synergy between great drug hunters and cutting-edge algorithms to integrate complex data. Developing these advanced models with people who possess the expertise, insights, and compassion to address difficult-to-drug indications allows us to feed our scientific curiosity and play in novel target spaces. In our view, the role of machine learning is not to replace the human, but rather, to set us free from the mundane … to unleash our creativity.