Model Behavior

Posted September 29th, 2021 by Aoife Brennan, in From The Trenches, R&D Productivity


By Aoife Brennan, CEO of Synlogic Therapeutics, as part of the From The Trenches feature of LifeSciVC

The COVID-19 pandemic has upended life as we know it. Given so much uncertainty, it is a normal human impulse to want some framework to predict the future and make decisions. We have all, at one time or another, turned to mathematical models to help us make sense of the course of the pandemic. Of course, we know that the model won’t be accurate, but, in the words of statistician George Box, “all models are wrong, but some are useful.”

Drug development also relies on mathematical modeling, no surprise given that developing a new medicine is an endeavor steeped in uncertainty. Making predictions about how a compound will behave in humans, what dose and frequency will be required, and how a given exposure at the site of action will impact the clinical endpoint in a disease is central to drug development. The better we can predict these outcomes prior to embarking on expensive clinical trials, the more efficient our development efforts can be.
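To make that concrete, here is a minimal sketch of the flavor of calculation involved: a textbook one-compartment pharmacokinetic model with first-order absorption (the Bateman equation), used to predict exposure over time from a dose. Every parameter value below is invented for illustration; real programs fit these from preclinical and clinical data, and a live-cell modality would need a very different model.

```python
import numpy as np

# Illustrative one-compartment PK model with first-order absorption.
# All parameter values are hypothetical, for illustration only.
dose_mg = 100.0  # administered dose
F = 0.8          # bioavailability (fraction absorbed)
ka = 1.2         # absorption rate constant (1/h)
ke = 0.2         # elimination rate constant (1/h)
Vd = 40.0        # volume of distribution (L)

t = np.linspace(0, 24, 241)  # hours after dosing

# Standard Bateman equation for plasma concentration over time.
conc = (F * dose_mg * ka) / (Vd * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

print(f"Cmax ~ {conc.max():.2f} mg/L at t ~ {t[conc.argmax()]:.1f} h")
```

Even a toy model like this lets a team ask sharper questions: how sensitive is peak exposure to the absorption rate, and what dosing interval keeps exposure above a target threshold?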

For some companies, turning to outside vendors for quantitative services makes sense – especially if those vendors can draw on experience across many clients to inform their models. But at Synlogic, we are developing a novel therapeutic modality, genetically engineered bacteria we call Synthetic Biotic medicines. We wanted to model live cell dose, expected efficacy, and metabolic consumption rates – things for which no external experience existed. We decided very early on as a company to develop the experience and expertise ourselves.

There are alternatives to rational and quantitative approaches to decision making, of course: we all know someone who used a ‘bad’ decision-making process but got a great outcome. We call this intuition, instinct, nose, gut, or just plain good luck – whatever we call it, it is difficult to replicate consistently.

Surrounded by so many models, how can biotech teams strike a balance between making data-based decisions and over-relying on the ‘fake certainty’ of a model output?

  1. Open the black box

It is important that everyone working on a development team understands the inputs and how the model is built. Having a culture where those building the model expect and welcome being challenged is vital. Team members also need to refuse to be intimidated by the math and should ask questions until they understand the broad strokes of the calculations.

We had an experience last year where a program was failing to achieve our quantitative criteria for advancement in research. The manufacturing team lead opened the black box and noticed that the input for potency was not consistent with the data their team was generating. Technical digging into this discrepancy changed the outcome and taught us a lot about assay conditions.

  2. It is the journey, not the destination

Nobody can tell the future, not even the computational biologist. Having said that, the exercise of scoping out a model can identify some key questions, whether on the biology or the epidemiology, that can focus attention on the important rather than the interesting but irrelevant.

In every model, there are variables that have greater influence on the outcome and variables where there is greater uncertainty. Identifying where those two overlap tells the team where to focus resources to generate data that may be disproportionately informative; a simple sketch of that logic follows below. We integrate our quantitative biology team with the product team so these conversations occur in real time as programs advance. You can skip the trust falls; having a good, robust discussion on model assumptions and ‘what you would need to believe’ is my kind of team building!
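As a sketch of that influence-times-uncertainty screen, the toy Python below runs a one-at-a-time sensitivity analysis on a made-up efficacy model. The model form, input names, and ranges are all hypothetical; the general idea is to find the inputs whose plausible range produces the biggest swing in the output.

```python
# Toy efficacy model: predicted benefit as a function of three inputs.
# The model and all ranges are hypothetical, for illustration only.
def predicted_effect(potency, exposure, target_engagement):
    return potency * exposure * target_engagement

baseline = {"potency": 1.0, "exposure": 0.6, "target_engagement": 0.5}

# Plausible low/high bounds capturing each input's current uncertainty.
ranges = {
    "potency": (0.9, 1.1),             # well characterized
    "exposure": (0.3, 0.9),            # highly uncertain
    "target_engagement": (0.45, 0.55), # well characterized
}

for name, (lo, hi) in ranges.items():
    out_lo = predicted_effect(**{**baseline, name: lo})
    out_hi = predicted_effect(**{**baseline, name: hi})
    swing = abs(out_hi - out_lo)
    print(f"{name:20s} output swing: {swing:.3f}")

# The input with the largest swing (here, exposure) is both influential
# and uncertain: the place where new data buys the most information.
```

An input can be very influential but already well measured, or wildly uncertain but nearly irrelevant; neither deserves the next experiment. The overlap does.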

  3. Just because it is math does not mean it is not biased

Early in my career, I felt that I lacked the credentials (no MBA) to challenge some of the financial and Net Present Value modeling being used to determine which of my programs would be resourced and advanced. I woke up when I noticed a direct correlation between how strongly influential leaders felt about a program a priori and the result of the analysis.
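For readers who have never pulled apart an NPV model, a minimal sketch (with entirely invented figures) shows how much leverage a single assumption has over the answer. Here, nudging the assumed probability of success swings a hypothetical program from value-destroying to value-creating.

```python
# Minimal risk-adjusted NPV sketch; all figures are hypothetical.
def npv(cash_flows, p_success, discount_rate=0.10):
    """Risk-adjusted NPV: future revenue weighted by P(success)."""
    total = 0.0
    for year, (cost, revenue) in enumerate(cash_flows, start=1):
        # Development costs are spent regardless; revenue only
        # arrives if the program succeeds.
        net = p_success * revenue - cost
        total += net / (1 + discount_rate) ** year
    return total

# (development cost, potential revenue) per year for a toy program.
program = [(50, 0), (40, 0), (30, 0), (0, 120), (0, 150), (0, 150)]

# A modest shift in the assumed probability of success flips the sign.
for p in (0.15, 0.35, 0.55):
    print(f"P(success)={p:.2f}  ->  NPV = {npv(program, p):7.1f}")
```

If a leader already believes in a program, it takes only a slightly optimistic success probability to make the spreadsheet agree, which is exactly why the assumptions deserve more scrutiny than the output.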

It is important to recognize that any model is a tool to help with decision making, not some kind of crazy 8-ball decision machine. The process of building a model forces the team to spell out assumptions and to assess the strength of the evidence that supports them. That process reduces bias (and noise) in decision making only if it is open, explicit, and data-driven.

  4. Seek progress, not perfection

We have been very clear that we are seeking progress, not perfection, from our modeling efforts. Following the first tranche of definitive clinical or commercial data, it can be tempting to trash or lionize the model or the modeling team. What is more important is to revisit the model in each case and ask what the team has learned, where the model can be improved, and where investment needs to be focused for the next effort.

  5. Look for the ‘re-usable’ parts

The Synlogic platform is built on synthetic biology, where a core concept is generating re-usable genetic parts. As we develop a new strain, we can include parts that we have used, and that have worked well, in prior programs. Similarly, model development can benefit from re-usable components that have been validated by prior programs or research. It is worth considering investments that bring greater precision to inputs that will be included in multiple program models – for Synlogic, that means everything from better estimates of gastrointestinal transit time to research with payers about the relevance of different kinds of endpoints.
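As a hypothetical illustration of what a re-usable model part might look like in code, the sketch below packages a gastrointestinal transit time estimate as a shared function that each program's model can import. The distribution and its parameters are placeholders, not Synlogic's actual estimates.

```python
import numpy as np

# Hypothetical shared module: a validated input packaged once and
# imported into each program's model rather than re-derived each time.
def gi_transit_time_hours(rng, n=1):
    """Sample small-intestinal transit times (hours) from a lognormal fit.
    Parameters are illustrative placeholders, not published estimates."""
    return rng.lognormal(mean=np.log(3.5), sigma=0.3, size=n)

rng = np.random.default_rng(seed=0)

# Program A: a dose-timing model draws transit times from the shared part.
transit_samples = gi_transit_time_hours(rng, n=1000)
print(f"median transit ~ {np.median(transit_samples):.1f} h")

# Program B imports the same function, so any improvement to the
# underlying estimate propagates to every model that uses it.
```

The design payoff is the same as with genetic parts: validate once, reuse everywhere, and concentrate precision-improving investment on the components with the widest use.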

Models cannot tell us the future, whether in drug development or in a pandemic, but they help us make sense of a complex world, make more data-driven decisions, have better conversations, and focus on the right things.

Special thanks to Mark, Nick, and Bill, who have peered into the black box with me over the past few years.
