Academic bias & biotech failures

Posted March 28th, 2011 in Uncategorized

I just met with an entrepreneur who was the founding CEO of a company created around an academic lab’s discoveries. It was a fascinating new approach to drugging hot receptor targets.  To protect the innocent I won’t mention any names, but Atlas Venture looked at the deal back in 2008 and, although it was intriguing, we ended up passing.  Thankfully so, because we dodged a bullet – the company was recently shut down.

The reason: the foundational academic science was not reproducible outside the founder’s lab.

The company spent $5M or so trying to validate a platform that didn’t exist.  When they tried to directly repeat the academic founder’s data, it never worked.  Upon re-examination of the lab notebooks, it was clear the founder’s lab had at the very least massaged the data and shaped it to fit their hypothesis.  Essentially, they systematically ignored every piece of negative data.

Sadly, this “failure to repeat” happens more often than we’d like to believe.  It has happened to us at Atlas several times in the past decade.

The unspoken rule is that at least 50% of the studies published even in top-tier academic journals – Science, Nature, Cell, PNAS, and the like – can’t be repeated with the same conclusions by an industrial lab. In particular, key animal models often don’t reproduce.  This 50% failure rate isn’t a data-free assertion: it’s backed up by dozens of experienced R&D professionals who’ve participated in the (re)testing of academic findings. This is a huge problem for translational research, and one that won’t go away until we address it head on.

The reality is that academic research operates as a tournament: winners get the spoils, losers get nothing. Publish or perish.  Grant funding is intensely competitive, and careers are on the line.  Typically, only positive findings get published, not negative ones.  This pressure creates a huge conflict of interest for academics, and a strong bias to write papers that support the hypotheses laid out in grant applications and prior publications.  To think there is only objectivity in academic research, and pervasive bias in industry research, is complete nonsense.

There’s a rich literature on “Pharma bias” in publications (e.g., Pharma conflicts of interest with academics, clinical trial reporting); according to PubMed, 63 peer-reviewed articles published in the past 15 months discuss pharma industry bias.

But what about academic bias?  Or the lack of repeatability of academic findings?  I couldn’t find a single paper on the subject in PubMed from the past few years.

So what can drive the failure to independently validate the majority of peer-reviewed, published academic findings? I’m sure there are cases of outright fabrication or falsification of data, but as an optimist I believe those must be a tiny percentage: most of the time I think it’s simply the influence of bias.  A few possible hypotheses for how this bias could manifest itself:

  1. Academic investigators directly or indirectly pressured their labs to publish sensational “best of all experiments” results rather than the average or typical study;
  2. The “special sauce” of the author’s lab – how the experiment was done, what serum was used, what specific cells were used, etc. – led to a local optimum of activity in the paper that can’t be replicated elsewhere and isn’t broadly applicable; or,
  3. The lab systematically ignored contradictory data in order to support its hypothesis, often discounting conflicting findings as technical or reagent failures.

Importantly, how are venture capitalists who invest in biotech supposed to engage with cool new data when repeatability is so low?  Frankly, most VCs don’t do early-stage investing these days, and this resistance to funding early academic spin-outs is in part due to the insidious impact of the sector’s high failure rate with academic reproducibility (a.k.a. ‘bias’).  But for those of us who remain committed to early-stage investing, I’d suggest there are at least two key takeaways for VCs:

  • Findings from a single academic lab are suspect. If other labs haven’t validated them in the peer-reviewed literature, they’re very high risk.  The work is probably bleeding edge rather than cutting edge.  If it’s only a single lab, it’s likely only a single post-doc or grad student who has actually done the work.  Given the idiosyncrasies of lab practices, that’s a concentrated risk profile.  Wait for more labs to repeat the work, or conduct a full lab notebook audit.
  • Repeating the findings in an independent lab should be a gating step before investing. Don’t dive into a Series A financing prior to externally validating the data with some real “wet diligence”: sign an option agreement with an MTA, and repeat the study in a contract research lab or a totally independent academic lab.

These two conclusions should help reduce the “reproducibility problem” for startups.

There are other implications of this problem, more than I can discuss here.  But one is around the role of tech transfer offices.  Although many TTOs are keen to start “seed funds” to spin out new companies, this seems like a waste to me.  I’d argue that the best use of these academic “seed” funds would be to validate an investigator’s findings in a reputable contract research lab that industrial partners and VCs would trust.  If a TTO could show third-party data supporting a lab’s striking findings, the prospects for funding would increase significantly.  This is the type of de-risking that TTOs should focus on.

The bottom line is that we need to confront the issue and figure out how to reduce academic bias and improve the external validation of published findings – this will undoubtedly reduce the failure rate of new biotechs and bring more capital back into the early-stage arena.
