Nessan Bermingham

Caveat Subscriptor When Academia Touts A Breakthrough

Posted March 31st, 2017 by Nessan Bermingham, in Biotech startup advice, From The Trenches, Translational research

This blog was written by Nessan Bermingham, CEO of Intellia Therapeutics, as part of the From The Trenches feature of LifeSciVC


Academic discoveries are oxygen for entrepreneurs. But, caveat subscriptor – be careful what you sign. Even high-profile studies in top journals are not always what they are cracked up to be.

Scientists at the Large Hadron Collider (LHC) at CERN had good reason to be excited when they noticed a tantalizing bump on a data plot in December 2015: an excess of photon pairs with a combined energy of 750 gigaelectronvolts (GeV) hinted at the discovery of a particle roughly six times more massive than the famous Higgs boson, and at a more complete theory of nature (1).

The physics world was abuzz. More than 500 research papers were submitted. Some theoretical physicists called for a re-evaluation of the Standard Model. Independent data from both the ATLAS and CMS detectors were cited as scientists pinned the probability of the particle’s existence at greater than 999 in 1,000.

Except, it was not. The measurements were nothing more than “statistical blips”.

This is not the first time scientists got burned by statistical flukes. In 1976, Leon Lederman and his team announced the discovery of the upsilon, whose existence had been predicted by the Standard Model. Further data, however, showed that this particle did not actually exist, and the “discovery” was promptly renamed “Oops-Leon” (2). The uproar contributed to the adoption of the five-sigma standard that physicists now live by: an observation should not be claimed as a discovery unless the probability of its arising by chance is less than roughly one in 3.5 million.
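The five-sigma threshold translates directly into a tail probability of the normal distribution. A minimal sketch (using only the standard library, nothing specific to the LHC analyses) shows where the "one in millions" figure comes from:

```python
from math import erfc, sqrt

def one_tailed_p(sigma: float) -> float:
    """Probability that a standard normal fluctuation exceeds `sigma`
    standard deviations in one direction (the upper tail)."""
    return 0.5 * erfc(sigma / sqrt(2))

# Five sigma: the bar a physics "discovery" must clear.
p5 = one_tailed_p(5.0)
print(f"5-sigma fluke probability: {p5:.3g} (about 1 in {1 / p5:,.0f})")
```

At two or three sigma, the same calculation gives fluke probabilities of a few percent or a few per mille, which is why early "bumps" like the 750 GeV excess can look convincing and still evaporate.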

Statistical anomalies are not the sole domain of physics, either.

The Center for Open Science’s Reproducibility Project found that, of 100 studies published in three top psychology journals in 2008, fewer than half could be replicated (3). And in January of this year, the Center for Open Science and Science Exchange published the first five of 29 replication studies conducted as part of “Reproducibility Project: Cancer Biology,” in which they attempted to replicate five highly influential mouse experiments. Three of the attempts failed or were inconclusive. The other two found somewhat similar results, though with smaller effects (4). LifeSciVC has written before on these initiatives and the failure to replicate academic work (here, here). Biotech is not immune to scientific failures.


As entrepreneurs and investors we are constantly looking for the next big discovery. We stand ready to assemble the necessary resources to realize the potential of Big Discoveries as best we can. We read the latest peer-reviewed articles in journals such as Cell, Science, and Nature, looking for the next breakthrough. And, once it is identified, we mobilize our internal resources and capital to explore and build on that “seminal” discovery.

I have had the fortune of benefiting from one such discovery – Intellia was born from a publication in Science. However, prior to Intellia, I paid my school fees and learned about the other side of this coin – data irreproducibility.

My lesson started with a high-profile publication in a top-twenty journal, with a supporting editorial highlighting its significance. Radio, television and newspapers quickly picked up on the discovery and touted the potential impact on mankind. I was a venture partner at Atlas at the time, and this paper and the academics who published it were discussed at length during our Monday meeting. The decision was made to evaluate the potential to launch a company around the discovery. We carried out our initial diligence and, in partnership with a number of like-minded groups, we seeded the entity, signing an option agreement for the underlying IP. We raised a material amount of capital for a seed financing and began our work. We devised a research plan with parallel tracks: the first ran through a sponsored research agreement (SRA) with the academics, while the second aimed to replicate the paper through CROs.

After spending more than a year on the project trying to replicate the data, we were unable to validate the conclusion of the original paper. During this time the academic partners continued to generate additional data supporting and building on their original peer-reviewed publication. However, with the data from our CROs in hand, we elected to dissolve the entity and return the rights. As we began that process, a pharma company published its own paper refuting the original one and mirroring the data we had generated. Since then, a number of additional publications have failed to provide supporting evidence for the original conclusion.

Nor was this an isolated case. In each instance the respective scientists truly believed their observations and conclusions. There was nothing nefarious in their approach, nor in their disclosures.


So, be skeptical and look at the actual underlying data carefully.

Ask the critical questions, in line with Begley’s rules of reproducibility:

  • Check the underlying biology. Does it make sense against what has been shown before? Or, does it contradict what others have proven?
  • Is the observed effect actually therapeutically relevant? A statistically significant small difference is still a small difference.
  • What statistical analysis was used in the study, and how was it done? Does the p value reflect an actually relevant effect? And how many times was the study replicated – not just the N within a study, which is important, but the number of independent replicates? Also, is there consistency across the datasets presented, e.g. an n of 6 in one figure but an n of 16 in another for the same experiment? Might that indicate that some data points were rejected or left out of the figure?
  • Watch the scale on the y-axis – this can be visually misleading.
  • Was there a positive control used in the study? (This may not always be possible.)
  • Request the actual raw data and do the analysis yourself.
  • Check the reagents used in the experiment. Do they actually do what is represented?
  • Have the principal investigators been drinking the proverbial Kool-Aid, or are they skeptical of the data and critique it with you? Don’t let the hand waving and anecdotes obfuscate the need for hard data.
  • Have the data and analysis audited by independent experts and listen to what they say. Pick a handful to review the data independently and make your own determination after discussing it with them.
  • Finally, ask whether now is the time to build a company around the discovery or whether to let it incubate for another year or two before taking that step.

If you decide to proceed based on your diligence, I would suggest you run the killer experiment in an independent contract research organization that sources new reagents for the study. It will haunt you if you don’t.

Above all, stay in stealth mode until the data have been replicated. An anomaly found in due diligence can sink a project, but a public promise that is not kept leaves a permanent black mark. Oops-Leon.



  1. Natalie Wolchover, Quanta Magazine (June 24, 2016), “Rumors Cast Doubt on Diphoton Bump,” accessed March 2017.
  2. J. Yoh (1998). “The Discovery of the b Quark at Fermilab in 1977: The Experiment Coordinator’s Story” (PDF). AIP Conference Proceedings. 424: 29–42.
  3. Open Science Collaboration (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716. doi: 10.1126/science.aac4716
  4. Joel Achenbach and Laurie McGinley, Washington Post (January 18, 2017), “Researchers struggle to replicate 5 influential cancer experiments from top labs,” accessed March 2017.
  • Zsombor Lacza

    Great post, thanks. As a researcher-clinician-entrepreneur, I can attest that this is indeed the case. I once had to publish an erratum on one of my papers due to a technical but important error: the y-axis was scaled up 100-fold relative to what it should have been. This was noted during the peer review of one of my follow-on papers, but the original was already out and well-cited. The topic was a mysterious and highly important discovery: the capacity of mitochondria to generate nitric oxide. In the end I published one paper showing that it exists and then about five others arguing that, no, it does not, and that we (the scientific community) were misled by similar but not quite right molecular mechanisms and flaws in the experimental design. The good news is that the research led to another, much simpler discovery that ended up in valid patents and research, later successfully transferred to industry and proven by independent teams. So not all is bad with bad data; we just need to be critical of ourselves. Not very easy…