Academic bias & biotech failures

I just met with an entrepreneur who was the founding CEO of a company created around an academic lab’s discoveries. It was a fascinating new approach to drugging hot receptor targets. To protect the innocent I won’t mention names, but Atlas Venture looked at it back in 2008 and, although it was intriguing, we ended up passing on the deal. Thankfully so, because we dodged a bullet – the company was recently shut down.

The reason: the foundational academic science was not reproducible outside the founder’s lab.

The company spent $5M or so trying to validate a platform that didn’t exist.  When they tried to directly repeat the academic founder’s data, it never worked.  Upon re-examination of the lab notebooks, it was clear the founder’s lab had at the very least massaged the data and shaped it to fit their hypothesis.  Essentially, they systematically ignored every piece of negative data.

Sadly this “failure to repeat” happens more often than we’d like to believe.  It has happened to us at Atlas several times in the past decade.

The unspoken rule is that at least 50% of the studies published even in top-tier academic journals – Science, Nature, Cell, PNAS, etc. – can’t be repeated with the same conclusions by an industrial lab. In particular, key animal models often don’t reproduce. This 50% failure rate isn’t a data-free assertion: it’s backed up by dozens of experienced R&D professionals who’ve participated in the (re)testing of academic findings. This is a huge problem for translational research and one that won’t go away until we address it head on.

The reality is that we live in a tournament-model world of academic research: winners get the spoils, losers get nothing. Publish or perish. Grants are intensely competitive, and careers are on the line. Only positive findings are typically published, not negative ones. This pressure creates a huge conflict of interest for academics, and a strong bias to write papers that support the hypotheses included in grant applications and prior publications. To think there is only objectivity in academic research, and pervasive bias in industry research, is complete nonsense.

There’s a rich literature on the “Pharma bias” in publications (e.g., Pharma conflicts of interest with academics, clinical trial reporting); according to PubMed, 63 peer-reviewed articles in the past 15 months discuss pharma industry bias.

But what about academic bias?  Or the lack of repeatability of academic findings?  I couldn’t find a single paper in PubMed over the past few years.

So what can drive the failure to independently validate the majority of peer-reviewed, published academic findings? I’m sure there are cases where it’s truly fabrication or falsification of data, but as an optimist I believe that must be a tiny percentage: most of the time I think it’s just the influence of bias. A few possible hypotheses exist for how this bias could manifest itself:

  1. Academic investigators directly or indirectly pressured their labs to publish sensational “best of all experiments” results rather than the average or typical study;
  2. The “special sauce” of the author’s lab – how the experiment was done, what serum was used, what specific cells were played with, etc. – led to a local optimum of activity in the paper that can’t be replicated elsewhere and isn’t broadly applicable; or,
  3. Systematically ignoring contradictory data in order to support the lab’s hypothesis, often leading to discounting conflicting findings as technical or reagent failures.

Importantly, how are venture capitalists who invest in biotech supposed to engage with cool new data when the repeatability is so low? Frankly, most VCs don’t do early stage investing these days, and this resistance to fund early academic spin-outs is in part due to the insidious impact of the sector’s high failure rate with academic reproducibility (a.k.a. ‘bias’). But for those of us who remain committed to early stage investing, I’d suggest there are at least two key takeaways for VCs:

  • Findings from a single academic lab are suspect. If other labs haven’t validated them in the peer-reviewed literature, they’re very high risk. It’s probably bleeding edge rather than cutting edge. If it’s only a single lab, it’s likely only a single post-doc or grad student who has actually done the work. Given the idiosyncrasies of lab practices, that’s a concentrated risk profile. Wait for more labs to repeat the work, or conduct a full lab notebook audit.
  • Repeating the findings in an independent lab should be gating before investing. Don’t dive in with a Series A financing prior to externally validating the data with some real “wet diligence”. Sign an option agreement with an MTA, and repeat the study in a contract research lab or a totally independent academic lab.

These two conclusions should help reduce the “reproducibility problem” for startups.

There are other implications of this problem, more than I can discuss here. But one is around the role of tech transfer offices. Although many TTOs are keen to start “seed funds” to spin out new companies, this seems like a waste to me. I’d argue that the best use of these academic “seed” funds would be to validate the findings of an investigator’s work in a reputable contract research lab that industrial partners and VCs would trust. If a TTO could show 3rd-party data supporting a lab’s striking findings, the prospects for funding would increase significantly. This is the type of de-risking that TTOs should focus on.

The bottom line is we need to confront the issue and figure out how to reduce academic bias and improve the external validation of published findings – this will undoubtedly reduce the failure rate of new biotechs and bring more capital back into the early stage arena.

  • Anonymous

    Bruce, fantastic entry.

    I think you are spot on with the challenges associated with spinning out university technologies that are on the bleeding edge.

    I think there might be some middle ground with regards to university seed funds. Historically, seed funds have provided dismal returns to the university alumni who served as their investor base and created numerous COIs with university profs. Instead of creating traditional seed funds, universities might want to create translational or valley-of-death funds to support the confirmatory research that you speak of. However, I would caution that those funds should be independently managed to skirt around some of the COI issues / selection bias that will inevitably pop up should the university run the fund.

    I should also mention that some universities have already adopted CRO-like models. Duke University has created the Duke Translational Research Institute, which is essentially a CRO associated with the medical school. Profits that DTRI makes can be funneled into valley-of-death projects, which they call their Translational Pilot Project Program. Each year 7 projects are awarded $25-150K grants to complete translational experiments within DTRI. The people at DTRI are experienced industry alums who have the talent and expertise to shepherd the projects to an inflection point that might entice VCs.

    Duke is fortunate to be located in RTP where there is significant CRO talent (Quintiles, PPD, etc.), but that does not preclude other universities from pursuing similar models or creating JVs with established CROs.

  • Liamoggordan

    The Truth Wears Off
    Is there something wrong with the scientific method?

    http://www.newyorker.com/reporting/2010/12/13/101213fa_fact_lehrer

  • Respisci

    Bruce, you have tapped into an important area of discussion which I fear we just don’t address.

    I don’t think that I am alone in being unable to reproduce others’ results. I recall an instance during my post-doc where, in experiment after experiment, our lab consistently demonstrated a 30% effect of X on Y, while the published results had >70%. Our PI was getting more and more frustrated with his own team and questioned the skill/expertise of our group (and we were questioning ourselves). During a visit by the head of the lab which had published the findings, when asked about this 70% effect, he casually replied “that was our best result. Typically we see 30%”. To this day, I can still recall the shockwave that went through that room. For nearly a year our team was trying to reproduce results which even the parent lab couldn’t achieve. On top of that, our results were correct, which taught me an important lesson in believing in my own team/data.

    I fear that academic bias is alive and well. Compared to the drive for more publications so as to generate research funding, there is little incentive for researchers to retract publications when their work is not reproducible (or, as in the example I described, not reproducible to the same level/extent). This cycle just leads to bad science, or perhaps bad reporting of science.

    While I agree that it is imperative for VCs to confirm and verify findings prior to fully investing in a company, I would argue that it would benefit research as a whole, not just VC and industry, if we set rigorous standards for labs to verify their work prior to publication, or required amendments/retractions when the lab in question knows it isn’t working anymore.

    Alternatively, are there venues that allow for “we can/can not reproduce these results” ? I think we all know the challenge of trying to publish “me too” research or negative findings.

  • Pwoitach

    Spot on! In addition to bias, I have a hypothesis that there’s a more innocent but fairly systemic lack of rigor re: analytical technique that’s likely reinforced by the reward for getting a fast favorable outcome, resulting in speed at the expense of repeatability and precision. With all due respect to these brilliant people on the bench who are far smarter than me, the level of rigor and technique to accomplish the academic purpose is often inadequate for taking things forward without further work. Why is contamination such a problem in academic cell culture labs when it’s a non-issue in industrial labs? I’m convinced that people inventing things have different DNA from the ones who need to develop things. Remember, ALL proteins are stable in an academic lab.

    Perhaps TTOs can be sold on investing in some validation as well as pre-diligence to spruce up the package so that it is more of an actionable “tech transfer package” that can bring comfort but also accelerate execution once an investment is made. Working on two rescue situations as we speak where 6-12 months were lost because of early reliance on less-than-rigorous work in the academic lab. If some value accrued back to the TTO for policing and improving the situation…but the initial deal itself is often good enough…

  • Anonymous

    Bruce, thanks for touching on one of my own pet peeves – the common misperception that research published by academics is somehow “cleaner” than that from scientists working in industry. In my own experience, just the opposite is true. In fact, I think your estimate that 50% of high profile academic research is irreproducible is optimistic – I think the truth is even worse, especially in very hot areas, where the pressure for an academic scientist to publish before being scooped is especially intense, and where the rewards for being seen as a leader in the field may be more immediate than the cost of publishing something that turns out to be wrong. Having worked on both sides, there is no question that scientists working in industry just aren’t under these publish-or-perish pressures, and IMO often apply a significantly higher level of rigor before publishing. To me, one of the most important reasons to go to scientific congresses is to find out from friends in these fields what has been reproducible, and what hasn’t been. In two cases where I’ve asked PIs if there was any difficulty in generating data I knew to be non-reproducible, I was told the post-doc involved had to be pressured hard to get the “right” data! The majority of these non-reproducible papers are never retracted, but those in the field all know pretty soon. A common way of handling these papers is for the folks who write review articles to simply not cite them, and eventually the field cleanses itself.
    For investors not doing the experiments themselves, it’s a challenge to sort this out. I agree with you that some kind of external validation of the data can save a lot of agony down the road.

  • Pingback: links for 2011-03-28 « Brain Music – Gadgets, Neuroscience & More

  • Guest

    Very interesting post. While not directly addressing the reproducibility of results, this PLoS paper looks at fraud within science: http://www.plosone.org/article/info:doi/10.1371/journal.pone.0005738
    14% of respondents say they’ve seen their colleagues falsify data, and 72% say they’ve seen their colleagues use “questionable research practices” (self-reported fraud was, not surprisingly, much lower)… which is certainly going to contribute to a fair amount of nonsense publications.

    But I find the idea of at least 50% of papers not being reproducible difficult to swallow. I’d love to see some hard data backing that up (and then I’d want to see that study reproduced).

  • RB

    Bruce,
    Excellent post, and very timely blog.
    This is one of the great un-discussed problems of academia. We all know that this goes on, and it’s frankly demoralising. As Respisci says, this leads to a huge waste of time in other labs.

    I’d venture that a problem lies in the anonymous peer-review system that rules our lives. It’s very easy for an eminent researcher to protect their story using this mechanism. I’d be interested in other readers’ comments.

  • Guest

    As an academic researcher working in a high profile and competitive area, I can confirm that the vast majority of papers (especially those in high profile journals) are not reproducible. In the case of genetics, most papers are real (since it’s hard to fudge a mutation). However, most so-called “functional” papers (in the past this was called cell biology) are not reproducible. But, it is not in anyone’s interest to publicly call “bullshit,” so no one does (or, very very rarely). Since papers in high profile journals tend to beget papers in high profile journals, this type of science becomes self-perpetuating. However, every so often an important and reproducible discovery does emerge, and then the field shifts towards truth. From a VC point of view, I would never EVER base a big investment on a single paper. Most likely BS.

  • johnnyboy

    I work at a CRO, and this is a very common problem – companies (big and small) asking us to apply a model or technique from a published article, which on closer inspection turns out to have been deeply flawed, or described so poorly that it is impossible to reproduce. Trying to explain that to the client is understandably difficult, as the usual attitude out there is “well, it’s published so it’s true, why can’t you just do the same?”…

  • HelicalZz

    It occurs to me that one model that could be useful is a journalistic one. How about a ‘Journal of Confirmed Research’, which publishes only the results of studies that have been confirmed by a CRO consortium? Instead of sending manuscripts out for review, they send them out for robustness testing. It would be an expensive place to publish, but perhaps a useful one, and potentially worth the cost for platform R&D biotech operations looking for early stage capital – perhaps at the insistence of a venture placed board member who knows later series capital raises will be necessary.

  • Pingback: What If Those Wonderful Results Are Wrong? | Pharma Marketer

  • Long Time Researcher

    This is an extremely good post, and right. It is especially true when the new findings are most exciting or profound and use new reagents, or new animal or cell models; interestingly, the more pedestrian the finding, the more routine the models, and the more standard the assay, the more likely it is to be reproducible. About 50% of initial reports have significant errors and are irreproducible outside of the originating lab. The acid test of good science is that the data get reproduced, at least in most regards, in a second lab – a second lab in which none of the workers are scientific children of the originating lab. It is also important to verify that the second lab made their own reagents and did not just get them from the originating lab.
    This observation is also true of clinical trials; there the main issue is patient selection bias. The first study often selects motivated patients, either overtly or covertly or ignorantly, who are the most likely to benefit from the therapy under investigation. The second trial allows “sicker” or less good candidates, or poorer compliers, and the result no longer reaches p<0.05. A very nice article by Jonah Lehrer (12/13/10) in the New Yorker covers this clinical trial issue.

  • Pingback: Startup Failures « Lamentations on Chemistry

  • http://pulse.yahoo.com/_RVDRSGD5DYBFD6UWVYY24ZCAA4 Dick Turpin

    Clearly, what’s needed here is a step between the writing of a paper and the consideration of its publications by one’s peers. That “in between step” would more or less amount to the following claim: “Our lab has made the novel discovery of XXX – we are now requesting that our finding be reproduced by an independent lab pursuant to its consideration for publication in such-and-such journal.”

    In other words, the gating factor for publication becomes not only novelty (as it is now) but also reproducibility. As things now stand, there’s such great impatience in academic science to get novel findings out there – that’s not good at all, of course, because it sacrifices scientific accuracy for satisfaction of the ego.

    Somewhat facetiously: we must reduce the number of egomaniacs who practice science. In the best of all possible worlds, all scientific findings would be published anonymously to prevent the “personality factor” from derailing the higher drive for truth.

  • Cellbio

    Very good post, but my perspective offers a slightly different opinion. Not that I disagree with your assessment; in fact, I believe the repeatability rate of “aha” findings in Science and Nature is closer to 10%.

    My point of divergence from your view is your solution. I don’t think the issue is this repeatability rate, but the lack of qualified folks in VCs. I am not slamming you – I know, like, and respect several bright PhDs who went off to McKinsey, but to a person, they know almost nothing about drug discovery in a practical sense. I worked briefly in investing prior to the crash, and was shocked by the lack of informed decision making.

    So, I respectfully suggest that the other solution is to complement your teams with experienced drug discovery folks who know their craft. Standing alongside the McKinsey-trained whiz kids, you’d have a good team (I am being sincere). Remember, the technical crews of pharma and biotech knew that the Sirtris technology was fraught with technical error. The issue to address is the business climate that does not care to listen.

  • LifeSciVC

    Thanks Cellbio. The reality is that all good Life Science venture funds are surrounded by seasoned Pharma veterans. Not sure who you worked with in the investing world, but we have a large network of ex-Pharma R&D folks who are actively plugged into what we do to help us make better decisions, as well as improve portfolio/project execution.

    It’s also fair to say that when a VC is actively involved in starting, managing, and funding early stage companies, he or she will spend a lot of time on drug discovery issues. Right now we probably have 20+ discovery stage programs and another 30-40 development stage projects in our portfolio. If one is an active, hands-on investor/founder, one doesn’t actually have to sniff solvents in the fume hood to pick up on the nuances, challenges, and excitement of practical, in-the-trenches drug discovery.

    Lastly, there may have been scientists at GSK who didn’t think Sirtris was a great idea, but clearly a bunch of senior R&D leaders did. I doubt McKinsey had anything to do with that.

  • LifeSciVC

    Dick, great suggestion on the idea of a two-step publication process. Has some logistical issues, and would bias literature towards short-term experiments (rather than longer term survival studies, for instance). Another interesting approach would be to allow other investigators to ‘tag’ papers that they have been able to reproduce – the # of tags would help indicate the breadth of a study’s reproducibility.

  • Anonymous

    HelicalZz, love the idea of the Journal. It’s a more stringent version of the ‘tag’ concept I noted to Dick T above. Several of us have joked that the opposite journal would be worth publishing – or linking published papers to – the “Journal of Unreproducible Results”…

  • Imran Nasrullah

    This is very insightful and should be part of due diligence around early stage opportunities. Another issue to consider is to look at the statistical power behind many of these “key” studies. Discovery research using 10 or 20 mice as the sample size produces a poorly powered study. Really, how significant are the findings? While you obviously cannot run such studies on hundreds of mice, the ability to have reproducibility and repeatability is essential.
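
    As a rough illustration of how underpowered such studies tend to be, here is a minimal sketch (Python with statsmodels; the effect sizes are assumed purely for illustration, not taken from any particular study) of the power of a two-group comparison with 10 mice per arm:

        # Minimal sketch: power of a two-sample t-test with 10 mice per arm.
        # Effect sizes (Cohen's d) below are illustrative assumptions.
        from statsmodels.stats.power import TTestIndPower

        analysis = TTestIndPower()
        for d in (0.5, 0.8, 1.2):  # moderate, large, very large effects
            power = analysis.power(effect_size=d, nobs1=10, alpha=0.05, ratio=1.0)
            print(f"n=10 per arm, d={d}: power = {power:.2f}")

        # Moderate and even large effects are detected well under half the time,
        # so a single "significant" result from such a study is weak evidence.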

  • Cellbio

    Imran, don’t think this would help in most cases. The problem is less about having power to detect small effects, but total inability to see even the same directional trend. In the cases I have seen where the primary cause of the difference was determined, it related to quality of test agent. Academics often do not have the skill or experience to test their small molecule or protein reagent for purity, or if they do, accept low purity, 80-90%, as sufficient. The other piece that relates to this is the sensitivity of the models to perturbation, with many routes to achieve apparent efficacy that do not relate to interesting mechanisms and are not operational in human disease.

  • Anonymous

    Cellbio, thought I replied yesterday but it didn’t seem to post.

    Not sure who you worked with in the investing world, but in my experience all the good life science venture firms that focus on early stage company creation, like Atlas, certainly surround themselves with experienced drug discovery folks. We have dozens of former Pharma/Biotech researchers help us make better decisions, and shape our execution. I’ve enjoyed working closely and collaboratively with those folks and I’m sure they feel the same.

    Within the Atlas portfolio, my guess is we have 20+ discovery programs and 20+ development stage projects right now, and as active, hands-on investors we try to grasp the details of these – especially since in many circumstances the only material and significant events are the progression, or lack thereof, of drug discovery projects. Importantly, I don’t believe one needs to sniff solvents in a fume hood to gain a detailed appreciation of the nuances, challenges, key questions, killer experiments, and excitement of drug discovery. Immersion and engagement on the specifics of the project is key, as is learning lessons from the past; you are right that McKinsey strategy and cost cutting studies don’t do that. But many years of starting, managing, and funding drug discovery companies certainly do.

    Lastly, bench level scientists may have known the Sirtris technology “was fraught with technical error” but a bunch of other seasoned drug discovery folks at GSK sure bought into it. Btw, lots of VCs passed on investing b/c of technical questions they had about the deal. In the end, I hope for GSK’s sake the drug discovery teams there figure out how to make the SIRT programs work.

  • Cellbio

    Thanks for the reply. We would probably come close to agreeing, but remain a bit apart on the solvent sniffing value, but I am certainly a crusader for the value of drug discovery experience over McKinsey training, so be it.

    BTW, I do think Atlas is on the better end of the spectrum, at least as I can assess things from the outside.

  • http://mbcf.dfci.harvard.edu Paul_Morrison

    Good information and certainly worth discussing. I read all of the comments with high interest. I just think your data are as skewed as your subject matter, and possibly for the same reason. You are sniffing the ether of VC and entrepreneurship and all that goes with the territory of high-risk bleeding edge. 50% of papers are irreproducible? That implies that the most cutting edge lab could be producing papers of which half do not cut it? That vanish when the review paper does not mention them?

    I might hang with egomaniacs who will sell their soul, but they know that they aren’t going to be around for long if 50% of what they are publishing is bullshit. Or 3%. I know labs that have survived one BS paper. That paper hurt. But 50%?

  • http://twitter.com/exMBB exMBB

    The most sensational example would be the work of Homme Hellinga. Wonder if DARPA pulled its grant.

  • Pingback: Open innovation – an emerging hope for biopharma? | coolass.com

  • hmmmm

    VCs have traditionally fed into this problem by wanting to get first-mover advantage on a hot new discovery.

  • Pingback: Academic-Pharma Deals: A threat or opportunity for VC?

  • Pingback: Nothing Personal | Pharma Marketer

  • Bergan

    Late to the party here but I wanted to respond to your bolded point, “But what about academic bias?  Or the lack of repeatability of academic findings?  I couldn’t find a single paper in PubMed over the past few years.” That’s because PubMed is the opposite of the right place to look, seeing as they are a core part of the problem. Try looking in PLoS for articles by John Ioannidis, with titles such as, “Why Most Published Research Findings are False.” His team provides solid data evidencing the problem you’ve outlined here. There’s a great article on Ioannidis’ work in The Atlantic Monthly, Nov 2010, entitled, “Lies, Damned Lies and Medical Science.” Now, what to do about it! (Aside:  I was part of an Atlas start-up that did get hit by one of these academic irreproducibility bullets, so I know you know what you’re talking about here!)

  • Pwoitach

    And then there’s the other type of bias to not forget about when involving academic centers where incentives may not be aligned…”Research saboteur barred from work”     http://www.biotechniques.com/news/Research-saboteur-barred-from-work/biotechniques-316132.html?utm_source=BioTechniques+Newsletters+%26+e-Alerts&utm_campaign=12b0c9ed24-Daily_05102011&utm_medium=email

  • Andrew R

    Late reply to the pack but…
    The lack of reproducible findings is what has pushed the use of systematic reviews and meta-analyses in biomedical research in recent years. There is a growing understanding that one trial’s results aren’t really worth a damn, and that the only point at which we can really trust what’s there is once a systematic review – especially one that looks at negative results and “gray literature” sources – tells you whether or not a result exists, and how large it actually may be. Rather than a Journal of Confirmed Research, I would recommend focusing your own searches on these reviews and meta-analyses. Single trials are really for fellow researchers to spread ideas, but don’t (or at least shouldn’t!) ever claim that they unequivocally confirm a result; rather, they support a claim.
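
    To make the pooling idea concrete, a fixed-effect meta-analysis is essentially an inverse-variance weighted average of the individual study estimates. A minimal sketch follows (the effect estimates and standard errors are invented purely for illustration):

        # Minimal sketch of inverse-variance (fixed-effect) pooling across studies.
        # The (effect estimate, standard error) pairs below are made up for illustration.
        import math

        studies = [(0.70, 0.30), (0.25, 0.20), (0.10, 0.25)]

        weights = [1 / se ** 2 for _, se in studies]  # weight = precision = 1/variance
        pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
        pooled_se = math.sqrt(1 / sum(weights))

        print(f"pooled effect = {pooled:.2f}, 95% CI ~ +/- {1.96 * pooled_se:.2f}")
        # The single striking estimate (0.70) gets pulled toward the more precise,
        # less dramatic studies once all the evidence is weighted together.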

  • http://marciovm.com/ Marcio von Muhlen

    Absolutely.  Ioannidis has been at the forefront of publicizing this for a while.  

  • real scientist

    The stuff stated here is all obvious and not really correct. The lack of repeatability also has to do with the quality of the lab/investigator. While even I have gotten burned once by poor science, I knew going in that the particular scientist I was dealing with played fast and loose with his data. The key with early stage investing is that the VCs are real scientists, not just people with consulting backgrounds. I, for one, have been very successful with picking companies out of academics. In fact, ALL of my companies, which have either been sold or are undergoing interesting clinical trials, have come from academic labs. Perhaps the problem is really more on the VC side. Finally, I agree that outside validation might be important, but no one who is a real scientist would invest in something that is so new that no one else has repeated it (although I, for one, got lucky doing this once).

  • Anonymous

    Larry, thanks for the comments. “Real scientists”: aren’t those the same folks who published these papers in Nature/Cell/Science? Joking aside, I agree that being a good scientist is important for discerning the good from the bad. While I’ll ignore your consulting jab, as generalizations aren’t worth responding to, it’s clear your 20+ years at Genentech make you incredibly well qualified to evaluate new programs. That said, a lot of 20-year veterans haven’t made the transition as well as you have. Also, as you know, the reality of successful biotechs is not as simple as picking a good academic technology. Your very own Bioverdant is a good example – cool tech, largely out of Pfizer, that didn’t make it, from what I hear, for other reasons. Lastly, the lack of translatability from academic labs may not be a surprise to you, but given the responses I’ve received it’s clearly a surprise to others. In any case, we’d welcome the opportunity to look at new deals with you and your colleagues at USVP.

  • Pingback: How Many New Drug Targets Aren’t Even Real? | Pharma Marketer

  • http://www.leisnetwork.com Jim Leis

    Hi Bruce. I don’t know about biotech in particular, but I imagine the most common error is that the lab experiment is not an adequate or accurate measure of reality. That issue can take a variety of forms:

    1. The experiment is too controlled. In real life, people don’t always take their pills, or they develop new lifestyles, habits and competing incentives. Let’s face it; the real world is inter-causal. And yes, I understand how the mathematics supposedly works to account for these issues.

    2. The experiment is not a good approximation of reality. As an example, I’ve read quite a few academic research papers on financial incentives since the financial collapse. I wouldn’t pay a dime for most of them. Good luck to anyone who implements them.

    In reality, it is often difficult to duplicate successful business strategies from one firm to another even when they are in similar situations. Arguably the almost religious exaltation of laboratories for anything outside pure science drastically underrates real-life complexity. We may not need better or duplicative labs; we need new methods of real-life testing.

  • Jay

    Very nice entry! Thank you for pointing this out so clearly. In academia, people are well aware of the problem, but as stakeholders they are very unlikely to do anything about it, true to the motto: “Just don’t rock the boat!”. It is all a very delicate network of interactions that must not be disturbed too much or it will collapse. Now, has anybody ever considered the actual costs of this issue to society? The main root, in my humble view, is the incredible pressure on postdocs and students. After all, you have rent to pay and children to feed, so when your short term contract (2-3 years on average) comes to an end you might just find yourself in a situation where you have to publish something that is not as “mature” as it could be. I mean, the money spent on trying to reproduce other people’s results is enormous. I am a postdoc and have, on two occasions, been involved in a project that was entirely based on somebody else’s data and bias. On both occasions I wasted almost one year before the project was abandoned and the public investment (my lowly salary and reagent costs) evaporated into thin air. From my point of view, the way academic research is conducted these days is deeply flawed, starting with funding allocation, career structures/prospects, publishing and peer review. It’s all rather corrupt. I mean, everybody in the process is a stakeholder, right down from the top (the funding bodies and policy makers). After all, you would not really want the banks to regulate themselves, would you? There is no truly independent oversight.
    So, I think creating a different mindset and culture in research would help to solve the problem of reproducibility. People who don’t have to be afraid of losing their jobs and “careers” are more likely to escape the pressures that lead to flawed data, which in the end would save society as a whole a lot of resources.

  • Pingback: Verify Then Trust « Retronyma

  • reingu

    Thanks, superb and interesting.
    The topic of repeatability of some scientific findings was discussed in 2010 in The New Yorker:
    *The Truth Wears Off*
    Is there something wrong with the scientific method?
    http://www.newyorker.com/reporting/2010/12/13/101213fa_fact_lehrer

  • A Scientist

    Interesting – you state that “no one who is a real scientist would invest in something that is so new that no one else has repeated it”, then go on to say “(although I, for one, got lucky doing this once).” Do you realize that you have just confessed that you are not a ‘real scientist’?

  • RayPerkins

    Bruce,

    I am extremely impressed by your insistence that client prospect data must be repeated by a CRO. Ditto, your advice to tech transfer officers. This is a breath of fresh air in a space that seems to thrive on “tick-box reductionism” and twitter-length analyses.

    Your solution addresses what I would call the first of two crucial questions for an investor:
    Are the prospect’s data reliable? Assuming the answer is “yes,” I think there’s a logical next question: Are the conclusions sound? If the answer to this question is also “yes,” then the odds of having a winner much more than double.

    There are a great many intrinsic problems associated with the dominant research paradigm in pharma, problems that compromise investment even if the original data are repeatable.

    If this line of thinking is at all reasonable, I’d welcome an opportunity to open a dialogue.

    Again my congratulations on your very forward thinking.

    Ray