What scientists (and employees on bonus schemes) don’t want you to know by Marvin Faure


November 16, 2013

As my colleague Nick McRoberts noted last week, errors are common, both in science and in business, and having to admit that we were wrong is one of the most difficult things to do. In this article I look at the size of the problem and its underlying causes before identifying what we, as leaders, can do to reduce the occurrence of such errors.

The Economist addressed the issue of uncorrected errors in published scientific papers in an editorial and a four-page briefing[i] published last month, highlighting a series of shocking revelations on the extent of the problem: “there are errors in a lot more of the scientific papers being published, written about and acted upon than anyone would normally suppose, or like to think.” If you are like most people, you probably expect some errors in published research. You might also expect that, first, these would be few and far between and, second, they would quickly be picked up and corrected by other scientists. Errors in perhaps 5% to 10% of papers, mostly corrected within two or three years? If so, you would sadly be very wide of the mark.

A team led by Florian Prinz at Bayer HealthCare wrote in Nature Reviews Drug Discovery in September 2011: “validation projects that were started in our company based on exciting published data have often resulted in disillusionment when key data could not be reproduced”. Analysing 67 different research projects, they found that “only in ~20–25% of the projects were the relevant published data completely in line with our in-house findings”[ii]. In other words, in 75% to 80% of the projects the published findings could not be fully reproduced.

In a commentary published in Nature in 2012[iii], Glenn Begley, former head of global cancer research at Amgen, described how, over a ten-year period, he and his team tried to replicate the findings reported in 53 so-called “landmark” research papers. The result? They were unable to replicate 47 of the 53. Put another way, only six of these papers (around 11%), published in highly reputable journals and considered to form the foundational assumptions for much of the research in the field, could be considered reliable. The other 89% of the “landmark” papers are questionable!

In an article published at the same time, Reuters quoted a former Merck scientist reporting similar findings: “It drives people in industry crazy.”[iv]

One cannot discuss the state of research in medical science without mentioning John Ioannidis. In a famous paper[v] published in 2005 and cited over 1,100 times, he argues that most published research findings in the medical field are false: “There is increasing concern that in modern research, false findings may be the majority or even the vast majority of published research claims. However, this should not be surprising. It can be proven that most claimed research findings are false.” In spite of the controversial and potentially damaging nature of these claims, they have been widely accepted as essentially correct. (For a detailed article on the work of Dr Ioannidis and his team, read David Freedman’s 2010 article in The Atlantic[vi].)
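To see why this is more than rhetoric, it helps to run the numbers once. Below is a minimal sketch in Python of the Bayesian logic behind Ioannidis’s argument; the prior, power and significance figures are illustrative assumptions of mine, not values taken from his paper.

```python
# A minimal sketch of the Bayesian logic behind Ioannidis's argument.
# All numbers are illustrative assumptions, not taken from his paper.

def positive_predictive_value(prior, power, alpha):
    """Probability that a statistically significant finding is really true.

    prior -- fraction of tested hypotheses that are genuinely true
    power -- probability that a true effect yields a significant result
    alpha -- significance level (false-positive rate for null effects)
    """
    true_positives = prior * power
    false_positives = (1 - prior) * alpha
    return true_positives / (true_positives + false_positives)

# Suppose 1 in 10 tested hypotheses is true, studies have 50% power,
# and the conventional 5% significance level is used:
print(f"{positive_predictive_value(0.10, 0.50, 0.05):.0%}")  # ~53% true
# In a more speculative field (1 in 20 hypotheses true), the chance
# that a published "positive" finding is real drops to roughly a third:
print(f"{positive_predictive_value(0.05, 0.50, 0.05):.0%}")  # ~34% true
```

The striking point is that nothing dishonest need happen: with low prior odds and modest statistical power, a majority of “significant” findings can be false even when every test is run correctly.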

If published research in the “hard sciences” can be so shockingly misleading, how about in the softer areas, such as economics (or, dare we say it, psychology)?

In their 1992 paper[vii], De Long and Lang asked the provocative question “Are all economic hypotheses false?” After analysing articles published in the top economics journals over several years, they found 78 hypotheses that were confirmed by the evidence. In none of the 78 cases, however, could the confirmation be called convincing. Their conclusion: some of the 78 “confirmed” hypotheses might be true, but probably not more than about a third of them. This means that when a top economics journal publishes an article claiming a hypothesis to be true, there is roughly a two-thirds probability that it is actually false!

So how about psychology? Unfortunately, it would seem to be amongst the worst-hit fields. An article published in Nature in May 2012[viii] provides a helpful recap of the issues, noting for example a study by Daniele Fanelli at the University of Edinburgh. Comparing the research published in 18 different fields of science, Fanelli found that neuroscience & behaviour, social psychology and psychiatry/psychology were amongst the worst offenders: these fields were considerably more likely to report positive results than, for example, the space sciences or geoscience.

The causes of all this dysfunction can, to a very large extent, be boiled down to inappropriate incentives. The Economist article cited above quotes Brian Nosek, a psychologist at the University of Virginia: “There is no cost [to the scientists] to getting things wrong. The cost is not getting them published”. In other words, strong and perverse incentives push scientists to do the wrong thing.

These incentives include:

  • Scientific reputations are built more on the quantity of publications than on their quality: the mantra is “publish or perish”.
  • There is an overwhelming publication bias in favour of positive findings versus negative ones.
  • The most striking findings have the best chance of being published.
  • Replicating other people’s work is a thankless task and hard to get funded.
  • Studies that report failure to replicate previously published results are often refused for publication.
  • Competition for science jobs and funding for research projects is intense, making it essential to stand out from the crowd.

I am not suggesting that the majority of scientists are frauds. Alongside the egregious cases of genuine fraud there are many examples of honest error and wishful thinking (or confirmation bias, to use the scientific term). Statistical analysis is a particular problem: according to The Economist, only a small proportion of scientists have the deep technical skills needed to tease out what is truly significant in their experimental data.
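One concrete way honest researchers fool themselves is the multiple-comparisons trap. The sketch below, with purely illustrative numbers, shows how quickly the odds of a spurious “significant” result grow when many hypotheses are tested against the conventional 5% threshold.

```python
# Illustrative sketch of the multiple-comparisons trap: testing many
# true-null hypotheses at the 5% level almost guarantees that some
# look "significant" purely by chance.

alpha = 0.05  # conventional significance threshold

for n_tests in (1, 5, 20, 100):
    # Chance that at least one of n_tests null effects passes the test
    p_false_alarm = 1 - (1 - alpha) ** n_tests
    print(f"{n_tests:>3} tests -> {p_false_alarm:.0%} chance of a false positive")

# 20 tests already give a ~64% chance of a publishable-looking result,
# even when every single hypothesis examined is false.
```

A researcher who measures twenty outcomes and reports only the one that “worked” has, in effect, manufactured a positive finding out of noise.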

The unfortunate fact is that all these issues are systemic and hence extremely hard to change. The heavy pressure to publish positive, striking findings will ensure a continued stream of misleading research until the systemic issues are addressed; and since the financial stakes are so high, that is unlikely to happen soon.

As Dr Ioannidis said in the previously cited article, “There may not be fierce objections to what I’m saying. But it’s difficult to change the way that everyday doctors, patients, and healthy people think and behave.”

One praiseworthy attempt to moderate the system is the website PsychFileDrawer, created for psychologists to submit unpublished replication attempts, whether successful or not. After a slow start the site is now the centre of lively debate, and includes a list of the Top 20 studies users would most like to see replicated.

In another attempt to improve things, Daniel Kahneman, Nobel Laureate and Professor of Psychology at Princeton, published a controversial open letter in September 2012 to the key players in the popular sub-field of social priming, warning of a “looming train wreck” and making several suggestions for the field to regain credibility. (Social priming is a phenomenon in which people’s decisions or actions are apparently affected by seemingly irrelevant events taking place just beforehand. It holds out the promise of influencing people without their being aware of it, and is therefore of interest to marketing departments and governments, amongst others.) Kahneman’s core proposal is to commit to a rigorous programme of replication and validation, and to pre-commit to publishing the results, “letting the chips fall where they may”. I have been unable to find any evidence that his suggestions are being adopted. Indeed, on 4 February 2013 Kahneman wrote in the ongoing email debate: “[this] refusal to engage in a legitimate scientific conversation … invites the interpretation that the believers are afraid of the outcome”[ix].

So what does all this mean for business? Apart from the healthy scepticism we should all adopt with regard to every new research paper, we might take this as a wake-up call to examine the issue of inappropriate incentives in our own businesses. This is an issue we have addressed before on this blog, but it bears repetition because it is so prevalent.

MBO (or Management By Objectives) has become so widespread in organisations that it is rarely challenged. On the face of it an excellent idea, it often leads to unintended consequences that are damaging to the organisation. If you provide a clear, measurable objective to someone, and attach a substantial bonus to its achievement, surely you have motivated them in the best interests of the company? Well, possibly. The odds are, however, that you have simply motivated them to do everything they can to get their bonus paid, no matter what the consequences to the organisation.

I won’t repeat here the arguments that have raged back and forth between opponents and proponents of bonus schemes (see, for example, my previous blog on this topic). Since bonuses are obviously not going away any time soon, the best suggestion I can make is that leaders pay very careful attention to the possible unintended consequences of misaligned schemes. Generally, the risk can be reduced by lengthening the time-scale and broadening the scope: setting an objective at the beginning of the year to complete some specific task by year end is far more likely to produce dysfunctional behaviour than linking the bonus to an overall business result measured at a higher level and over a three-year moving average, as the sketch below illustrates.
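To make the contrast concrete, here is a hypothetical comparison of the two schemes; the targets, payouts and yearly results are invented purely for illustration, not drawn from any real plan.

```python
# Hypothetical comparison of a single-year stretch target with a bonus
# tied to a three-year moving average. All figures are invented.

def annual_bonus(results, target=150, payout=10_000):
    """Scheme A: full bonus in any year the result hits a stretch target."""
    return sum(payout for r in results if r >= target)

def moving_average_bonus(results, target=110, payout=10_000, window=3):
    """Scheme B: full bonus when the trailing three-year average hits target."""
    total = 0
    for i in range(len(results)):
        recent = results[max(0, i - window + 1): i + 1]
        if sum(recent) / len(recent) >= target:
            total += payout
    return total

sustainable = [110, 110, 110, 110, 110]  # steady value creation
gamed = [100, 100, 200, 40, 40]          # one engineered spike, paid for later

for name, profile in (("sustainable", sustainable), ("gamed", gamed)):
    print(name, annual_bonus(profile), moving_average_bonus(profile))

# sustainable -> 0 under Scheme A, 50,000 under Scheme B
# gamed       -> 10,000 under Scheme A, 20,000 under Scheme B
# The annual stretch target rewards only the gamed profile; the moving
# average pays the steady performer two and a half times more.
```

Real schemes are of course more nuanced, but the asymmetry is the point: the longer horizon makes the one-off spike, and the damage it causes in later years, visible in the payout.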

Regular readers of this blog will know the stress we place on keeping a proper balance between the three elements of leadership first proposed by Aristotle: our old friends Logos, Ethos and Pathos. Since the MBO process is very Logos (intellectual) it needs to be accompanied by appropriate attention to Ethos (aligning the behaviours and observable values-in-practice) and Pathos (creating a favourable emotional reaction).

We have seen many well-meaning attempts to make measurable objectives out of behaviours and even emotions. Including employee climate surveys in bonus schemes is one example. But again, have the unintended consequences been properly thought through? One doesn’t need a PhD in organisational psychology to see that employees are likely to respond differently once they know the boss’s bonus is tied to the outcomes.

The effort required to think through all this is substantial, and there is no easy solution. However, surely creating the conditions for others to do their best work is the primary role of the leader?

(Written by M. Faure, co-edited by D. Marlier, N. McRoberts & M. Newman)


[i] The Economist, Oct 19th – 25th 2013. Unreliable research – Trouble at the lab. http://www.economist.com/news/briefing/21588057-scientists-think-science-self-correcting-alarming-degree-it-not-trouble

[ii] Prinz, F., Schlange, T., Asadullah, K. Nature Reviews Drug Discovery 10, 712 (September 2011). Believe it or not: how much can we rely on published data on potential drug targets? http://www.nature.com/nrd/journal/v10/n9/full/nrd3439-c1.html

[iii] Begley, C.G. and Ellis, L.M. Nature 483, 531–533 (29 March 2012). Drug development: Raise standards for preclinical cancer research. http://www.nature.com/nature/journal/v483/n7391/full/483531a.html

[iv] Begley, S. Reuters (28 March 2012). In cancer science, many “discoveries” don’t hold up. http://www.reuters.com/article/2012/03/28/us-science-cancer-idUSBRE82R12P20120328

[v] Ioannidis, J.P.A., PLoS Med 2(8): e124. (30 August 2005). Why Most Published Research Findings Are False. http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0020124

[vi] Freedman, D.H. The Atlantic (November 2010). Lies, Damned Lies, and Medical Science. http://www.theatlantic.com/magazine/archive/2010/11/lies-damned-lies-and-medical-science/308269/

[vii] De Long, J.B. and Lang, K. Journal of Political Economy, Vol. 100, No. 6, Centennial Issue (Dec., 1992), pp. 1257-1272. Are all Economic Hypotheses False? http://www.jstor.org/discover/10.2307/2138833?uid=3738016&uid=2129&uid=2&uid=70&uid=4&sid=21102900104157

[viii] Yong, E. Nature 485, 298–300 (17 May 2012). Replication studies: Bad copy. http://www.nature.com/news/replication-studies-bad-copy-1.10634

[ix] Abbott, A. Nature 497, 16 (02 May 2013). Disputed results a fresh blow for social psychology. http://www.nature.com/news/disputed-results-a-fresh-blow-for-social-psychology-1.12902


1 Comment

  1. Harish Jaisingh

    Let us not forget that the authors of this research are also human beings, who “easily” give up and succumb to the lure of rewards from sponsors. And the sponsors, in turn, have no means of verifying the research quickly.

    Consider this quote from the Planning Commission of India: “The plan panel defines the poor as one who spends less than Rs. 28 per day in urban areas and Rs. 22.5 in rural areas.”
    http://www.thehindu.com/news/national/montek-in-eye-of-storm-over-plan-panels-poverty-estimates/article3024284.ece
    (Search for “Montek Singh Rs 28”.)

    Where the panel members could buy food on less than half a dollar per day, I do not know; and this says nothing of the other basic needs, clothing and shelter, that people have. Of course the government was trying to portray a better image, but it ended up provoking debates in the media and in parliament.

    Too much information is floating around, but where is the reality?

    The good thing is that it is now accepted that very little of this research is true.
