Category Archives: academic misconduct

Tools for the Evaluation of the Quality of Experimental Research

PDF of this post

Experiments can have important advantages over other research designs. The most important advantage concerns internal validity: random assignment to treatment conditions reduces the attribution problem and strengthens causal inference. An additional advantage is that control over participants reduces the heterogeneity of the treatment effects observed.

The extent to which these advantages are realized in the data depends on the design and execution of the experiment. Experiments have a higher quality if the sample size is larger and the theoretical concepts are measured more reliably and with higher validity. The sufficiency of the sample size can be checked with a power analysis. Most effect sizes in the social sciences are small (d = 0.2); to detect such an effect at conventional significance levels (p < .05) with 95% power, a sample of about 1,300 participants is required (see appendix). Even for a stronger effect (d = 0.4), more than 300 participants are required. The reliability of normative scale measures can be judged with Cronbach’s alpha. A rule of thumb for unidimensional scales is that alpha should be at least .63 for a scale consisting of 4 items, .68 for 5 items, .72 for 6 items, .75 for 7 items, and so on. The validity of measures should be justified theoretically and can be checked with a manipulation check, which should reveal a sizeable and significant association with the treatment variables.
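These figures can be checked with a quick back-of-the-envelope calculation. The sketch below is my own illustration, not from the paper version of this post: it uses the normal approximation to the power of a two-sided two-sample t-test, and it reproduces the alpha thresholds under the assumption that the mean inter-item correlation of the scale is about .30 (Spearman-Brown):

```python
from scipy.stats import norm

def total_n_two_groups(d, alpha=0.05, power=0.95):
    """Total N for a two-sided two-sample t-test (normal approximation)."""
    z_a = norm.ppf(1 - alpha / 2)   # 1.96 for alpha = .05
    z_b = norm.ppf(power)           # 1.64 for power = .95
    n_per_group = 2 * ((z_a + z_b) / d) ** 2
    return 2 * n_per_group

print(round(total_n_two_groups(0.2)))  # ~1300 participants for a small effect
print(round(total_n_two_groups(0.4)))  # ~325 for a stronger effect

def alpha_rule_of_thumb(k, r=0.30):
    """Cronbach's alpha implied by k items whose mean inter-item correlation
    is r (Spearman-Brown); r = .30 is the assumed value that reproduces
    the thresholds in the text."""
    return k * r / (1 + (k - 1) * r)

for k in (4, 5, 6, 7):
    print(k, round(alpha_rule_of_thumb(k), 2))  # .63, .68, .72, .75
```

For exact t-test power rather than the normal approximation, dedicated routines (e.g., in statsmodels or G*Power) give slightly larger numbers, but the orders of magnitude are the same.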

The advantages of experiments are reduced if assignment to treatment is non-random and treatment effects are confounded. In addition, a variety of other problems may endanger internal validity. Shadish, Cook & Campbell (2002) provide a useful list of such problems.

It should also be noted that experiments can have important disadvantages. The most important disadvantage is that the external validity of the findings is limited to the participants in the setting in which their behavior was observed. This disadvantage can be mitigated by creating more realistic decision situations, for instance in natural field experiments, and by recruiting (non-‘WEIRD’) samples of participants that are more representative of the target population. As Henrich, Heine & Norenzayan (2010) noted, results based on samples of participants in Western, Educated, Industrialized, Rich and Democratic (WEIRD) countries have limited validity in the discovery of universal laws of human cognition, emotion or behavior.

Recently, experimental research paradigms have received fierce criticism. Results often cannot be reproduced (Open Science Collaboration, 2015), and publication bias is ubiquitous (Ioannidis, 2005). It has become clear that there is a great deal of undisclosed flexibility in all phases of the empirical cycle. While these problems have been discussed widely in communities of researchers conducting experiments, they are by no means limited to one particular methodology or mode of data collection. They likely also occur in communities of researchers using survey or interview data.

In the positivist paradigm that dominates experimental research, the empirical cycle starts with the formulation of a research question. To answer the question, hypotheses are formulated based on established theories and previous research findings. Then the research is designed, data are collected, a predetermined analysis plan is executed, results are interpreted, the research report is written and submitted for peer review. After the usual round(s) of revisions, the findings are incorporated in the body of knowledge.

The validity and reliability of results from experiments can be compromised in two ways. The first is by juggling with the order of phases in the empirical cycle: researchers can decide to amend their research questions and hypotheses after they have seen the results of their analyses. Kerr (1998) labeled this practice of reformulating hypotheses HARKing: Hypothesizing After the Results are Known. Amending hypotheses is not a problem when the goal of the research is to develop theories to be tested later, as in grounded theory or exploratory analyses (e.g., data mining). But in hypothesis-testing research, HARKing is a problem, because it increases the likelihood of publishing false positives: chance findings are interpreted post hoc as confirmations of hypotheses that a priori are rather unlikely to be true. When these findings are published, they are unlikely to be reproducible by other researchers, creating research waste and, worse, reducing the reliability of published knowledge.

The second way the validity and reliability of results from experiments can be compromised is by misconduct and sloppy science within various stages of the empirical cycle (Simmons, Nelson & Simonsohn, 2011). The data collection and analysis phase as well as the reporting phase are most vulnerable to distortion by fraud, p-hacking and other questionable research practices (QRPs).

  • In the data collection phase, observations that (if kept) would lead to undesired conclusions or non-significant results can be altered or omitted. Also, fake observations can be added (fabricated).
  • In the data analysis phase, researchers can try alternative specifications of the variables, scale constructions, and regression models, searching for those that ‘work’ and choosing those that reach the desired conclusion.
  • In the reporting phase, things go wrong when the search for alternative specifications and the sensitivity of the results to decisions made in the data analysis phase are not disclosed.
  • In the peer review process, there can be pressure from editors and reviewers to cut reports of non-significant results, or to collect additional data supporting the hypotheses and the significant results reported in the literature.
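A minimal simulation, in the spirit of Simmons, Nelson & Simonsohn (2011), illustrates why this flexibility matters: testing several outcome variables under the null hypothesis and reporting whichever ‘works’ inflates the false-positive rate well beyond the nominal 5%. The group sizes and number of outcomes below are arbitrary assumptions for illustration:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
reps, n, n_outcomes = 2000, 20, 3   # assumed sizes, for illustration only

false_positives = 0
for _ in range(reps):
    # Two groups with NO true treatment effect, measured on three outcomes.
    control = rng.normal(size=(n, n_outcomes))
    treated = rng.normal(size=(n, n_outcomes))
    pvals = [ttest_ind(treated[:, j], control[:, j]).pvalue
             for j in range(n_outcomes)]
    # 'Flexible' reporting: the study counts as a success if ANY p < .05.
    if min(pvals) < 0.05:
        false_positives += 1

fpr = false_positives / reps
print(fpr)  # roughly .14 rather than the nominal .05
```

With three independent shots at significance, the expected rate is 1 − .95³ ≈ .14; adding optional stopping, covariate choices and subgroup analyses pushes it higher still.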

These QRPs have several consequences: null findings are less likely to be published; published research is biased towards positive findings that confirm the hypotheses; published findings are not reproducible; and when a replication attempt is made, published findings turn out to be less significant, less often positive, and of a lower effect size (Open Science Collaboration, 2015).

Alarm bells, red flags and other warning signs

Some of the forms of misconduct mentioned above are very difficult for reviewers and editors to detect. When observations are fabricated or omitted from the analysis, only inside information, very sophisticated data detectives, or stupidity of the authors can help us. Many other forms of misconduct are also difficult to prove. While smoking guns are rare, we can look for clues. I have developed a checklist of warning signs and good practices that editors and reviewers can use to screen submissions (see below). The checklist uses terminology that is not specific to experiments, but applies to all forms of data. While a high number of warning signs in itself does not prove anything, it should alert reviewers and editors; there is no norm for the number of flags. The table below only mentions the warning signs; the paper version of this blog post also shows a column with the positive poles. Those who would like to count good practices and reward authors can count gold stars rather than red flags. The checklist was developed independently of the checklist that Wicherts et al. (2016) recently published.

Warning signs

  • The power of the analysis is too low.
  • The results are too good to be true.
  • All hypotheses are confirmed.
  • P-values are just below critical thresholds (e.g., p < .05).
  • A groundbreaking result is reported but not replicated in another sample.
  • The data and code are not made available upon request.
  • The data are not made available upon article submission.
  • The code is not made available upon article submission.
  • Materials (manipulations, survey questions) are described superficially.
  • Descriptive statistics are not reported.
  • The hypotheses are tested in analyses with covariates and results without covariates are not disclosed.
  • The research is not preregistered.
  • No details of an IRB procedure are given.
  • Participant recruitment procedures are not described.
  • Exact details of time and location of the data collection are not described.
  • A power analysis is lacking.
  • Unusual / non-validated measures are used without justification.
  • Different dependent variables are analyzed in different studies within the same article without justification.
  • Variables are (log)transformed or recoded in unusual categories without justification.
  • Numbers of observations mentioned at different places in the article are inconsistent. Loss or addition of observations is not justified.
  • A one-sided test is reported when a two-sided test would be appropriate.
  • Test-statistics (p-values, F-values) reported are incorrect.

With the increasing number of retractions of articles reporting on experimental research published in scholarly journals, awareness of the fallibility of peer review as a quality control mechanism has grown. Communities of researchers employing experimental designs have formulated solutions to these problems. In the review and publication stage, the following solutions have been proposed.

  • Access to data and code. An increasing number of science funders require grantees to provide open access to the data and the code that they have collected. Likewise, authors are required to provide access to data and code at a growing number of journals, such as Science, Nature, and the American Journal of Political Science. Platforms such as Dataverse, the Open Science Framework and Github facilitate sharing of data and code. Some journals do not require access to data and code, but provide Open Science badges for articles that do provide access.
  • Pledges, such as the ‘21 word solution’, a statement designed by Simmons, Nelson and Simonsohn (2012) that authors can include in their paper to affirm that they have fully disclosed their data and methods: “We report how we determined our sample size, all data exclusions (if any), all manipulations, and all measures in the study.”
  • Full disclosure of methodological details of research submitted for publication, for instance through psychdisclosure.org, is now required by major journals in psychology.
  • Apps such as Statcheck, p-curve, p-checker, and r-index can help editors and reviewers detect fishy business. They also have the potential to improve research hygiene when researchers start using these apps to check their own work before they submit it for review.

As these solutions become more widely used, we should see the quality of research go up: the number of red flags should decrease and the number of gold stars should increase. This requires not only that reviewers and editors use the checklist, but most importantly, that researchers themselves use it as well.

The solutions above should be supplemented by better research practices before researchers submit their papers for review. In particular, two measures are worth mentioning:

  • Preregistration of research, for instance on Aspredicted.org. An increasing number of journals in psychology require research to be preregistered. Some journals guarantee publication of research regardless of its results after a round of peer review of the research design.
  • Increasing the statistical power of research is one of the most promising strategies to increase the quality of experimental research (Bakker, Van Dijk & Wicherts, 2012). In many fields and for many decades, published research has been underpowered, using samples of participants that are too small to reliably detect the reported effect sizes. Using larger samples reduces the likelihood of both false positives and false negatives.

A variety of institutional designs have been proposed to encourage the use of these solutions, including removing the incentives in researchers’ careers and in hiring and promotion decisions that reward questionable research practices, rewarding researchers for good conduct through badges, the adoption of voluntary codes of conduct, and the socialization of students and senior staff through teaching and workshops. Research funders, journals, editors, authors, reviewers, universities, senior researchers and students all have a responsibility in these developments.

References

Bakker, M., Van Dijk, A. & Wicherts, J. (2012). The Rules of the Game Called Psychological Science. Perspectives on Psychological Science, 7(6): 543–554.

Henrich, J., Heine, S.J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33: 61-135.

Ioannidis, J.P.A. (2005). Why Most Published Research Findings Are False. PLoS Medicine, 2(8): e124. http://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020124

Kerr, N.L. (1998). HARKing: Hypothesizing After Results are Known. Personality and Social Psychology Review, 2(3): 196-217.

Open Science Collaboration (2015). Estimating the Reproducibility of Psychological Science. Science, 349(6251): aac4716. http://www.sciencemag.org/content/349/6251/aac4716.full.html

Shadish, W.R., Cook, T.D., & Campbell, D.T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston, MA: Houghton Mifflin.

Simmons, J.P., Nelson, L.D., & Simonsohn, U. (2011). False positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22: 1359–1366.

Simmons, J.P., Nelson, L.D. & Simonsohn, U. (2012). A 21 Word Solution. Available at SSRN: http://ssrn.com/abstract=2160588

Wicherts, J.M., Veldkamp, C.L., Augusteijn, H.E., Bakker, M., Van Aert, R.C. & Van Assen, M.A.L.M. (2016). Researcher degrees of freedom in planning, running, analyzing, and reporting psychological studies: A checklist to avoid p-hacking. Frontiers in Psychology, 7: 1832. http://journal.frontiersin.org/article/10.3389/fpsyg.2016.01832/abstract



Filed under academic misconduct, experiments, fraud, incentives, open science, psychology, survey research

Four Reasons Why We Are Converting to Open Science

The Center for Philanthropic Studies, which I lead at VU Amsterdam, is converting to Open Science.

Open Science offers four advantages to the scientific community, nonprofit organizations, and the public at large:

  1. Access: we make our work more easily accessible for everyone. Our research serves public goods, which are served best by open access.
  2. Efficiency: we make it easier for others to build on our work, which saves time.
  3. Quality: we enable others to check our work, find flaws and improve it.
  4. Innovation: ultimately, open science facilitates the production of knowledge.

What does the change mean in practice?

First, the source of funding for contract research we conduct will always be disclosed.

Second, data collection – interviews, surveys, experiments – will follow a prespecified protocol. This includes the number of observations foreseen, the questions to be asked, the measures to be included, the hypotheses to be tested, and the analyses to be conducted. New studies will preferably be preregistered.

Third, data collected and the code used to conduct the analyses will be made public, through the Open Science Framework for instance. Obviously, personal or sensitive data will not be made public.

Fourth, results of research will preferably be published in open access mode. This does not mean that we will publish only in Open Access journals. Research reports and papers for academic journals will be made available online in working paper archives, as ‘preprint’ versions, or in other ways.

 

December 16, 2015 update:

A fifth reason, following directly from #1 and #2, is that open science reduces the costs of science for society.

See this previous post for links to our Giving in the Netherlands Panel Survey data and questionnaires.

 

July 8, 2017 update:

A public use file of the Giving in the Netherlands Panel Survey and the user manual are posted at the Open Science Framework.


Filed under academic misconduct, Center for Philanthropic Studies, contract research, data, fraud, incentives, methodology, open science, regulation, survey research

Why a high R Square is not necessarily better

Often I encounter academics who think that a high proportion of explained variance is the ideal outcome of a statistical analysis: in regression analyses, a high R2 is better than a low R2. In my view, the emphasis on a high R2 should be reduced. A high R2 should not be a goal in itself, because a higher R2 can easily be obtained by procedures that actually lower the external validity of the coefficients.

It is possible to increase the proportion of variance explained in regression analyses in several ways that do not in fact improve our ability to ‘understand’ the behavior we are seeking to ‘explain’ or ‘predict’. One way to increase the R2 is to remove anomalous observations, such as ‘outliers’, or to take people who say they ‘don’t know’ and treat them like the average respondent. Replacing missing data by mean scores or using multiple imputation procedures also often increases the R2. I have used these procedures in several papers myself, including some of my dissertation chapters.

But in fact outliers can be true values. I have seen quite a few that destroyed correlations and lowered R2 values while being valid observations: e.g., a widower donating a large amount of money to a charity after the death of his wife, a rare case of exceptional behavior for very specific reasons that seldom occur. In larger samples these outliers may become more frequent, affecting the R2 less strongly.

Also ‘Don’t Know’ respondents are often systematically different from the average respondent. Treating them as average respondents eliminates some of the real variance that would otherwise be hard to predict.

Finally, it is often possible to increase the proportion of variance explained by including more variables. This is particularly problematic if variables that are a result of the dependent variable are included as predictors. For instance, if network size is added to the prediction of volunteering, the R2 will increase. But a larger network not only increases volunteering; it is also a result of volunteering. Especially if the network questions refer to the present (do you know…) while the volunteering questions refer to the past (in the past year, have you…), it is dubious to ‘predict’ volunteering in the past by a measure of current network size.
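Both mechanisms can be made concrete in a toy simulation (illustrative only; the variable names, coefficients and sample size are my own assumptions): a single valid but extreme observation lowers the R2, so that deleting it ‘improves’ the fit while discarding a true value, and a ‘predictor’ that is partly a result of the outcome mechanically inflates the R2:

```python
import numpy as np

def r_squared(X, y):
    """R2 of an OLS fit with an intercept, via least squares."""
    X = np.column_stack([np.ones(len(y)), *X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1 - (y - X @ beta).var() / y.var()

rng = np.random.default_rng(1)
n = 500
motivation = rng.normal(size=n)                       # a genuine cause
volunteering = 0.4 * motivation + rng.normal(size=n)
r2_clean = r_squared([motivation], volunteering)

# 1. One valid but extreme case (the widower's gift) lowers the R2,
#    so deleting it 'improves' the fit while discarding a true value.
m_out = np.append(motivation, 0.0)
v_out = np.append(volunteering, 15.0)
r2_with_outlier = r_squared([m_out], v_out)

# 2. A 'predictor' that is partly a RESULT of the outcome inflates the R2.
network = 0.6 * volunteering + rng.normal(size=n)     # reverse causation
r2_inflated = r_squared([motivation, network], volunteering)

print(round(r2_with_outlier, 2), round(r2_clean, 2), round(r2_inflated, 2))
# outlier-included < clean < endogenous-predictor R2
```

The jump in the last R2 reflects reverse causation, not a better explanation of why people volunteer.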

As a reviewer, I subject authors reporting an R2 exceeding 40% to high-level scrutiny for dubious decisions in data handling and the inclusion of variables.

As a rule, R2 values tend to be higher at higher levels of aggregation, e.g., when analyzing cross-situational tendencies in behavior rather than specific behaviors in specific contexts, or when analyzing time-series data or macro-level data about countries rather than individuals. Why people do the things they do is often just very hard to predict, especially if you try to predict behavior in a specific case.


Filed under academic misconduct, data, methodology, regression analysis, survey research

Varieties of plagiarism

Academic misconduct figures prominently in the press this week: Peter Nijkamp, a well-known Dutch economist at VU University Amsterdam, supervised a dissertation in which self-plagiarism occurred, according to a ruling of an integrity committee of the National Association of Universities in the Netherlands. The complaint led two national newspapers to dig into Nijkamp’s work. NRC published an article by research journalist Frank van Kolfschooten, who took a small sample of Nijkamp’s publications and found 6 cases of plagiarism and 8 cases of self-plagiarism. Today De Volkskrant reports self-plagiarism in 60% of 115 articles co-authored by Nijkamp. VU University rector Frank van der Duyn Schouten said in a preliminary statement that he does not believe Nijkamp plagiarized on purpose, that the criteria for self-plagiarism have been changing in the past decades, and that they are currently not clear. The university ordered a full investigation of Nijkamp’s publications.


Nijkamp’s profile on Google Scholar is polluted: it counts 28,860 citations, but includes papers written by others, such as Zoltan Acs and Nobel laureate Daniel Kahneman. A Web of Knowledge author search yielded 3,638 citations of his 426 (co-authored) publications, 3,310 excluding self-citations. That’s 7.8 citations per article. His h-index is 29. Typically, Nijkamp appears as a co-author on publications. He is the single author of only one of his top 10 most cited articles, ranking 10th, with 58 citations.
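For readers unfamiliar with the metric: the h-index is the largest number h such that h of an author’s publications have each been cited at least h times. A minimal sketch, with hypothetical citation counts rather than Nijkamp’s actual record:

```python
def h_index(citations):
    """Largest h such that h publications each have at least h citations."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts, for illustration only:
print(h_index([58, 40, 33, 10, 4, 4, 1, 0]))  # 4
```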

The Nijkamp case looks different from another prominent case of self-plagiarism in economics, that of Bruno Frey. Frey submitted nearly identical research papers to different journals. Nijkamp seems to have allowed his many co-authors to copy and paste sentences and sometimes entire paragraphs from other articles he co-authored – which can be classified as self-plagiarism.

January 15, 2014 update: Nijkamp responded in a letter posted here that there may have been some flaws and accidents, but that these are to be expected in what he calls “the beautiful industry of academic publishing”.


Filed under academic misconduct, economics, VU University

More But Not Less: a University Research and Education Reform Proposal

Yes, the incentive structure in the higher education and research industry should be reformed in order to reduce the inflation of academic degrees and research. That much is clear from the increasing number of cases of outright fraud and academic misconduct, including more subtle forms of data manipulation and p-hacking, and the resulting rise in (false) positive publication bias. It is also clear from the declining number of professors employed by universities to teach the rising number of students, up to the PhD level. Yes, the increasing numbers of peer-reviewed journal publications and academic degrees awarded imply that the productivity of academia has increased in the past decades. But the marginal returns on investment are now approaching zero, or perhaps even becoming negative. The recent Science in Transition position paper identifies the issues. It is not enough to diagnose the symptoms; it is time for reform. This will take years and an international approach, as Pauline van der Meer-Mohr, chair of the board of Erasmus University Rotterdam, said recently in a radio interview. Here are some ideas.

  1. Evaluate the quality of research rather than the quantity. Examine a proportion of publications through audits, screening them for results that are too good to be true, statistical analysis and reporting errors, and the availability of data and coding for replication. Rankings of universities are often based in part on numbers of publications. Universities that want to climb on the rankings will promote or hire more productive researchers. Granting agencies and universities should reduce the influence of rankings and the current publication culture on promotion and granting decisions. Prohibit the payment of bonuses for publications (including those in specific high-impact journals).
  2. Evaluate the quality of education rather than the quantity. Examine a proportion of courses through mystery shoppers, screening them for tests that are too easy to pass, accuracy of grades for assignments, and the availability of student guidelines in course manuals. Rankings of universities are often based on evaluations by course-enrolled students. Universities that want to climb on the rankings will please the students and the evaluators. Accreditation bodies should reduce the self-selection of evaluators for academic programs. Prohibit the payment of departments and universities for letting students pass.
  3. We can have our cake and eat it too. Let all students pass courses if the requirements for attendance at meetings and submission of assignments are met, but give grades based on performance. This change puts students back in control and reduces the tendency among instructors to help students pass.


Filed under academic misconduct, fraud, incentives, politics

How Incentives Lead Us Astray in Academia

PDF of this post

The Teaching Trap

I did it again this week – I tried to teach students. Yes, it’s my job, and I love it. But that’s completely my own fault: if it were up to the incentives I encounter in the academic institution where I work, it would be far better not to spend time on teaching at all. For my career in academia, the thing that counts most heavily is how many publications in top journals I can realize. For some, this is even the only thing that counts: their promotion depends solely on the number of publications. Last week, going home on the train, I overheard a young researcher from the medical school of our university say to a colleague: “I would be a sucker to spend time on teaching!”

I remember what I did when I was their age. I worked at another university in an era where excellent publications were not yet counted by the impact factors of journals. My dissertation supervisor asked me to teach a Sociology 101 class, and I spent all of my time on it. I loved it. I developed fun class assignments with creative methods. I gave weekly writing assignments to students and scribbled extensive comments in the margins of their essays. Students learned and wrote much better essays at the end of the course than at the beginning.

A few years later things started to change. We were told to ‘extensify’ teaching: spend less time as teachers while keeping the students as busy as ever. I developed checklists for students (‘Does my essay have a title?’ – ‘Is the reference list in alphabetical order and complete?’) and codes to grade essays with, ranging from ‘A. This sentence is not clear’ to ‘Z. Remember the difference between substance and significance: a p-value only tells you something about statistical significance, and not necessarily something about the effect size’. It was efficient for me – grading was much faster using the codes – and it kept students busy – they could figure out themselves where they could improve their work. It was less attractive for students, though, and they progressed less than they used to. The extensification was required because the department spent too much time on teaching relative to the compensation it received from the university. I realized then that the department and my university earn money with teaching: for every student that passes a course the department earns money from the university, because for every student that graduates the university earns money from the Ministry of Education.

This incentive structure is still in place, and it is completely destroying the quality of teaching and the value of a university diploma. As a professor I can save a lot of time by just letting students pass the courses I teach without trying to have the students learn anything: by not giving them feedback on their essays, by not having them write essays, by not having them do a retake after a failed exam, or even by grading their exams with at least a ‘passed’ mark without reading what they wrote.


The awareness that incentives lead us astray has become clearer to me ever since the ‘extensify’ movement dawned. The latest illustration came earlier this academic year, when I talked to a group of people interested in doing dissertation work as external PhD candidates. The university earns a premium from the Ministry of Education for each PhD dissertation that is defended successfully. Back in the old days, way before I got into academia, a dissertation was an eloquent monograph. When I graduated, the dissertation had become a set of four connected articles introduced by a literature review and closed by a conclusion and discussion chapter. Today, the dissertation is a compilation of three articles, of which one may be a literature review. The process of diploma inflation has worked its way up to the PhD level. The minimum level of quality required for dissertations has also declined. The procedures in place to check whether the research work by external PhD candidates conforms to minimum standards are weak. And why should they be strong, if stringent criteria lower the profit for universities?

The Rat Race in Research

Academic careers are evaluated and shaped primarily by the number of publications, the impact factors of the journals in which they are published, and the number of citations by other researchers. At higher ranks the size and prestige of research grants starts to count as well. The dominance of output evaluations not only works against the attention paid to teaching, but also has perverse effects on research itself. The goal of research these days is not so much to get closer to the truth but to get published as frequently as possible in the most prestigious journals. A classic example of the replacement of substantive with instrumental rationality or the inversion between means and ends: an instrument becomes a goal in itself.[1] At some universities researchers can earn a salary bonus for each publication in a ‘top journal’. This leads to opportunistic behavior: salami tactics (thinly slicing the same research project in as many publications as possible), self-plagiarism (publishing the same or virtually the same research in different journals), self-citations, and even outright data fabrication.

What about the self-correcting power of science? Will reviewers not weed out the bad apples? Clearly not. The number of retractions in academic journals is increasing and not because reviewers are able to catch more cheaters. It is because colleagues and other bystanders witness misbehavior and are concerned about the reputation of science, or because they personally feel cheated or exploited. The recent high-profile cases of academic misbehavior as well as the growing number of retractions show it is surprisingly easy to engage in sloppy science. Because incentives lead us astray, it really comes down to our self-discipline and moral standards.

As an author of academic research articles, I have rarely encountered reviewers who doubted the validity of my analyses. Never did I encounter reviewers who asked for a more elaborate explanation of the procedures used or who wanted to see the data themselves. Only once did I receive a request, from a graduate student at another university, for a dataset and the code I used in an article. I do feel good about being able to provide the original data and the code, even though they were located on a computer that I had not used for three years and were stored with software that has received 7 updates since that time. But why haven’t I received such requests on other occasions?

As a reviewer, I recently tried to replicate analyses of a publicly available dataset reported in a paper. It was the first time I ever went to the trouble of locating the data, interpreting the description of the data handling in the manuscript and replicating the analyses. I arrived at different estimates and discovered several omissions and other mistakes in the analyses. Usually it is not even possible to replicate results because the data on which they are based are not publicly available. But they should be made available. Secret data are not permissible.[2] Next time I review an article I might ask: ‘Show, don’t tell’.

As an author, I have experienced how easy and tempting it is to engage in p-hacking: “exploiting – perhaps unconsciously – researcher degrees-of-freedom until p < .05”.[3] It is not really difficult to publish a paper with a fun finding from an experiment that was initially designed to test a hypothesis predicting another finding.[4] The hypothesis was not confirmed, and that result was less appealing than the fun finding. I adapted the title of the paper to reflect the fun finding, and people loved it.

The temptation to report fun findings and not to report rejections is enhanced by the behavior of reviewers and journal editors. On multiple occasions I have encountered reviewers who did not like my findings when they led to rejections of hypotheses – usually hypotheses they had promulgated in their own previous research. The original publication of a surprising new finding is rarely followed by a null-finding. Still, I try to publish null-findings, and increasingly so.[5] It may take a few years, and the article may end up in a B-journal.[6] But persistence pays off. Recently a colleague took the lead on an article in which we replicate that null-finding using five different datasets.

In the field of criminology, it is considered a trivial fact that crime increases with its profitability and decreases with the risk of detection. Academic misbehavior is like crime: the more profitable it is, and the lower the risk of getting caught, the more attractive it becomes. The low detection risk and high profitability create strong incentives. There must be an iceberg of academic misbehavior. Shall we crack it under the waterline or let it hit a cruise ship full of tourists?


[1] In 1917, this was Max Weber’s criticism of capitalism in The Protestant Ethic and the Spirit of Capitalism.

[2] As Graham Greene wrote in Our Man in Havana: “With a secret remedy you don’t have to print the formula. And there is something about a secret which makes people believe… perhaps a relic of magic.”

[3] The description is from Uri Simonsohn, http://opim.wharton.upenn.edu/~uws/SPSP/post.pdf

[4] The title of the paper is ‘George Gives to Geology Jane: The Name Letter Effect and Incidental Similarity Cues in Fundraising’. It appeared in the International Journal of Nonprofit and Voluntary Sector Marketing, 15 (2): 172-180.

[5] On average, 55% of the coefficients reported in my own publications are not significant. The figure increased from 46% in 2005 to 63% in 2011.

[6] It took six years before the paper ‘Trust and Volunteering: Selection or Causation? Evidence from a Four Year Panel Study’ was eventually published in Political Behavior (32 (2): 225-247), after initial rejections at the American Political Science Review and the American Sociological Review.

2 Comments

Filed under academic misconduct, fraud, incentives, law, methodology, psychology

Risk factors for fraud and academic misconduct in the social sciences

This note (also available in pdf here) aims to feed the discussion about how to deal with fraud and other forms of academic misconduct in the wake of the Stapel and Smeesters affair and the publication of the report by the Schuyt Commission of the Royal Dutch Academy of Sciences (KNAW).

The recent fraud cases in psychology (the report of the Levelt committee that investigated the Stapel fraud is here: http://www.tilburguniversity.edu/nl/nieuws-en-agenda/finalreportLevelt.pdf; read more on Retraction Watch here) not only call the credibility of that particular field of science into question, but also damage the reputation of social science research generally. The KNAW report urges universities to educate employees and students in academic honesty but does not suggest implementing a specific policy to detect fraud and other forms of academic misconduct. The diversity in research practices between disciplines makes it difficult to impose a general policy to detect and deter misconduct. However, skeptics may view the reluctance of the KNAW to increase scrutiny as a way to cover up fraud and misconduct. Universities and science in general run a serious risk of losing their credibility in society if they do not deal with misconduct. With every new case that comes to light the public will ask: how is it possible that this case was not detected and prevented? Anticipating a large-scale national investigation, universities could screen their employees using a list of risk factors for fraud and misconduct. This screening exercise may give a rough sense of how prevalent and serious academic misconduct is at their institution. Below I give some suggestions for such risk factors, relying on research on academic misconduct.

At present it is unclear how prevalent and serious academic misconduct is at universities. It is difficult to obtain complete, valid and reliable estimates of the prevalence and severity of academic misconduct. Just as in crime outside the walls of academia, it is likely that there is a dark number for academic misconduct that does not come to light, either because there are no victims or because the victims or other witnesses have no incentive to report misconduct or an incentive not to report it. Relying on a survey among 435 European economists (a 17% response rate), Feld, Necker & Frey (2012) report that less than a quarter of all forms of academic misconduct is reported. There is no official registration of cases of academic misconduct. Cases of misconduct are sometimes covered by news media or by academics on blogs like Retraction Watch (http://retractionwatch.wordpress.com/). Using surveys, researchers have tried to estimate misconduct relying on self-reports and peer reports. In a Gallup 2008 survey among NIH grantees, 7.4% of the respondents reported suspected misconduct (Wells, 2008). Other surveys suggest a much higher incidence of misconduct. John, Loewenstein and Prelec (2012) conducted a study among psychologists with incentives for truth-telling and found that 36% admitted to having engaged in at least one ‘questionable research practice’, a much higher incidence than the 9.5% reported by Fanelli (2009). The research available shows that fraud is certainly not unique to experimental (social) psychology, as the high-profile cases of Stapel, Smeesters and Sanna (of the University of Michigan) might seem to suggest. Fraud occurs in many fields of science. Retraction Watch profiles the cases of Jan Hendrik Schön in nanotechnology, Marc Hauser in biology, Hwang Woo-suk in stem cell research, Jon Sudbø and Dipak Das in medicine, and many other researchers working in the medical and natural sciences.

What forms of misconduct should be distinguished? Below is a list of behaviors that are mentioned in discussions on academic dishonesty and the code of conduct of the Association of Universities in the Netherlands (VSNU).

  • Fabrication of data. Stapel fabricated ‘data’: he claimed data were collected in experiments while in fact no experiment was conducted and no data were collected. In less severe cases, researchers fabricate data points and add them to a real dataset. List et al. (2001) report that 4% of economists admit having fabricated data. A similar estimate emerges from the more recent survey by Feld, Necker & Frey (2012). John, Loewenstein & Prelec report that 1.7% of psychologists admit fabrication of data. However, from this number, they estimate the true prevalence to be around 9%.
  • Omission of data points. Smeesters admitted to having reworked datasets such that the hypotheses were confirmed, e.g. by fabricating and adding ‘data points’ that reduced the p-value and by omitting those that increased it. John, Loewenstein & Prelec report that 43.4% of psychologists admit this.
  • Invalid procedures for data handling. Errors in recoding, reporting or interpreting, inspired by and leading to support for the hypotheses. Research by Bakker & Wicherts (2011) shows this is quite common in psychology: 18% of statistical results published in 2008 were incorrectly reported, commonly in the direction of the hypothesis favored by the author.
  • ‘Data snooping’: ending data collection as soon as a significant result is reached, before the target sample size is achieved. This increases the likelihood of false positives or Type I errors (Strube, 2006). John, Loewenstein & Prelec report that 58% of psychologists admit this.
  • Cherry picking: not reporting on data that were collected because the results did not support the hypothesis. John, Loewenstein & Prelec report that 50% of psychologists admit this. Cherry picking results in the file drawer problem: the ‘unexpected’ results disappear into a drawer.
  • ‘HARKing’: Hypothesizing After the Results are Known (Kerr, 1998): reporting an unexpected finding in a paper as having been predicted from the start. John, Loewenstein & Prelec report that 35% of psychologists admit this.
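The ‘data snooping’ problem above can be made concrete with a short simulation. The sketch below is my own illustration (not from any of the cited papers): every simulated study has a true null hypothesis, so an honest single test should reject in about 5% of studies, yet peeking at the data after every batch of observations and stopping at the first p < .05 rejects far more often.

```python
# Illustrative simulation of 'data snooping' (optional stopping):
# testing repeatedly while data come in and stopping as soon as
# p < .05 inflates the false-positive (Type I) error rate.
import random
import statistics as stats
from math import sqrt

random.seed(42)
Z_CRIT = 1.96  # two-sided critical value for alpha = .05

def significant(sample):
    """One-sample z-test against mu = 0, with known sigma = 1."""
    z = stats.fmean(sample) * sqrt(len(sample))
    return abs(z) > Z_CRIT

def false_positive_rate(n_sims=2000, batch=10, max_n=100, peek=True):
    """Fraction of null-true studies that end up 'significant'."""
    hits = 0
    for _ in range(n_sims):
        sample, found = [], False
        while len(sample) < max_n:
            sample += [random.gauss(0, 1) for _ in range(batch)]
            if peek and significant(sample):  # look at the data early
                found = True
                break
        if found or significant(sample):
            hits += 1
    return hits / n_sims

# Both rates should be ~5% under honest testing; peeking pushes the
# first one far above the nominal level.
print("peeking after every 10 observations:", false_positive_rate(peek=True))
print("single test at n = 100:            ", false_positive_rate(peek=False))
```

The only difference between the two runs is *when* the researcher looks at the data; the data-generating process is identical, which is exactly why optional stopping is a questionable research practice rather than an innocent efficiency gain.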

All of the above forms of misconduct lead to artificially strong positive results that are difficult to replicate (Jha, 2012; Simmons, Nelson & Simonsohn, 2011). The positive publication bias is enhanced by high-impact journals that want ‘novel findings’ and refuse to publish (failed) replications. In addition to forms of misconduct that lead to positive publication bias, there are several other forms of misconduct:

  • Plagiarism. Cut & paste of text without quotation marks and/or proper references.
  • Double publication. Sending essentially the same manuscript to different journals without informing them and accepting simultaneous publication without cross-references. Ironically, Bruno Frey, the third author of the Feld, Necker & Frey (2012) paper cited above, has engaged in this form of misconduct on several occasions. The Frey case is documented extensively by Olaf Storbeck on his Economics Intelligence blog (http://economicsintelligence.com/2012/03/19/self-plagiarism-bruno-frey-gets-away-with-a-slap-on-the-wrist/).
  • Undeserved authorship. Putting the name of a co-author on a paper who did not contribute to the paper. List et al. lumped together undeserved authorship and simultaneous submission to two journals, and report that 7 to 10% of economists have engaged in these behaviors.
  • Not disclosing conflicts of interest (e.g., reviewing your own paper, a paper to which you contributed or a paper by a close colleague; sponsorship of the research by a party with interests in a certain outcome).
  • Not observing professional codes of conduct. Each academic discipline has its own code of conduct. The content of these codes varies widely. Being aware of the code is phase 1; knowledge of its content is phase 2; observing it is phase 3.

Trends in misconduct

The recent Stapel and Smeesters cases suggest that misconduct is increasing. While Giner-Sorolla (2012) argues that the problems so vividly put on the agenda in this ‘year of horrors’ (Wagenmakers, 2012) are not new at all, Steen (2011) shows that the number of retractions of papers from academic journals covered in PubMed has increased sharply in the past years. These are the cases that form the tip of the iceberg, because journal editors considered the evidence for misconduct convincing. Whether the iceberg has in fact grown is not clear. Fanelli (2012) shows that negative results published in ISI journal articles are disappearing from most disciplines and countries. Most troubling is that the bias towards positive results in journal articles from the Netherlands is stronger than in many other countries (OR: 1.16, reference category: US). Also, the bias is stronger in the Social Sciences (OR: 2.14; reference category: Space Science) than in most other disciplines – though less strong than in Neuroscience (OR: 3.16), Psychology (OR: 2.99) and Economics (OR: 2.65).

Characteristics of those who engage in misconduct

Little is known about the characteristics of those who engage in misconduct. List et al. (2001) find virtually no significant associations between self-reported misconduct and characteristics of economists. Stroebe, Postmes and Spears (2012) compared cases of academics caught for fraud and identified a set of common characteristics of these cases: the fraudsters were highly respected as researchers, published journal articles prolifically, advanced very quickly in their careers, and had perfect datasets. Nosek, Spies & Motyl (2012) vividly illustrate the social dilemma for young researchers trying to build a career with novel findings that they cannot replicate. Pretty much the same sketch emerges from an analysis of retracted publications in PubMed (Steen, 2011). While Stapel and Smeesters seem to have been isolated fraudsters, Steen (2011) finds that a fraudster whose PubMed publication has been retracted “more frequently publishes with at least one co-author who also has fraudulent publications”.

What can be done to reduce misconduct?

Nosek, Spies & Motyl (2012) and Stroebe (Hamel, 2012; Witlox, 2012) are skeptical about self-correction in science. At present, the benefits of misconduct are too high and the risk of getting caught is simply too low. The fraudsters lined up by Stroebe et al. were able to pass peer review because the procedures were not stringent enough. Reviewers should be more aware of the possibility of fraud (Matías-Guiu & García-Ramos, 2010). Audits in which random samples of journal articles are drawn and checked would be a solution because they increase the detection risk, Stroebe et al. argue. Food scientist Katan proposed such an audit at a KNAW conference on data sharing in 2011 (KNAW, 2012, p. 47). However, audits are costly procedures. Another recommendation is that replications should be made public. This has also been the dominant response in academic psychology to the Stapel case (Wicherts, 2011). Researchers are often unwilling or reluctant to share data (Wicherts, Borsboom, Kats & Molenaar, 2006). At present the incentives discourage researchers from sharing data. Researchers save time by not making their data available (Firebaugh, 2007; Giner-Sorolla, 2012). The costs required to make data available are often not budgeted. If research funders such as the Netherlands Organization for Scientific Research (NWO) impose a data-sharing requirement, this will create an additional cost for researchers. This makes it improbable that scientists will adopt the solution without force. At present, reluctance to share data indicates lower quality of research (Wicherts, Bakker & Molenaar, 2011). While data sharing is desirable for replication purposes, it is not something that universities can impose, and it only works in the long run. Journal editors and reviewers could insist on data sharing, however.
This also goes for the idea to require a power analysis for experiments (Ioannidis, 2005; Ioannidis & Trikalinos, 2007; Simmons, Nelson, & Simonsohn, 2011), the proposal to publish reviews alongside the articles (Mooneyham, Franklin, Mrazek, & Schooler, 2012), and various other ideas proposed by Nosek & Bar-Anan (2012), such as a completely open-access data repository. An even longer-term proposal is that researchers pre-register their studies and indicate in advance the analyses they intend to conduct (Wagenmakers, Wetzels, Borsboom, Van der Maas, & Kievit, 2012). Generally speaking, academic misconduct is likely to be more prevalent and more severe as the benefits of misconduct are higher, the costs are lower, and the detection risk is lower. Stroebe makes this point in two recent interviews (Hamel, 2012; Witlox, 2012). The increasing publication pressure in many sciences increases the benefit of misconduct (John, Loewenstein & Prelec, 2012). The lack of attention to detail from overburdened reviewers, from co-authors who are happy to score an additional publication, and from dissertation supervisors loaded with work reduces the detection risk. The rat race increases the likelihood that the isolated cases of Stapel and Smeesters will turn out to be the ones who were stupid enough not to organize into a pack.
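The power-analysis requirement mentioned above can be sketched in a few lines. This is my own illustrative sketch using the standard normal-approximation formula for a two-group comparison with equal group sizes, not code from any of the cited papers; it reproduces the figures in the introduction (about 1300 participants in total for a small effect of d = 0.2 at 95% power, and more than 300 for d = 0.4).

```python
# Minimal power-analysis sketch (normal approximation, two equal groups):
# n_per_group = 2 * (z_{alpha/2} + z_beta)^2 / d^2
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.95):
    """Required sample size per group to detect effect size d (Cohen's d)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = z.inv_cdf(power)           # desired power
    return ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)

print("per group, d = 0.2:", n_per_group(0.2))  # a 'small' effect
print("per group, d = 0.4:", n_per_group(0.4))
```

Requiring such a calculation before data collection is cheap for authors and gives reviewers a concrete number to check the reported sample size against.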

Lifelong anesthesia researcher Mutch (2011: 784) advises the following remedies against misconduct: “good mentoring, appropriately trained and diligent support staff, blinded assessment of data, data review by all investigators, a consensus agreement on data interpretation, a vigorous and independent Research Office, effective internal and external committees to assess adherence to protocols, and strong departmental leadership.” Conversely, it is likely that in the absence of these conditions, there are more opportunities for misconduct. Given all of the above, I propose the following list of conditions that increase the potential for fraud and academic misconduct. The list can be used as a checklist or screening device for academic publications. Obviously, an article with a high score is not necessarily fraudulent, but it does warrant more detailed attention. I encourage universities, journal editors, and reviewers to use this list, and to suggest additions or modifications. It is by no means intended to be a definitive list, and replication is necessary.

For each condition, the list gives the potential misconduct it signals and a detection method.

  • 1. The researcher worked alone: nobody else had or has access to the ‘data’, and co-authors were not involved in the ‘data collection’ and/or ‘data analysis’. Potential misconduct: data fabrication, as well as less serious forms of misconduct. Detection: ask co-authors.
  • 2. The ‘data’ were not collected by others, but by the researcher. Potential misconduct: data fabrication. Detection: ask co-authors and co-workers.
  • 3. There are no witnesses of the ‘data’ collection. Potential misconduct: data fabrication. Detection: ask co-authors and co-workers.
  • 4. The raw ‘data’ (documents, fieldwork notes, questionnaires, videos, electronic data files) are not available (anymore): they are reported confidential, missing, lost, or located on a previous computer. Potential misconduct: data fabrication. Detection: ask author and co-authors; check data archive.
  • 5. The statistics are ‘too good to be true’: the p-values of statistical tests are more often just below .050 than would be expected by chance (Krawczyk, 2008; Simonsohn, 2012). Potential misconduct: data fabrication, selective omission of data points, cherry picking, and harking. Detection: compare p-values to the expected distribution.
  • 6. The research only finds support for the hypotheses (Fanelli, 2012). Potential misconduct: cherry picking and harking. Detection: count the proportion of hypotheses supported.
  • 7. There is no fieldwork report or lab log entry available. Potential misconduct: data fabrication. Detection: check data archive and lab log; ask author.
  • 8. Data are provided, but the original code or a description of the procedures followed by the author is not available, or it is unclear to others how to replicate the research. Potential misconduct: data fabrication, cherry picking, harking. Detection: ask author.
  • 9. Replication of the research is impossible with the available raw data and the procedures and analyses described by the author. Potential misconduct: cherry picking, harking. Detection: ask author.
  • 10. Replication of the research is possible but yields no support for the original findings. Potential misconduct: cherry picking, harking. Detection: try to replicate the findings.
  • 11. The research appeared in high-impact journals. Potential misconduct: misconduct with higher benefits. Detection: check impact factor.
  • 12. The author is early in his/her career. Potential misconduct: misconduct with higher benefits. Detection: check career stage.
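Condition 5 (‘too good to be true’ statistics) can be checked mechanically. Below is a hypothetical sketch of a caliper-style check in the spirit of Krawczyk (2008) and Simonsohn (2012): under honest reporting, p-values in a narrow bin just below .05 should not be dramatically more frequent than those in the matching bin just above it. The function name, bin width, and the toy numbers are my own illustration, not anything from the original post.

```python
# Hypothetical caliper-style check: compare the number of reported
# p-values just below .05 with the number just above it. A large
# excess below the threshold is a red flag worth a closer look.
from math import comb

def caliper_check(p_values, width=0.01):
    """Return (count below .05, count above .05, one-sided binomial p)."""
    below = sum(1 for p in p_values if 0.05 - width <= p < 0.05)
    above = sum(1 for p in p_values if 0.05 < p <= 0.05 + width)
    n = below + above
    if n == 0:
        return below, above, 1.0
    # Probability of at least `below` of the n values landing under the
    # threshold if both narrow bins were equally likely (fair coin).
    p_tail = sum(comb(n, k) for k in range(below, n + 1)) / 2 ** n
    return below, above, p_tail

# Toy example: a suspicious pile-up just under .05.
reported = [0.041, 0.043, 0.044, 0.046, 0.047, 0.048,
            0.049, 0.049, 0.053, 0.058]
below, above, p_tail = caliper_check(reported)
print(below, above, round(p_tail, 4))
```

A single article rarely reports enough p-values for this test to have much power; the check is most informative when applied to a researcher's or a journal's whole body of reported results.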

December 17 update:
If you’re involved with academic journals as an editor or editorial board member, read this COPE discussion paper drafted by Elizabeth Wager in April 2011: “How should editors respond to plagiarism?”
December 20 update:
In the new issue of Mens & Maatschappij, Aafke Komter has written an article with a similar research question. Well worth reading!
February 22 update:
The p-hacking debate is still raging. Read more about it here. Colleagues at the Department of Communication Science did a preliminary analysis of publications in their field and also found the blob just below .05.

