Category Archives: survey research

Tools for the Evaluation of the Quality of Experimental Research

pdf of this post

Experiments can have important advantages over other research designs. The most important advantage concerns internal validity: random assignment to treatment conditions reduces the attribution problem and strengthens causal inference. An additional advantage is that experimental control reduces the heterogeneity of observed treatment effects.

The extent to which these advantages are realized in the data depends on the design and execution of the experiment. Experiments have a higher quality if the sample size is larger and if the theoretical concepts are measured more reliably and validly. The sufficiency of the sample size can be checked with a power analysis. Most effect sizes in the social sciences are small (d = 0.2); to detect such an effect at conventional significance levels (p < .05) with 95% power, a sample of about 1,300 participants is required (see appendix). Even for a stronger effect (d = 0.4), more than 300 participants are required. The reliability of normative scale measures can be judged with Cronbach’s alpha. A rule of thumb for unidimensional scales is that alpha should be at least .63 for a scale consisting of 4 items, .68 for 5 items, .72 for 6 items, .75 for 7 items, and so on. The validity of measures should be justified theoretically and can be checked with a manipulation check, which should reveal a sizeable and significant association with the treatment variables.
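These rules of thumb are easy to verify. The sketch below (plain Python; the function names are mine, not from any package) reproduces the required sample sizes with a normal-approximation power formula, and shows that the alpha thresholds correspond to an average inter-item correlation of about .30:

```python
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.95):
    """Approximate n per group for a two-sided two-sample t-test,
    via the normal approximation: n = 2 * ((z_alpha/2 + z_power) / d) ** 2."""
    z = NormalDist()
    return 2 * ((z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)) / d) ** 2

# Small effect (d = 0.2): roughly 650 per group, about 1,300 in total.
print(round(2 * n_per_group(0.2)))
# Stronger effect (d = 0.4): still more than 300 in total.
print(round(2 * n_per_group(0.4)))

def cronbach_alpha(k, r_bar=0.30):
    """Alpha for k items with average inter-item correlation r_bar
    (the Spearman-Brown prophecy logic behind the rule of thumb)."""
    return k * r_bar / (1 + (k - 1) * r_bar)

for k in (4, 5, 6, 7):
    print(k, round(cronbach_alpha(k), 2))  # thresholds .63, .68, .72, .75
```

An exact calculation with the noncentral t distribution (e.g., via a power-analysis package) gives nearly identical numbers for samples of this size.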

The advantages of experiments are reduced if assignment to treatment is non-random and treatment effects are confounded. In addition, a variety of other problems may endanger internal validity. Shadish, Cook & Campbell (2002) provide a useful list of such problems.

It should also be noted that experiments can have important disadvantages. The most important disadvantage is that the external validity of the findings is limited to the participants and the setting in which their behavior was observed. This disadvantage can be reduced by creating more realistic decision situations, for instance in natural field experiments, and by recruiting (non-‘WEIRD’) samples of participants that are more representative of the target population. As Henrich, Heine & Norenzayan (2010) noted, results based on samples of participants in Western, Educated, Industrialized, Rich and Democratic (WEIRD) countries have limited validity in the discovery of universal laws of human cognition, emotion or behavior.

Recently, experimental research paradigms have received fierce criticism. Results often cannot be reproduced (Open Science Collaboration, 2015), and publication bias is ubiquitous (Ioannidis, 2005). It has become clear that there is a lot of undisclosed flexibility in all phases of the empirical cycle. While these problems have been discussed widely in communities of researchers conducting experiments, they are by no means limited to one particular methodology or mode of data collection. It is likely that they also occur in communities of researchers using survey or interview data.

In the positivist paradigm that dominates experimental research, the empirical cycle starts with the formulation of a research question. To answer the question, hypotheses are formulated based on established theories and previous research findings. Then the research is designed, data are collected, a predetermined analysis plan is executed, results are interpreted, the research report is written and submitted for peer review. After the usual round(s) of revisions, the findings are incorporated in the body of knowledge.

The validity and reliability of results from experiments can be compromised in two ways. The first is by juggling with the order of phases in the empirical cycle. Researchers can decide to amend their research questions and hypotheses after they have seen the results of their analyses. Kerr (1998) labeled the practice of reformulating hypotheses HARKing: Hypothesizing After Results are Known. Amending hypotheses is not a problem when the goal of the research is to develop theories to be tested later, as in grounded theory or exploratory analyses (e.g., data mining). But in hypothesis-testing research HARKing is a problem, because it increases the likelihood of publishing false positives. Chance findings are interpreted post hoc as confirmations of hypotheses that a priori are rather unlikely to be true. When these findings are published, they are unlikely to be reproducible by other researchers, creating research waste and, worse, reducing the reliability of published knowledge.

The second way the validity and reliability of results from experiments can be compromised is by misconduct and sloppy science within various stages of the empirical cycle (Simmons, Nelson & Simonsohn, 2011). The data collection and analysis phase as well as the reporting phase are most vulnerable to distortion by fraud, p-hacking and other questionable research practices (QRPs).

  • In the data collection phase, observations that (if kept) would lead to undesired conclusions or non-significant results can be altered or omitted. Also, fake observations can be added (fabricated).
  • In the analysis of data researchers can try alternative specifications of the variables, scale constructions, and regression models, searching for those that ‘work’ and choosing those that reach the desired conclusion.
  • In the reporting phase, things go wrong when the search for alternative specifications and the sensitivity of the results to decisions made in the data analysis phase are not disclosed.
  • In the peer review process, there can be pressure from editors and reviewers to cut reports of non-significant results, or to collect additional data supporting the hypotheses and the significant results reported in the literature.

As a result of these QRPs, null findings are less likely to be published, and published research is biased towards positive findings that confirm the hypotheses. Published findings are not reproducible, and when a replication attempt is made, they turn out to be less significant, less often positive, and smaller in effect size (Open Science Collaboration, 2015).

Alarm bells, red flags and other warning signs

Some of the forms of misconduct mentioned above are very difficult for reviewers and editors to detect. When observations are fabricated or omitted from the analysis, only inside information, very sophisticated data detectives, or stupidity of the authors can help us. Many other forms of misconduct are also difficult to prove. While smoking guns are rare, we can look for clues. I have developed a checklist of warning signs and good practices that editors and reviewers can use to screen submissions (see below). The checklist uses terminology that is not specific to experiments but applies to all forms of data. While a high number of warning signs in itself does not prove anything, it should alert reviewers and editors. There is no norm for the number of flags. The table below only mentions the warning signs; the paper version of this blog post also shows a column with the positive poles. Those who would like to count good practices and reward authors for a higher number can count gold stars rather than red flags. The checklist was developed independently of the checklist that Wicherts et al. (2016) recently published.

Warning signs

  • The power of the analysis is too low.
  • The results are too good to be true.
  • All hypotheses are confirmed.
  • P-values are just below critical thresholds (e.g., p < .05).
  • A groundbreaking result is reported but not replicated in another sample.
  • The data and code are not made available upon request.
  • The data are not made available upon article submission.
  • The code is not made available upon article submission.
  • Materials (manipulations, survey questions) are described superficially.
  • Descriptive statistics are not reported.
  • The hypotheses are tested in analyses with covariates and results without covariates are not disclosed.
  • The research is not preregistered.
  • No details of an IRB procedure are given.
  • Participant recruitment procedures are not described.
  • Exact details of time and location of the data collection are not described.
  • A power analysis is lacking.
  • Unusual / non-validated measures are used without justification.
  • Different dependent variables are analyzed in different studies within the same article without justification.
  • Variables are (log)transformed or recoded in unusual categories without justification.
  • Numbers of observations mentioned at different places in the article are inconsistent. Loss or addition of observations is not justified.
  • A one-sided test is reported when a two-sided test would be appropriate.
  • Test-statistics (p-values, F-values) reported are incorrect.

With the growing number of retractions of articles reporting experimental research in scholarly journals, awareness of the fallibility of peer review as a quality control mechanism has increased. Communities of researchers employing experimental designs have formulated solutions to these problems. In the review and publication stage, the following solutions have been proposed.

  • Access to data and code. An increasing number of science funders require grantees to provide open access to the data and the code that they have collected. Likewise, authors are required to provide access to data and code at a growing number of journals, such as Science, Nature, and the American Journal of Political Science. Platforms such as Dataverse, the Open Science Framework and Github facilitate sharing of data and code. Some journals do not require access to data and code, but provide Open Science badges for articles that do provide access.
  • Pledges, such as the ‘21 word solution’, a statement designed by Simmons, Nelson and Simonsohn (2012) that authors can include in their paper to affirm that they have fully disclosed their methods: “We report how we determined our sample size, all data exclusions (if any), all manipulations, and all measures in the study.”
  • Full disclosure of methodological details of research submitted for publication, for instance through psychdisclosure.org, is now required by major journals in psychology.
  • Apps such as Statcheck, p-curve, p-checker, and r-index can help editors and reviewers detect fishy business. They also have the potential to improve research hygiene when researchers start using these apps to check their own work before they submit it for review.
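The logic behind a checker like Statcheck can be sketched in a few lines: recompute the p-value from the reported test statistic and flag inconsistencies. The sketch below is a simplified, hypothetical illustration using a normal approximation; the real tools parse manuscripts automatically and use the exact t, F and chi-square distributions:

```python
from statistics import NormalDist

def recomputed_p(statistic, two_sided=True):
    """Recompute a p-value from a reported test statistic, using the
    normal approximation (adequate for large df; an exact check would
    use the t, F or chi-square distribution instead)."""
    p = 2 * (1 - NormalDist().cdf(abs(statistic)))
    return p if two_sided else p / 2

# A hypothetical reported result "t(148) = 2.10, p < .01" should raise a flag:
p = recomputed_p(2.10)
if p > 0.01:
    print(f"flag: recomputed p = {p:.3f} is inconsistent with the claim p < .01")
```

Researchers can apply the same check to their own manuscripts before submission, which is where the hygiene gains mentioned above would come from.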

As these solutions become more commonly used, we should see the quality of research go up: the number of red flags should decrease and the number of gold stars should increase. This requires not only that reviewers and editors use the checklist, but most importantly, that researchers themselves use it.

The solutions above should be supplemented by better research practices before researchers submit their papers for review. In particular, two measures are worth mentioning:

  • Preregistration of research, for instance on Aspredicted.org. An increasing number of journals in psychology require research to be preregistered. Some journals guarantee publication of research regardless of its results after a round of peer review of the research design.
  • Increasing the statistical power of research is one of the most promising strategies to increase the quality of experimental research (Bakker, Van Dijk & Wicherts, 2012). In many fields and for many decades, published research has been underpowered, using samples of participants that are not large enough to detect the reported effect sizes. Using larger samples reduces the likelihood of both false positives and false negatives.

A variety of institutional designs have been proposed to encourage the use of the solutions mentioned above, including removing incentives for questionable research practices from researchers’ careers and from hiring and promotion decisions, rewarding researchers for good conduct through badges, the adoption of voluntary codes of conduct, and the socialization of students and senior staff through teaching and workshops. Research funders, journals, editors, authors, reviewers, universities, senior researchers and students all have a responsibility in these developments.

References

Bakker, M., Van Dijk, A. & Wicherts, J. (2012). The Rules of the Game Called Psychological Science. Perspectives on Psychological Science, 7(6): 543–554.

Henrich, J., Heine, S.J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33: 61-135.

Ioannidis, J.P.A. (2005). Why Most Published Research Findings Are False. PLoS Medicine, 2(8): e124. http://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020124

Kerr, N.L. (1998). HARKing: Hypothesizing After Results are Known. Personality and Social Psychology Review, 2: 196-217.

Open Science Collaboration (2015). Estimating the Reproducibility of Psychological Science. Science, 349(6251): aac4716. http://www.sciencemag.org/content/349/6251/aac4716.full.html

Shadish, W.R., Cook, T.D., & Campbell, D.T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston, MA: Houghton Mifflin.

Simmons, J.P., Nelson, L.D., & Simonsohn, U. (2011). False positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22: 1359–1366.

Simmons, J.P., Nelson, L.D. & Simonsohn, U. (2012). A 21 Word Solution. Available at SSRN: http://ssrn.com/abstract=2160588

Wicherts, J.M., Veldkamp, C.L., Augusteijn, H.E., Bakker, M., Van Aert, R.C. & Van Assen, M.A.L.M. (2016). Researcher degrees of freedom in planning, running, analyzing, and reporting psychological studies: A checklist to avoid p-hacking. Frontiers in Psychology, 7: 1832. http://journal.frontiersin.org/article/10.3389/fpsyg.2016.01832/abstract


Filed under academic misconduct, experiments, fraud, incentives, open science, psychology, survey research

Introducing Mega-analysis

How to find truth in an ocean of correlations – with breakers, still waters, tidal waves, and undercurrents? In the old age of responsible research and publication, we would collect estimates reported in previous research, and compute a correlation across correlations. Those days are long gone.

In the age of rat race research and publication it became increasingly difficult to do a meta-analysis. It is a frustrating experience for anyone who has conducted one: endless searches on the Web of Science and Google Scholar to collect all published research, inputting the estimates in a database, finding that a lot of fields are blank, emailing authors for zero-order correlations and other statistics they failed to report in their publications, and getting very little response.

Meta-analysis is not only a frustrating experience; it is also a bad idea when results that authors do not like do not get published. A host of techniques has been developed to detect and correct publication bias, but the problem that we do not know the results that are never reported is not easily solved.

As we enter the age of open science, we no longer have to rely on the far-from-perfect cooperation of colleagues who have moved to a different university, left academia, died, or think you’re trying to prove them wrong and destroy your career. We can simply download all the raw data and analyze them.

Enter mega-analysis: include all the data points relevant for a certain hypothesis, cluster them by original publication, date, country, or any potentially relevant property of the research design, and add the substantial predictors you find documented in the literature. The results reveal not only the underlying correlations between substantial variables, but also the differences between studies, periods, countries and design properties that affect these correlations.
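The contrast with meta-analysis can be made concrete with a toy simulation (all data hypothetical; a real mega-analysis would add multilevel structure for the clustering by study, which this sketch omits): meta-analysis averages per-study summary statistics, while mega-analysis pools the raw observations.

```python
import random

random.seed(1)

def corr(xs, ys):
    """Pearson correlation, computed from scratch (no packages assumed)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def simulate_study(n, slope):
    """One hypothetical study: predictor x and outcome y = slope * x + noise."""
    xs = [random.gauss(0, 1) for _ in range(n)]
    ys = [slope * x + random.gauss(0, 1) for x in xs]
    return xs, ys

# Two hypothetical studies of the same association, differing in strength.
studies = [simulate_study(400, 0.5), simulate_study(400, 0.2)]

# Meta-analytic view: average the per-study correlations.
per_study = [corr(xs, ys) for xs, ys in studies]
meta_estimate = sum(per_study) / len(per_study)

# Mega-analytic view: pool the raw observations, then correlate once.
# Study-level properties (design, country, period) would enter as predictors.
pooled_x = [x for xs, _ in studies for x in xs]
pooled_y = [y for _, ys in studies for y in ys]
mega_estimate = corr(pooled_x, pooled_y)

print(per_study, meta_estimate, mega_estimate)
```

The mega-analytic estimate weights every observation equally and allows between-study differences to be modeled directly, rather than averaged away.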

The method itself is not new. In epidemiology, Steinberg et al. (1997) labeled it ‘meta-analysis of individual patient data’. In human genetics, genome-wide association studies (GWAS) by large international consortia are common examples of mega-analysis.

Mega-analysis includes the data from the file drawer: the papers that never saw the light of day after they were put in, and the universe of papers that were never written because the results were unpublishable.

If meta-analysis gives you an estimate for the universe of published research, mega-analysis can be used to detect just how unique that universe is in the Milky Way. My prediction would be that correlations in published research are mostly further from zero than the same correlations in a mega-analysis.
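This prediction follows from a simple selection argument: when only significant estimates get published, the published estimates of a small true effect are inflated. A hypothetical simulation (all parameter values illustrative):

```python
import random

random.seed(42)

TRUE_D, N_PER_GROUP, STUDIES = 0.2, 50, 5000  # small effect, underpowered studies

def observed_d():
    """One underpowered two-group study; returns the estimated effect size.
    Unit variances, so the mean difference estimates d directly."""
    t = [random.gauss(TRUE_D, 1) for _ in range(N_PER_GROUP)]
    c = [random.gauss(0.0, 1) for _ in range(N_PER_GROUP)]
    return sum(t) / N_PER_GROUP - sum(c) / N_PER_GROUP

results = [observed_d() for _ in range(STUDIES)]

# Crude significance filter: |d| > 1.96 * sqrt(2/n), the normal-approximation
# threshold for p < .05 with known unit variances.
threshold = 1.96 * (2 / N_PER_GROUP) ** 0.5
published = [d for d in results if abs(d) > threshold]

mean_all = sum(results) / len(results)            # close to the true 0.2
mean_published = sum(published) / len(published)  # substantially inflated
print(mean_all, mean_published)
```

The full set of estimates averages close to the true effect; the ‘published’ subset does not, which is exactly the gap a mega-analysis of all raw data would reveal.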

Mega-analysis bears great promise for the social sciences. Samples for population surveys are large, which enables optimal learning from variations in sampling procedures, data collection mode, and questionnaire design. It is time for a Global Social Science Consortium that pools all of its data. As an illustration, I have started a project on the Open Science Framework that mega-analyzes generalized social trust. It is a public project: anyone can contribute. We have reached the mark of 1 million observations.

The idea behind mega-analysis originated from two different projects. In the first project, Erik van Ingen and I analyzed the effects of volunteering on trust, to check whether results from an analysis of the Giving in the Netherlands Panel Survey (Van Ingen & Bekkers, 2015) would replicate with data from other panel studies. We found essentially the same results in five panel studies, although subtle differences emerged in the quantitative estimates. In the second project, with Arjen de Wit and colleagues from the Center for Philanthropic Studies at VU Amsterdam, we analyzed the effects of volunteering on well-being as part of the EC-FP7 funded ITSSOIN study. We collected 845,733 survey responses from 154,970 different respondents in six panel studies, spanning 30 years (De Wit, Bekkers, Karamat Ali & Verkaik, 2015). We found that volunteering is associated with a 1% increase in well-being.

In these projects, the data from different studies were analyzed separately. I realized that we could learn much more if the data are pooled in one single analysis: a mega-analysis.

References

De Wit, A., Bekkers, R., Karamat Ali, D., & Verkaik, D. (2015). Welfare impacts of participation. Deliverable 3.3 of the project: “Impact of the Third Sector as Social Innovation” (ITSSOIN), European Commission – 7th Framework Programme, Brussels: European Commission, DG Research.

Van Ingen, E. & Bekkers, R. (2015). Trust Through Civic Engagement? Evidence From Five National Panel Studies. Political Psychology, 36(3): 277-294.

Steinberg, K.K., Smith, S.J., Stroup, D.F., Olkin, I., Lee, N.C., Williamson, G.D. & Thacker, S.B. (1997). Comparison of Effect Estimates from a Meta-Analysis of Summary Data from Published Studies and from a Meta-Analysis Using Individual Patient Data for Ovarian Cancer Studies. American Journal of Epidemiology, 145: 917-925.


Filed under data, methodology, open science, regression analysis, survey research, trends, trust, volunteering

Four Reasons Why We Are Converting to Open Science

The Center for Philanthropic Studies I am leading at VU Amsterdam is converting to Open Science.

Open Science offers four advantages to the scientific community, nonprofit organizations, and the public at large:

  1. Access: we make our work more easily accessible for everyone. Our research serves public goods, which are served best by open access.
  2. Efficiency: we make it easier for others to build on our work, which saves time.
  3. Quality: we enable others to check our work, find flaws and improve it.
  4. Innovation: ultimately, open science facilitates the production of knowledge.

What does the change mean in practice?

First, the source of funding for contract research we conduct will always be disclosed.

Second, data collection – interviews, surveys, experiments – will follow a prespecified protocol. This includes the number of observations foreseen, the questions to be asked, the measures to be included, the hypotheses to be tested, and the analyses to be conducted. New studies will preferably be preregistered.

Third, data collected and the code used to conduct the analyses will be made public, through the Open Science Framework for instance. Obviously, personal or sensitive data will not be made public.

Fourth, results of research will preferably be published in open access mode. This does not mean that we will publish only in Open Access journals. Research reports and academic papers will be made available online in working paper archives, as ‘preprint’ versions, or in other ways.

 

December 16, 2015 update:

A fifth reason, following directly from #1 and #2, is that open science reduces the costs of science for society.

See this previous post for links to our Giving in the Netherlands Panel Survey data and questionnaires.

 

July 8, 2017 update:

A public use file of the Giving in the Netherlands Panel Survey and the user manual are posted at the Open Science Framework.


Filed under academic misconduct, Center for Philanthropic Studies, contract research, data, fraud, incentives, methodology, open science, regulation, survey research

Why a high R Square is not necessarily better

Often I encounter academics who think that a high proportion of explained variance is the ideal outcome of a statistical analysis: in regression analyses, a high R2 is better than a low R2. In my view, the emphasis on a high R2 should be reduced. A high R2 should not be a goal in itself, because a higher R2 can easily be obtained with procedures that actually lower the external validity of the coefficients.

It is possible to increase the proportion of variance explained in regression analyses in several ways that do not in fact improve our ability to ‘understand’ the behavior we are seeking to ‘explain’ or ‘predict’. One way to increase the R2 is to remove anomalous observations, such as ‘outliers’, or to recode people who say they ‘don’t know’ as average respondents. Replacing missing data with mean scores or using multiple imputation procedures also often increases the R2. I have used these procedures in several papers myself, including some of my dissertation chapters.

But in fact outliers can be true values. I have seen quite a few that destroyed correlations and lowered R2 values while being valid observations: for example, a widower donating a large amount of money to a charity after the death of his wife – a rare case of exceptional behavior for very specific reasons that seldom occur. In larger samples such outliers become more frequent, affecting the R2 less strongly.

Also ‘Don’t Know’ respondents are often systematically different from the average respondent. Treating them as average respondents eliminates some of the real variance that would otherwise be hard to predict.
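A stylized numerical version of the widower example (all numbers invented) shows how a single valid but extreme observation can flatten an otherwise near-perfect R2:

```python
def corr(xs, ys):
    """Pearson correlation, computed from scratch (no packages assumed)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical data: household income (x 1,000 euros) vs. annual donations (euros).
income    = [20, 30, 40, 50, 60, 70, 80]
donations = [50, 80, 110, 140, 170, 200, 230]   # a neat linear pattern

r2_clean = corr(income, donations) ** 2          # essentially 1.0

# The widower: modest income, one exceptionally large (but valid) gift.
income_all = income + [35]
donations_all = donations + [25000]
r2_with_outlier = corr(income_all, donations_all) ** 2

print(r2_clean, r2_with_outlier)  # the R2 collapses
```

Deleting that case would restore the beautiful R2, but the deleted observation is exactly the kind of real behavior the model fails to predict.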

Finally, it is often possible to increase the proportion of variance explained by including more variables. This is particularly problematic if variables that are the result of the dependent variable are included as predictors. For instance if network size is added to the prediction of volunteering the R Square will increase. But a larger network not only increases volunteering; it is also a result of volunteering. Especially if the network questions refer to the present (do you know…) while the volunteering questions refer to the past (in the past year, have you…) it is dubious to ‘predict’ volunteering in the past by a measure of current network size.
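The mechanical side of adding predictors is easy to demonstrate: regress a pure-noise outcome on pure-noise predictors, and the R2 still climbs towards k/(n-1) as the number of predictors k grows, even though nothing real is being explained. A self-contained sketch (no packages assumed):

```python
import random

random.seed(7)

def r_squared(y, predictors):
    """R2 of an OLS regression of y on the given predictors (plus an
    intercept): project centered y onto an orthonormal basis of the
    centered predictor columns (Gram-Schmidt)."""
    n = len(y)
    my = sum(y) / n
    yc = [v - my for v in y]
    ss_tot = sum(v * v for v in yc)
    basis = []
    for col in predictors:
        mc = sum(col) / n
        c = [v - mc for v in col]
        for q in basis:                      # remove components already spanned
            proj = sum(a * b for a, b in zip(c, q))
            c = [a - proj * b for a, b in zip(c, q)]
        norm = sum(a * a for a in c) ** 0.5
        if norm > 1e-12:
            basis.append([a / norm for a in c])
    ss_explained = sum(sum(a * b for a, b in zip(yc, q)) ** 2 for q in basis)
    return ss_explained / ss_tot

n = 200
y = [random.gauss(0, 1) for _ in range(n)]                        # pure noise outcome
noise = [[random.gauss(0, 1) for _ in range(n)] for _ in range(40)]  # pure noise predictors

r2_5 = r_squared(y, noise[:5])    # expected around 5 / (n - 1)
r2_40 = r_squared(y, noise)       # expected around 40 / (n - 1), zero true signal
print(r2_5, r2_40)
```

The adjusted R2 corrects for exactly this artifact, which is one more reason not to treat a high raw R2 as a mark of quality.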

As a reviewer, I subject authors reporting an R2 exceeding 40% to extra scrutiny for dubious decisions in data handling and in the inclusion of variables.

As a rule, R Squares tend to be higher at higher levels of aggregation, e.g. when analyzing cross-situational tendencies in behavior rather than specific behaviors in specific contexts; or when analyzing time-series data or macro-level data about countries rather than individuals. Why people do the things they do is often just very hard to predict, especially if you try to predict behavior in a specific case.


Filed under academic misconduct, data, methodology, regression analysis, survey research

Lunch Talk: “Generalized Trust Through Civic Engagement? Evidence from Five National Panel Studies”

Does civic engagement breed trust? According to a popular version of social capital theory, civic engagement should produce generalized trust among citizens. In a new paper accepted for publication in Political Psychology, Erik van Ingen (Tilburg University) and I put this theory to the test by examining the causal connection between civic engagement and generalized trust using multiple methods and multiple (prospective) panel datasets. We found participants to be more trusting, but this was most likely caused by selection effects: the causal effects of civic engagement on trust were very small or non-significant. Where small causal effects were found, they turned out not to last. We found no differences across types of organizations and only minor variations across countries.

At the PARIS colloquium of the Department of Sociology at VU University on November 12, 2013 (Room Z531, 13.00-14.00), I will not just be talking about the paper published in Political Behavior and the new paper forthcoming in Political Psychology (here is the prepublication version). In addition to the substantive story about a research project, there is also a story about the process of getting a paper accepted with a null finding that goes against received wisdom. That story is quite informative about the publication factory that we are all in.


Filed under data, psychology, survey research, trust, volunteering

Update: Giving in the Netherlands Panel Survey User Manual

A new version of the User Manual for the Giving in the Netherlands Panel Survey is now available: version 2.2.

The GINPS12 questionnaire is here (in Dutch).


Filed under data, empathy, experiments, helping, household giving, methodology, philanthropy, principle of care, survey research, trends, trust, volunteering, wealth

You are welcome to use our data

“Can I please use your data on giving and volunteering?” Yes you can! In fact, you are very welcome to use the data we have collected at the Center for Philanthropic Studies. The data from the Giving in the Netherlands Panel Study (GINPS) on households are currently being used by students in Amsterdam, Rotterdam and Utrecht in statistics tutorials, by students in Amsterdam for Master Thesis projects, and by PhD candidates and established researchers around the world for academic research. The panel design allows for dynamic analyses of giving and volunteering, answering questions like:

  • How does volunteering affect the size and composition of social networks?
  • Are giving and volunteering substitutes or complements?
  • How does household giving change as people age?

To get access to the data, here’s what you will need to do.

Note that if you just need aggregate statistics on giving and volunteering you will not need access to the micro-level data. You can probably find the data you need in our biennial ‘Giving in the Netherlands’ book. A summary in English of the 2015 edition is here.

Questionnaires:

The data on corporate social responsibility and corporate philanthropy are less well documented, but also available to researchers.


Filed under corporate social responsibility, data, experiments, household giving, methodology, survey research, volunteering