How to organize your data and code

A key element of open science is to provide access to the code and data that produce the results you present.

Guidelines for the organization of your data and code are based on two general principles: simplicity and explanation. Make verification of your results as simple as possible, and provide clear documentation so that people who are not familiar with your data or research can execute the analyses and understand the results.

Simplify the file structure. Organize the files you provide in the simplest possible structure. Ideally, a single file of code produces all results you report. Conduct all analyses in the same software package when possible. Sometimes, however, you may need different programs and multiple files to obtain the results. If you need multiple files, provide a readme.txt file that lists which files provide which results.
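
If you need a readme.txt, it can be very short. Here is a hypothetical sketch; the file names are illustrative and not from an actual project:

  readme.txt
    master.do       runs the files below in order and produces all results
    01_prepare.do   data preparation: recoding and labeling of variables
    02_results.do   produces the tables and figures in the paper
    03_appendix.do  produces the appendix tables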

Deidentify the data. In the data file, eliminate the variables that contain information that can identify specific individuals, unless these persons have given explicit consent to be identified. Do not post data files that include IP addresses, names, email addresses, residential addresses, zip codes, telephone numbers, social security numbers, or any other information that may identify specific persons. Do not deidentify the data manually; instead, create a code file for all preprocessing of the data, so that the deidentification itself is reproducible.
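
A minimal sketch of such a preprocessing file in Stata, assuming hypothetical file and variable names:

  * deidentify.do: creates the data file that can be posted publicly
  use "Data\Raw\survey_raw.dta", clear
  * drop variables that could identify participants
  drop name email ip_address zipcode phone
  save "Data\Pooled\survey_deidentified.dta", replace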

Organize the code. In the code, create at least three sections (a skeleton do-file combining the sections follows the list below):

  1. Preliminaries. The first section includes commands that install packages that are required for the analyses but do not come with the software.
    • Include a line that users can adapt, identifying the path where data and results are stored. Use the same path for data and code. For example:
      • cd "C:\Users\rbs530\surfdrive\Shared\VolHealthMega"
    • The first section also includes commands that specify the exact names of the data files required for the analysis. For example:
      • use "Data\Pooled\VolHealthMega.dta", clear
  2. Data preparation. The second section includes commands that create and recode variables. This section also assigns labels to variables and their values, so that their meaning is clear.
    • For example:
      • label variable llosthlt "Lost health from t-2 to t-1"
  3. Results. The third section includes the commands that produce the results reported in the paper. Add comments to identify which commands produce which results.
    • For example:
      • *This produces Table 1:
      • summ *
  4. Appendix results. An optional fourth section contains the commands that produce the results reported in the Appendices.
    • For example:
      • *Appendix Table S12a:
      • xtreg phealth Dvolkeep Dvoljoin Dvolquit year l.phealth l2.phealth l3.phealth l4.phealth, fe
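
Putting the sections together, a skeleton do-file could look like the sketch below. It reuses the example commands above; the commented ssc install and xtset lines are illustrative assumptions (the package and the panel identifier are hypothetical):

  * Section 1: Preliminaries
  * ssc install estout   // install required packages that do not come with Stata (illustrative)
  cd "C:\Users\rbs530\surfdrive\Shared\VolHealthMega"
  use "Data\Pooled\VolHealthMega.dta", clear

  * Section 2: Data preparation
  * xtset pid year   // declare the panel structure (hypothetical id variable)
  label variable llosthlt "Lost health from t-2 to t-1"

  * Section 3: Results
  * This produces Table 1:
  summ *

  * Section 4: Appendix results
  * Appendix Table S12a:
  xtreg phealth Dvolkeep Dvoljoin Dvolquit year l.phealth l2.phealth l3.phealth l4.phealth, fe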

Explain ad hoc decisions. Throughout the code, add comments that explain the reasoning behind choices you made that you did not preregister. For example: “collapsing across conditions 1 and 2 because they are quantitatively similar and not significantly different”.

Double-check before submission. When you are done, ask your supervisor to execute the code. Does it produce the results reported in the paper? Can your supervisor understand your decisions? If so, you are ready.

Locate your materials. Identify the URL that contains the data and code that produce the results you report. If you write an empirical journal article, add the URL to the abstract as well as to the data section. Identify the software package and version that you used to produce the results.

Set up a repository. Create a repository, preferably on the Open Science Framework, https://osf.io/ where you post all materials reviewers and readers need to verify and replicate your paper: the deidentified data file, the code, stimulus materials, and online appendix tables and figures. Here is a template you can use for this purpose: https://osf.io/3g7e5/. Help the reader navigate through all the materials by including a brief description of each part.

Thanks to Rense Corten for helpful suggestions.


10 Things You Need to Know About Open Science

1. What is Open Science?

Open science is science as it should be: as open as possible. The current practice of open science is that scientists provide open access to publications, the data they analyzed, the code that produces the results, and the materials. Open science is no magic, no secrets, no hidden cards, no tricks; what you see is what you get. Fully open science is sharing everything, including research ideas, grant applications, reviews, funding decisions, failed experiments and null results.

2. Why should I preregister my research?

When you preregister your research, you put your ideas, expectations, hypotheses, and your plans for data collection and analyses in an archive (e.g., on AsPredicted, https://aspredicted.org/, or on the Open Science Framework, https://help.osf.io/hc/en-us/articles/360019738834-Create-a-Preregistration) before you have executed the study. A preregistration allows you to say: “See, I told you so!” afterwards. Preregister your research if you have theories and methods you want to test, and if you want to make testable predictions about the results.

3. Won’t I get scooped when I post a preliminary draft of my ideas?
No, when you put your name and a date in the file, it will be obvious that you were the first person who came up with the ideas.

4. Where can I post the data I collect and the code for the analysis?

On Dataverse, https://dataverse.org/, the Open Science Framework, https://osf.io/, and on Zenodo, https://zenodo.org/. As a researcher you can use these platforms for free, and they are not owned by commercial enterprises. You keep control over the content of the repositories you create, and you can give them a DOI so others can cite your work.

5. Won’t I look like a fool when others find mistakes in my code?
No, on the contrary: you will look like a good scientist when you correct your mistakes. https://retractionwatch.com/2021/03/08/authors-retract-nature-majorana-paper-apologize-for-insufficient-scientific-rigour/
You will look like a fool, however, if you report results that nobody can reproduce and stubbornly persist in claiming support for your result.

6. Does open science mean that I have to publish all of my data?

No, please do not publish all of your data! Your data probably contain details that could identify individual persons. Make sure you pseudonymize these persons and deidentify the data by removing such details before you share them.

7. Why should I post a preprint of my work in progress?

If you post a preprint of your work in progress, alert others to it, and invite them to review it, you will get comments and suggestions that improve your work. You will also make your work citable before you have published the paper. Read more about preprints here: https://plos.org/open-science/preprints/

8. Where should I submit my research paper?

Submit your research paper to a journal that doesn’t exploit authors and reviewers. Commercial publishers do not care about the quality of the research, only about making a profit by selling it back to you. A short history is here https://www.theguardian.com/science/2017/jun/27/profitable-business-scientific-publishing-bad-for-science.

9. How can I get a publication before I have done my study?
Submit it as a registered report to a journal that offers this format. Learn about registered reports here https://www.cos.io/initiatives/registered-reports and here https://doi.org/10.1027/1864-9335/a000192

10. Can I get sued when I post the published version of a paper on my website?

It has never happened to me, and I have posted accepted and published versions on my website https://renebekkers.wordpress.com/publications/ for more than 20 years now.


Inequality and philanthropy

In the public debate, the rise in inequality is linked to criticism of private philanthropy, not only as a strategy to reduce feelings of guilt, but also as a way to evade taxes, buy goodwill, and favor causes that benefit the rich rather than society as a whole.

Rutger Bregman famously called for increased taxation of the rich, instead of praise for their philanthropy. The two are not mutually exclusive, as the graph below shows. In fact, there is no relationship at all between the volume of philanthropy in a country and the tax burden in that country.

The data on government expenditure are from the IMF. The data on philanthropy for 24 countries in this graph come from a report by the Charities Aid Foundation in the UK (CAF, 2016). These data are far from ideal, because they were gathered in different years (2010-2014) and using different methods. So, for what it is worth: the correlation is zero (r = .00). The United States is a clear outlier, but even when we exclude it, the correlation remains zero (r = -.01).

More reliable evidence comes from the Gallup World Poll, on the proportion of the population that gives to charity in different countries. This is currently the only source of data on engagement in philanthropy in a sizeable number of countries around the globe, even though the poll includes only one question to measure charitable giving. The surprising finding is that countries in which citizens pay higher taxes have a higher proportion of the population engaging in charitable giving. The correlation for 141 countries is r = .28. Within Europe, the association is even stronger, r = .49.

As you can see, and as others have noted (Salamon, Sokolowski & Haddock, 2017), the facts do not support the political ideology that keeping the state small makes people care for each other. On the contrary: countries in which citizens contribute less to public welfare through taxes are also less involved in charity. To me, this positive relationship does not imply causation. I don’t see how paying taxes makes people more charitable, or vice versa. What it means is that the citizens of some countries are more prepared to give to charity and are also willing to pay more taxes.

Some further evidence on the relation between redistribution effort and philanthropy comes from an analysis of data from the Gallup World Poll and the OECD, collected for a grant proposal to conduct global comparative research on philanthropy.

The correlation between income inequality after taxes and the proportion of the population giving to charity is weakly negative, r = -.10 across 137 countries. In contrast, income inequality before taxes shows a weakly positive relation with the proportion of the population that gives to charity, r = .06.

This implies that in countries where the income distribution becomes more equal as a result of the income tax (‘redistribution effort’), a higher proportion of the population gives to charity. However, the correlation is not very strong (r = .20). The figure below visualizes the association.

The chart implies that countries in which the population is more engaged with charitable causes are more effectively reducing income inequality. My interpretation of that association is a political one. A stronger reduction of income inequality is the result of the effort and effectiveness of progressive income taxation, a political choice ultimately supported by the preferences of voters. The same prosocial preferences and aversion to inequality lead people to engage in charitable giving. Restoring justice and fairness in an unfair and mean world are important motivations for people to give. Countries in which a higher proportion of the electorate votes for a reduction of income inequality are more charitable.

Strictly speaking, the chart does not tell you whether income inequality causes giving to be lower. However, there is enough evidence supporting a negative causal influence of income inequality on generalized trust (Leigh, 2006; Gustavsson & Jordahl, 2008; Barone & Mocetti, 2016; Stephany, 2017; Hastings, 2018; Yang & Xin, 2020). Countries such as the UK and US, in which political laissez-faire has allowed income inequality to rise, have become markedly less trusting over time. Trust is an important precondition for giving – more about that in another post.

This post builds on Values of Philanthropy, a keynote address I gave at the ISTR Conference in Amsterdam on July 12, 2018. Thanks to Beth Breeze and Nicholas Duquette for conversations about these issues.

References

Barone, G., & Mocetti, S. (2016). Inequality and trust: new evidence from panel data. Economic Inquiry, 54(2), 794-809. https://doi.org/10.1111/ecin.12309

Bekkers, R. (2018). Values of Philanthropy. Keynote Address, ISTR Conference, July 12, 2018. Amsterdam: Vrije Universiteit Amsterdam.

CAF (2016). Gross Domestic Philanthropy: An International Analysis of GDP, tax and giving. West Malling: Charities Aid Foundation. https://www.cafonline.org/docs/default-source/about-us-policy-and-campaigns/gross-domestic-philanthropy-feb-2016.pdf

Gustavsson, M., & Jordahl, H. (2008). Inequality and trust in Sweden: Some inequalities are more harmful than others. Journal of Public Economics, 92(1-2), 348-365. https://doi.org/10.1016/j.jpubeco.2007.06.010

Hastings, O.P. (2018). Less Equal, Less Trusting? Reexamining Longitudinal and Cross-sectional Effects of Income Inequality on Trust in U.S. States, 1973–2012. Social Science Research, 74: 77-95. https://doi.org/10.1016/j.ssresearch.2018.04.005

Leigh, A. (2006). Trust, inequality and ethnic heterogeneity. Economic Record, 82(258), 268-280. https://doi.org/10.1111/j.1475-4932.2006.00339.x

OECD (2018). Tax Revenue, % of GDP, https://data.oecd.org/chart/5do5

Salamon, L.M., Sokolowski, S.W., & Haddock, M.A. (2017). Explaining Civil Society Development: A Social Origins Approach. Baltimore, MD: Johns Hopkins University Press. https://www.amazon.com/Explaining-Civil-Society-Development-Approach/dp/1421422980

Stephany, F. (2017). Who are your Joneses? Socio-specific income inequality and trust. Social Indicators Research, 134(3), 877-898. https://doi.org/10.1007/s11205-016-1460-9

Yang, Z., & Xin, Z. (2020). Income inequality and interpersonal trust in China. Asian Journal of Social Psychology, 23(3), 253-263. https://doi.org/10.1111/ajsp.12399


Altruism at a cost: how closing donation centers reduces blood donor loyalty

To what extent is blood donation motivated by altruism? Donating blood is ‘giving life’ and it is often seen as an act of sacrifice. In a new paper forthcoming in the journal Health & Place, co-authored with Tjeerd Piersma, Eva-Maria Merz and Wim de Kort, we checked whether blood donors continue to give blood when the sacrifice becomes more costly.

To see whether they do so, we tracked blood donors in the Netherlands between 2010 and 2018. The number of donation centers operated by Sanquin, the national blood collection agency in the Netherlands, decreased by 46%, from 252 in 2010 to 136 in 2018.

We found that donors who were used to giving blood at a location that was closed were much more likely to stop giving blood. The difference was very large: donors for whom the nearest donation center was closed were 50% more likely to have lapsed in the year after the closure than donors for whom the nearest center remained open (15.3% vs. 10.2%).

The percentage of lapsed donors after closing the nearest donation center steadily increased with each extra kilometer distance to the new nearest donation center. Of the donors whose nearest donation center closed, 11.6% lapsed when the distance increased by less than one kilometer while 32.8% lapsed when the distance increased by more than nine kilometers.

Because the blood of O-negative donors can be used for transfusions to recipients of all other blood types, they are called ‘universal donors’. We expected a lower lapsing risk for universal donors as costs increase. This would be evidence of altruism.

At first we thought that we had found such evidence. Universal donors were less likely than other donors to stop donating blood after the nearest donation center was closed. We also found that universal donors were more likely to continue giving blood as the travel distance increased, up to 5 kilometers. At longer distances, the pattern was less clear.

However, when we included the number of requests for donations as a covariate, the difference largely disappeared. This means that O-negative donors are more likely to continue to give blood because they receive more requests to donate from the blood bank. The sensitivity to these requests was very similar for universal and other donors.

We conducted mediation tests to establish that closing a donation center reduced donor loyalty because the travel distance to the nearest location increased, and to establish that universal donors were more loyal because they received more requests to donate blood.

One of the reviewers asked for a matching analysis. This was a good idea, and it provided a nice learning experience: I had never done such an analysis before. The results were pretty close to the regression results, by the way: no difference between universal and other donors matched on the number of requests.
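
For readers curious what such an analysis can look like: below is a minimal sketch of nearest-neighbor propensity score matching in Stata. The variable names (lapsed, universal, n_requests) are hypothetical and not taken from our code; the full set of analyses is in the Stata log file linked further below.

  * compare lapsing rates of universal (O-negative) and other donors,
  * matched on the number of donation requests received (hypothetical variable names)
  teffects psmatch (lapsed) (universal n_requests), atet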

In sum, we found evidence that:

  1. Blood donors are strongly sensitive to the costs of donating: closing a donation location substantially increased the risk of lapsing;
  2. Blood donors are less likely to lapse when they receive more requests;
  3. ‘Universal’, O-negative donors were less likely to lapse because they received more requests to come donate blood;
  4. Universal donors are as sensitive to requests and to the costs of donating as other donors.

The analyses are based on register data from Sanquin on all blood donors (n = 259,172) and changes in the geographical locations of blood donation centers in the Netherlands over the past decade. Because these data contain personal information, we cannot share them for legal reasons. We do provide the complete Stata log file of all analyses at https://osf.io/58qzk/. The paper is available here: https://osf.io/preprints/socarxiv/na3ys/. This paper is part of the PhD dissertation by Tjeerd Piersma, available here.

We started this project thinking that closures of donation centers might be natural experiments. But we soon found out that this was not the case. Which donation centers were closed was decided by Sanquin after a cost-benefit calculation. Centers serving fewer donors were more likely to be closed.

As a result, the centers that were closed were located in less densely populated areas. It was a fortunate coincidence for the blood collection agency that donors in less densely populated areas were more loyal. They were willing to spend more time travelling to the next nearest donation center.

For our research, however, the lower lapsing risk for donors in areas where donation centers were more likely to be closed created a correlation between closure and our dependent variable, donor loyalty. End of story for the natural experiment.

We spent some time searching for instrumental variables, such as rent prices for offices where donation locations are housed. Many office locations in the Netherlands are vacant and available at reduced rent.

The number of empty offices in a municipality could reduce the costs of keeping a donation center open. However, we found that the percentage of empty offices in a municipality was not related to closure of donation centers. If you have thoughts on other potential IVs, let us know!


Nonprofit Trends Survey – the Netherlands

Has your organization used the latest technology in recent months to keep going during the corona crisis? Or should your organization learn to handle technology better? How confident are you that your organization will emerge stronger from the pandemic? What worries do you have for the coming months?

We would like to hear your opinions and experiences on these matters in the international Nonprofit Trends Survey. The survey is conducted by the Urban Institute in Washington, DC among charities and other nonprofits, and takes place in six countries: the United States, the United Kingdom, Canada, Germany, France, and the Netherlands. The Center for Philanthropic Studies at Vrije Universiteit Amsterdam has translated the questionnaire for the study in the Netherlands. With the new study, we hope to get a picture of the use of data and technology in the Netherlands, so that we can compare it with other countries.

Respondents have a chance to win Amazon gift cards. Click here to participate in the survey: https://urbaninstitute.fra1.qualtrics.com/jfe/form/SV_86S09YMGgaT2EpD?Q_Language=NL

Many thanks in advance!


A Data Transparency Policy for Results Based on Experiments


Transparency is a key condition for robust and reliable knowledge, and the advancement of scholarship over time. Since January 1, 2020, I have been the Area Editor for Experiments submitted to Nonprofit & Voluntary Sector Quarterly (NVSQ), the leading journal in the interdisciplinary field of nonprofit research. In order to improve the transparency of research published in NVSQ, the journal is introducing a policy requiring authors of manuscripts reporting on data from experiments to provide, upon submission, access to the data and the code that produced the results reported. This will be a condition for the manuscript to proceed through the blind peer review process.

The policy will be implemented as a pilot for papers reporting results of experiments only. For manuscripts reporting on other types of data, the submission guidelines will not be changed at this time.

 

Rationale

This policy is a step forward in strengthening research in our field through greater transparency about research design, data collection, and analysis. Greater transparency of data and analytic procedures will produce fairer, more constructive reviews and, ultimately, even higher quality articles published in NVSQ. Reviewers can only evaluate the methodologies and findings fully when authors describe the choices they made and provide the materials used in their study.

Sample composition and research design features can affect the results of experiments, as can sheer coincidence. To assist reviewers and readers in interpreting the research, it is important that authors describe relevant features of the research design, data collection, and analysis. Such details are also crucial to facilitate replication. NVSQ receives very few replications, and thus rarely publishes them, although we are open to doing so. Greater transparency will facilitate the ability to reinforce, or question, research results through replication (Peters, 1973; Smith, 1994; Helmig, Spraul & Tremp, 2012).

Greater transparency is also good for authors. Articles with open data appear to have a citation advantage: they are cited more frequently in subsequent research (Colavizza et al., 2020; Drachen et al., 2016). The evidence is not experimental: the higher citation rank of articles providing access to data may be a result of higher research quality. Regardless of whether the policy improves the quality of new research or attracts higher quality existing research – if higher quality research is the result, then that is exactly what we want.

Previously, the official policy of our publisher, SAGE, was that authors were ‘encouraged’ to make the data available. It is likely, though, that authors were not aware of this policy because it was not mentioned on the journal website. In any case, this voluntary policy clearly did not stimulate the provision of data, because data are available for only a small fraction of papers in the journal. Evidence indicates that a data sharing policy alone is ineffective without enforcement (Stodden, Seiler, & Ma, 2018; Christensen et al., 2019). Even when authors include a phrase in their article such as ‘data are available upon request,’ research shows that authors often do not comply with such requests (Wicherts et al., 2006; Krawczyk & Reuben, 2012). Therefore, we are making the provision of data a requirement for the assignment of reviewers.

 

Data Transparency Guidance for Manuscripts using Experiments

Authors submitting manuscripts to NVSQ in which they report results from experiments are kindly requested to provide a detailed description of the target sample and the way in which the participants were invited, informed, instructed, paid, and debriefed. Also, authors are requested to describe all decisions made and questions answered by the participants and to provide access to the stimulus materials and questionnaires. Most importantly, authors are requested to make the data and code that produced the reported findings available to the editors and reviewers. Please make sure you do so anonymously, i.e., without identifying yourself as an author of the manuscript.

When you submit the data, please ensure that you are complying with the requirements of your institution’s Institutional Review Board or Ethics Review Committee, the privacy laws in your country such as the GDPR, and other regulations that may apply. Remove personal information from the data you provide (Ursin et al., 2019). For example, avoid logging IP addresses and email addresses in online experiments, and remove any other personal information that may reveal participants’ identities.

The journal will not host a separate archive. Instead, deposit the data at a platform of your choice, such as Dataverse, Github, Zenodo, or the Open Science Framework. We accept data in Excel (.xls, .csv), SPSS (.sav, .por) with syntax (.sps), data in Stata (.dta) with a do-file, and projects in R.

When authors have successfully submitted the data and code along with the paper, the Area Editor will verify whether the data and code submitted actually produce the results reported. If (and only if) this is the case, the submission will be sent out to reviewers. This means that reviewers will not have to verify the computational reproducibility of the results. Instead, they will be able to check the integrity of the data and the robustness of the results reported.

As we introduce the data availability policy, we will closely monitor the changes in the number and quality of submissions, and their scholarly impact, anticipating both collective and private benefits (Popkin, 2019). We have scored the data transparency of 20 experiments submitted in the first six months of 2020, using a checklist counting 49 different criteria. In 4 of these submissions, some elements of the research were preregistered. The average transparency score was 38 percent. We anticipate that the new policy will improve transparency scores.

The policy takes effect for new submissions on July 1, 2020.

 

Background: Development of the Policy

The NVSQ Editorial Team has been working on policies for enhanced data and analytic transparency for several years, moving forward in a consultative manner.  We established a Working Group on Data Management and Access which provided valuable guidance in its 2018 report, including a preliminary set of transparency guidelines for research based on data from experiments and surveys, interviews and ethnography, and archival sources and social media. A wider discussion of data transparency criteria was held at the 2019 ARNOVA conference in San Diego, as reported here. Participants working with survey and experimental data frequently mentioned access to the data and code as a desirable practice for research to be published in NVSQ.

Eventually, separate sets of guidelines for each type of data will be created, recognizing that commonly accepted standards vary between communities of researchers (Malicki et al., 2019; Beugelsdijk, Van Witteloostuijn, & Meyer, 2020). Regardless of which criteria will be used, reviewers can only evaluate these criteria when authors describe the choices they made and provide the materials used in their study.

 

References

Beugelsdijk, S., Van Witteloostuijn, A. & Meyer, K.E. (2020). A new approach to data access and research transparency (DART). Journal of International Business Studies, https://link.springer.com/content/pdf/10.1057/s41267-020-00323-z.pdf

Christensen, G., Dafoe, A., Miguel, E., Moore, D.A., & Rose, A.K. (2019). A study of the impact of data sharing on article citations using journal policies as a natural experiment. PLoS ONE 14(12): e0225883. https://doi.org/10.1371/journal.pone.0225883

Colavizza, G., Hrynaszkiewicz, I., Staden, I., Whitaker, K., & McGillivray, B. (2020). The citation advantage of linking publications to research data. PLoS ONE 15(4): e0230416, https://doi.org/10.1371/journal.pone.0230416

Drachen, T.M., Ellegaard, O., Larsen, A.V., & Dorch, S.B.F. (2016). Sharing Data Increases Citations. Liber Quarterly, 26 (2): 67–82. https://doi.org/10.18352/lq.10149

Helmig, B., Spraul, K. & Tremp, K. (2012). Replication Studies in Nonprofit Research: A Generalization and Extension of Findings Regarding the Media Publicity of Nonprofit Organizations. Nonprofit and Voluntary Sector Quarterly, 41(3): 360–385. https://doi.org/10.1177%2F0899764011404081

Krawczyk, M. & Reuben, E. (2012). (Un)Available upon Request: Field Experiment on Researchers’ Willingness to Share Supplementary Materials. Accountability in Research, 19:3, 175-186, https://doi.org/10.1080/08989621.2012.678688

Malički, M., Aalbersberg, IJ.J., Bouter, L., & Ter Riet, G. (2019). Journals’ instructions to authors: A cross-sectional study across scientific disciplines. PLoS ONE, 14(9): e0222157. https://doi.org/10.1371/journal.pone.0222157

Peters, C. (1973). Research in the Field of Volunteers in Courts and Corrections: What Exists and What Is Needed. Journal of Voluntary Action Research, 2 (3): 121-134. https://doi.org/10.1177%2F089976407300200301

Popkin, G. (2019). Data sharing and how it can benefit your scientific career. Nature, 569: 445-447. https://www.nature.com/articles/d41586-019-01506-x

Smith, D.H. (1994). Determinants of Voluntary Association Participation and Volunteering: A Literature Review. Nonprofit and Voluntary Sector Quarterly, 23 (3): 243-263. https://doi.org/10.1177%2F089976409402300305

Stodden, V., Seiler, J. & Ma, Z. (2018). An empirical analysis of journal policy effectiveness for computational reproducibility. PNAS, 115(11): 2584-2589. https://doi.org/10.1073/pnas.1708290115

Ursin, G., et al. (2019). Sharing data safely while preserving privacy. The Lancet, 394: 1902. https://doi.org/10.1016/S0140-6736(19)32633-9

Wicherts, J.M., Borsboom, D., Kats, J., & Molenaar, D. (2006). The poor availability of psychological research data for reanalysis. American Psychologist, 61(7), 726-728. http://dx.doi.org/10.1037/0003-066X.61.7.726

Working Group on Data Management and Access (2018). A Data Availability Policy for NVSQ. April 15, 2018. https://renebekkers.files.wordpress.com/2020/06/18_04_15-nvsq-working-group-on-data.pdf


How to review a paper

Including a Checklist for Hypothesis Testing Research Reports *

See https://osf.io/6cw7b/ for a pdf of this post

 

Academia critically relies on our efforts as peer reviewers to evaluate the quality of research that is published in journals. Reading the reviews of others, I have noticed that the quality varies considerably, and that some reviews are not helpful. The added value of a journal article above and beyond the original manuscript or a non-reviewed preprint lies in the changes the authors made in response to the reviews. Through our reviews, we can help to improve the quality of the research. This memo provides guidance on how to review a paper, partly inspired by suggestions provided by Alexander (2005), Lee (1995) and the Committee on Publication Ethics (2017). To improve the quality of the peer review process, I suggest that you use the following guidelines. Some of the guidelines – particularly the criteria at the end of this post – are peculiar to the kind of research that I tend to review: hypothesis-testing research reports relying on administrative data and surveys, sometimes with an experimental design. But let me start with guidelines that I believe make sense for all research.

Things to check before you accept the invitation
First, I encourage you to check whether the journal aligns with your vision of science. I find that a journal published by an exploitative publisher making a profit in the range of 30%-40% is not worth my time. A journal that I have submitted my own work to and that gave me good reviews is worth the number of reviews I received for my article. A review of a revised version of a paper does not count as a separate review.
Next, I check whether I am the right person to review the paper. I think it is a good principle to describe my disciplinary background and expertise in relation to the manuscript I am invited to review. Reviewers do not need to be experts in all respects. If you do not have useful expertise to improve the paper, politely decline.

Then I check whether I know the author(s). If I do, and I have not collaborated with the author(s), am not currently collaborating with them, and am not planning to do so, I describe how I know the author(s) and ask the editor whether it is appropriate for me to review the paper. If I have a conflict of interest, I notify the editor and politely decline. It is a good principle to let the editor know immediately if you are unable to review a paper, so the editor can start to look for someone else to review the paper. Your non-response means a delay for the authors and the editor.

Sometimes I get requests to review a paper that I have reviewed before, for a conference or another journal. In these cases I let the editor know and ask whether she would like to see the previous review. For the editor it will be useful to know whether the current manuscript is the same as the version I reviewed before, or includes revisions.

Finally, I check whether the authors have made the data and code available. I have made this a requirement that authors have to fulfil before I accept an invitation to review their work. An exception can be made for data that would be illegal or dangerous to make available, such as datasets that contain identifying information that cannot be removed. In most cases, however, the authors can provide at least partial access to the data by excluding variables that contain personal information.

A paper that does not provide access to the data analyzed and the code used to produce the results in the paper is not worth my time. If the paper does not provide a link to the data and the analysis script, I ask the editor to ask the authors to provide the data and the code. I encourage you to do the same. Almost always the editor is willing to ask the authors to provide access. If the editor does not respond to your request, that is a red flag to me. I decline future invitation requests from the journal. If the authors do not respond to the editor’s request, or are unwilling to provide access to the data and code, that is a red flag for the editor.

The tone of the review
When I write a review, I think of the ‘golden rule’: treat others as you would like to be treated. I write the review report that I would have liked to receive if I had been the author. I use the following principles:

  • Be honest but constructive. You are not at war. There is no need to burn a paper to the ground.
  • Avoid addressing the authors personally. Say: “the paper could benefit from…” instead of “the authors need”.
  • Stay close to the facts. Do not speculate about reasons why the authors have made certain choices beyond the arguments stated in the paper.
  • Take a developmental approach. Any paper will contain flaws and imperfections. Your job is to improve science by identifying problems and suggesting ways to repair them. Think with the authors about ways they can improve the paper in such a way that it benefits collective scholarship. After a quick glance at the paper, I determine whether I think the paper has the potential to be published, perhaps after revisions. If I think the paper is beyond repair, I explain this to the editor.
  • Try to see beyond bad writing style and mistakes in spelling. Also be mindful of disciplinary and cultural differences between the authors and yourself.

The substance of the advice
In my view, it is a good principle to begin the review report by describing your expertise and the way you reviewed the paper. If you searched for literature, checked the data and verified the results, or ran additional analyses, state this. It will allow the editor to adjudicate the review.

Then give a brief overview of the paper. If the invitation asks you to provide a general recommendation, consider whether you’d like to give one. Typically, you are invited to recommend ‘reject’, ‘revise & resubmit’ (with major or minor revisions), or ‘accept’. Because the recommendation is the first thing the editor wants to know, it is convenient to state it early in the review.

When giving such a recommendation, I start from the assumption that the authors have invested a great deal of time in the paper and that they want to improve it. I also consider the desk-rejection rate at the journal. If the editor sent the paper out for review, she probably thinks it has the potential to be published.

To get to the general recommendation, I list the strengths and the weaknesses of the paper. To ease the message you can use the sandwich principle: start with the strengths, then discuss the weaknesses, and conclude with an encouragement.

For authors and editors alike it is convenient to give actionable advice. For the weaknesses in the paper I suggest ways to repair them. I distinguish major issues such as not discussing alternative explanations from minor issues such as missing references and typos. It is convenient for both the editor and the authors to number your suggestions.

The strengths could be points that the authors are underselling. In that case, I identify them as strengths that the authors can emphasize more strongly.

It is handy to refer to issues with direct quotes and page numbers. To refer to the previous sentence: “As the paper states on page 3, [use] “direct quotes and page numbers””.

In 2016, I started to sign my reviews. This is an accountability device: by exposing who I am to the authors of the paper I’m reviewing, I set higher standards for myself. I encourage you to think about this as an option, though I can imagine that you may not want to risk retribution as a graduate student or an early career researcher. Also, some editors do not appreciate signed reviews and may remove your identifying information.

How to organize the review work
Usually, I read a paper twice. First, I go over the paper superficially and quickly, without reading it closely. This gives me a sense of where the authors are going. After the first superficial reading, I determine whether the paper is good enough to be revised and resubmitted, and if so, I provide more detailed comments. After the report is done, I revisit my initial recommendation.

The second time I go over the paper, I do a very close reading. Because the authors had a word limit, I assume that literally every word in the manuscript is absolutely necessary – the paper should have no repetitions. Some of the information may be in the supplementary information provided with the paper.

Below you find a checklist of things I look for in a paper. The checklist reflects the kind of research that I tend to review, which is typically testing a set of hypotheses based on theory and previous research with data from surveys, experiments, or archival sources. For other types of research – such as non-empirical papers, exploratory reports, and studies based on interviews or ethnographic material – the checklist is less appropriate. The checklist may also be helpful for authors preparing research reports.

I realize that this is an extensive set of criteria for reviews. It sets the bar pretty high. A review checking each of the criteria will take you at least three hours, but more likely between five and eight hours. As a reviewer, I do not always check all criteria myself. Some of the criteria do not necessarily have to be done by peer reviewers. For instance, some journals employ data editors who check whether data and code provided by authors produce the results reported.

I do hope that journals and editors can get to a consensus on a set of minimum criteria that the peer review process should cover, or at least provide clarity about the criteria that they do check.

After the review
If the authors have revised their paper, it is a good principle to avoid making new demands for the second round that you have not made before. Otherwise the revise and resubmit path can be very long.

 

References
Alexander, G.R. (2005). A Guide to Reviewing Manuscripts. Maternal and Child Health Journal, 9 (1): 113-117. https://doi.org/10.1007/s10995-005-2423-y
Committee on Publication Ethics Council (2017). Ethical guidelines for peer reviewers. https://publicationethics.org/files/Ethical_Guidelines_For_Peer_Reviewers_2.pdf
Lee, A.S. (1995). Reviewing a manuscript for publication. Journal of Operations Management, 13: 87-92. https://doi.org/10.1016/0272-6963(95)94762-W

 

Review checklist for hypothesis testing reports

Research question

  1. Is it clear from the beginning what the research question is? If it is in the title, that’s good. If it is in the first part of the abstract, that is good too. Is it at the end of the introduction section? In most cases that is too late.
  2. Is it clearly formulated? By the research question alone, can you tell what the paper is about?
  3. Does the research question align with what the paper actually does – or can do – to answer it?
  4. Is it important to know the answer to the research question for previous theory and methods?
  5. Does the paper address a question that is important from a societal or practical point of view?

 

Research design

  1. Does the research design align with the research question? If the question is descriptive, do the data actually allow for a representative and valid description? If the question is a causal question, do the data allow for causal inference? If not, ask the authors to report ‘associations’ rather than ‘effects’.
  2. Is the research design clearly described? Does the paper report all the steps taken to collect the data?
  3. Does the paper identify mediators of the alleged effect? Does the paper identify moderators as boundary conditions?
  4. Is the research design watertight? Does the study allow for alternative interpretations?
  5. Has the research design been preregistered? Does the paper refer to a public URL where the preregistration is posted? Does the preregistration include a statistical power analysis? Is the number of observations sufficient for statistical tests of hypotheses? Are deviations from the preregistered design reported?
  6. Has the experiment been approved by an Institutional or Ethics Review Board (IRB/ERB)? What is the IRB registration number?

 

Theory

  1. Does the paper identify multiple relevant theories?
  2. Does the theory section specify hypotheses? Have the hypotheses been formulated before the data were collected? Before the data were analyzed?
  3. Do hypotheses specify arguments why two variables are associated? Have alternative arguments been considered?
  4. Is the literature review complete? Does the paper cover the most relevant previous studies, also outside the discipline? Provide references to research that is not covered in the paper, but should definitely be cited.

 

Data & Methods

  1. Target group – Is it identified? If mankind, is the sample a good sample of mankind? Does it cover all relevant units?
  2. Sample – Does the paper identify the procedure used to obtain the sample from the target group? Is the sample a random sample? If not, has selective non-response been dealt with, examined, and have constraints on generality been identified as a limitation?
  3. Number of observations – What is the statistical power of the analysis? Does the paper report a power analysis?
  4. Measures – Does the paper provide the complete topic list, questionnaire, instructions for participants? To what extent are the measures used valid? Reliable?
  5. Descriptive statistics – Does the paper provide a table of descriptive statistics (minimum, maximum, mean, standard deviation, number of observations) for all variables in the analyses? If not, ask for such a table.
  6. Outliers – Does the paper identify treatment of outliers, if any?
  7. Is the multi-level structure (e.g., persons in time and space) identified and taken into account in an appropriate manner in the analysis? Are standard errors clustered? (A brief example follows this list.)
  8. Does the paper report statistical mediation analyses for all hypothesized explanation(s)? Do the mediation analyses evaluate multiple pathways, or just one?
  9. Do the data allow for testing additional explanations that are not reported in the paper?
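
To illustrate the clustering item (7) above, here is a minimal Stata sketch of a regression with standard errors clustered at the country level; the variable names are hypothetical and only indicate what I look for in the code:

  * respondents are nested in countries, so cluster the standard errors by country
  regress giving income age, vce(cluster country)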

 

Results

  1. Can the results be reproduced from the data and code provided by the authors?
  2. Are the results robust to different specifications?

Conclusion

  1. Does the paper give a clear answer to the research question posed in the introduction?
  2. Does the paper identify implications for the theories tested, and are they justified?
  3. Does the paper identify implications for practice, and are they justified given the evidence presented?

 

Discussion

  1. Does the paper revisit the limitations of the data and methods?
  2. Does the paper suggest future research to repair the limitations?

 

Meta

  1. Does the paper have an author contribution note? Is it clear who did what?
  2. Are all analyses reported, if they are not in the main text, are they available in an online appendix?
  3. Are the references up to date? Does the reference list include a reference to the dataset analyzed, including a URL/DOI?

 

 

* This work is licensed under a Creative Commons Attribution 4.0 International License. Thanks to colleagues at the Center for Philanthropic Studies at Vrije Universiteit Amsterdam, in particular Pamala Wiepking, Arjen de Wit, Theo Schuyt and Claire van Teunenbroek, for insightful comments on the first version. Thanks to Robin Banks, Pat Danahey Janin, Rense Corten, David Reinstein, Eleanor Brilliant, Claire Routley, Margaret Harris, Brenda Bushouse, Craig Furneaux, Angela Eikenberry, Jennifer Dodge, and Tracey Coule for responses to the second draft. The current text is the fourth draft. The most recent version of this paper is available as a preprint at https://doi.org/10.31219/osf.io/7ug4w. Suggestions continue to be welcome at r.bekkers@vu.nl.


The Work & Worries of a Webinar

Can everyone hear me? Does my hair look OK? What does the audience think about what I just said? Did I answer the most important questions? Some of these worries are the same now, in the Webinar Age, as for an old-style, pre-COVID-19, in-person conference presentation, but many are new. In a webinar setting it is very difficult to get cues from the audience. Solution: organize an honest feedback channel, separate from your audience.

This is just one of the things we have learned at the Center for Philanthropic Studies at the Vrije Universiteit Amsterdam from transforming an in-person conference into an online webinar. The day before yesterday we organized our Giving in the Netherlands conference entirely online. We had planned this conference as an in-person event for 260 participants – the maximum capacity of the room that a sponsor kindly offered to us. We were fully booked. Registration was free, with a €50 late-cancellation penalty.


Then the ‘intelligent lockdown’ and physical distancing measures imposed by the government in the Netherlands made it impossible to hold the conference as planned. After some checks of various presentation platforms, we decided to move the conference online, using Zoom. We reworked the program and made it shorter. We removed the opening reception, the break, and the drinks afterward. We first did three plenary presentations, and then a panel discussion. The total length of the program was 90 minutes.

We pre-recorded two of the three presentations (using Loom) so we could broadcast them in a Zoom session. This worked well, though it was a lot of work to create good quality sound and a ‘talking head’ image in the presentations. We have learned a lot about audio feedback loops, natural light effects, and the importance of a neutral background for presentations.

In the preparations for the symposium, I also benefited from the experience of moderating the opening plenary at the ARNOVA conference last year. In our online format, instead of having volunteers going around the room, I gave the audience the opportunity to pose questions through a separate online channel, www.menti.com. The online format even had an advantage compared to the hotel ballroom stage setting. During the interview I was able to keep an eye on the questions channel, and I could secretly look at my phone as colleagues sent me texts and emails identifying the questions as they came in. As a result, the discussion went smoothly, and the audience was engaged. After the one-way research presentations, the panel discussion was a lively change of scene. I interviewed three sector leaders in the Netherlands about COVID-19 effects, and again presented questions from the audience.

Overall, this was a good experience for us, proving that it is possible to do a traditional symposium in an online setting. We also learned that it was a lot of work. You need new audiovisual skills that you don’t learn in graduate school.

You need a team of people working behind the scenes to make it work. We had a moderator, Barry Hoolwerf, introducing the house rules, broadcasting the pre-recorded presentations, and giving the floor to the live speakers – unmuting their microphones and allowing their video to be visible on screen. We had two people, Arjen de Wit and Claire van Teunenbroek, monitoring the questions channel, selecting the most important ones.

Finally, we learned how important it is to test, learn, and adapt. We tested the presentations for a smaller audience, which we gave a ‘sneak preview’, and learned about technical issues. The test was additional work, but worth it because it took away most of our worries.

You can watch the presentations (in Dutch) here: https://www.geveninnederland.nl/presentatie-geven-in-nederland-2020/. If you’re interested in the book you can download it here: https://www.geveninnederland.nl/publicatie-geven-in-nederland-2020/. A visual summary of the book in English is here: https://renebekkers.files.wordpress.com/2020/04/giving-in-the-netherlands-2020-summary.pdf


Cut the crap, fund the research

We all spend way too much time preparing applications for research grants. This is a collective waste of time. For the 2019 Vici grant scheme of the Netherlands Organization for Scientific Research (NWO), in which I recently participated, 87% of all applicants received no grant. Based on my own experiences, I made a conservative calculation (here is the Excel file so you can check it yourself) of the total costs for all people involved. The costs total €18.7 million. Imagine how much research time that is worth!


Applicants account for the bulk of the costs. Taken together, all applicants invested €15.8 million in the grant competition. As an applicant, I read the call for proposals, first considered whether or not I would apply, decided yes, read the guidelines for applications, discussed ideas with colleagues, read the literature, wrote a short draft of the proposal to invite research partners, then wrote the proposal text, formatted the application according to the guidelines, prepared a budget for approval, collected some new data and analyzed it, considered whether ethics review was necessary, created a data management plan, and corresponded with grants advisors, a budget controller, HR advisors, internal reviewers, my head of department, the dean, a coach, and societal partners. I revised the application, revised the budget, and submitted the preproposal. I waited. And waited. Then I read the preproposal evaluation by the committee members, and wrote responses to the preproposal evaluation. I revised my draft application again, and submitted the full application. I waited. And waited. I read the external reviews, wrote responses to their comments, and submitted a rebuttal. I waited. And waited. Then I prepared a 5-minute pitch for the interview by the committee, responded to questions, and waited. Imagine I would have spent all that time on actual research. Each applicant could have spent 971 hours on research instead.

Also the university support system spends a lot of resources preparing budgets, internal reviews, and training of candidates. I involved research partners and societal partners to support the proposal. I feel bad for wasting their time as well.

The procedure also puts a burden on external reviewers. At a conference I attended, one of the reviewers of my application identified herself and asked me what had happened with the review she had provided. She had not heard back from the grant agency. I told her that she was not the only one who had given an A+ evaluation, but that NWO had overruled it in its procedures.

For the entire Vici competition, an amount of €46.5 million was available, for 32 grants to be awarded. The €18.7 million wasted is 40% of that amount! That is unacceptable.

It is time to stop wasting our time.

 

Note: In a previous version of this post, I assumed that the number of applicants was 100. This estimate was much too low. The grant competition website says that across all domains 242 proposals were submitted. I revised the cost calculation (v2) to reflect the actual number of applicants. Note that this calculation leaves out hours spent by researchers who eventually decided not to submit a (pre-)proposal. The calculation further assumes that 180 full proposals were submitted and 105 candidates were interviewed.

Update, February 26: In the previous version, the cost of the procedure for NWO was severely underestimated. According to the annual report of NWO, the total salary costs for its staff that handles grant applications are €72 million per year. In the revised cost calculation, I’m assuming staff costs of €218k for the entire Vici competition. This amount consists of €198k variable costs (checking applications, inviting reviewers, composing decision letters, informing applicants, informing reviewers, handling appeals by 10% of full proposals, and handling ‘WOB verzoeken’ = Freedom of Information Act requests) and €20k fixed costs (preparing the call for proposals, organizing committee meetings to discuss applications and their evaluations, attending committee meetings, reporting on committee meetings, and evaluating the procedure).


Revolutionizing Philanthropy Research Webinar

January 30, 11am-12pm (EST) / 5-6pm (CET) / 9-10pm (IST)

Why do people give to the benefit of others – or keep their resources to themselves? What is the core evidence on giving that holds across cultures? How does giving vary between cultures? How has the field of research on giving changed in the past decades?

Ten years after the publication of “A Literature Review of Empirical Studies of Philanthropy: Eight Mechanisms that Drive Charitable Giving” in Nonprofit and Voluntary Sector Quarterly, it is time for an even more comprehensive effort to review the evidence base on giving. We envision an ambitious approach, using the most innovative tools and data science algorithms available to visualize the structure of research networks, identify theoretical foundations, and provide a critical assessment of previous research.

We are inviting you to join this exciting endeavor in an open, global, cross-disciplinary collaboration. All expertise is very much welcome – from any discipline, country, or methodology. The webinar consists of four parts:

  1. Welcome: by moderator Pamala Wiepking, Lilly Family School of Philanthropy and VU Amsterdam;
  2. The strategy for collecting research evidence on giving from publications: by Ji Ma, University of Texas;
  3. Tools we plan to use for the analyses: by René Bekkers, Vrije Universiteit Amsterdam;
  4. The project structure, and opportunities to participate: by Pamala Wiepking.

The webinar is interactive. You can provide comments and feedback during each presentation. After each presentation, the moderator selects key questions for discussion.

We ask you to please register for the webinar here: https://iu.zoom.us/webinar/register/WN_faEQe2UtQAq3JldcokFU3g.

Registration is free. After you register, you will receive an automated message that includes a URL for the webinar, as well as international calling numbers. In addition, a recording of the webinar will be available soon after on the Open Science Framework Project page: https://osf.io/46e8x/

Please feel free to share with everyone who may be interested, and do let us know if you have any questions or suggestions at this stage.

We look forward to hopefully seeing you on January 30!

You can register at https://iu.zoom.us/webinar/register/WN_faEQe2UtQAq3JldcokFU3g

René Bekkers, Ji Ma, Pamala Wiepking, Arjen de Wit, and Sasha Zarins
