A Data Transparency Policy for Results Based on Experiments


Transparency is a key condition for robust and reliable knowledge, and for the advancement of scholarship over time. Since January 1, 2020, I have been the Area Editor for Experiments submitted to Nonprofit & Voluntary Sector Quarterly (NVSQ), the leading journal for academic research in the interdisciplinary field of nonprofit research. To improve the transparency of research published in NVSQ, the journal is introducing a policy requiring authors of manuscripts reporting on data from experiments to provide, upon submission, access to the data and the code that produced the results reported. This will be a condition for the manuscript to proceed through the blind peer review process.

The policy will be implemented as a pilot for papers reporting results of experiments only. For manuscripts reporting on other types of data, the submission guidelines will not be changed at this time.



This policy is a step forward in strengthening research in our field through greater transparency about research design, data collection, and analysis. Greater transparency of data and analytic procedures will produce fairer, more constructive reviews and, ultimately, even higher quality articles published in NVSQ. Reviewers can only fully evaluate the methodologies and findings when authors describe the choices they made and provide the materials used in their study.

Sample composition and research design features can affect the results of experiments, as can sheer coincidence. To assist reviewers and readers in interpreting the research, it is important that authors describe relevant features of the research design, data collection, and analysis. Such details are also crucial to facilitate replication. NVSQ receives very few replications, and thus rarely publishes them, although we are open to doing so. Greater transparency will facilitate the ability to reinforce, or question, research results through replication (Peters, 1973; Smith, 1994; Helmig, Spraul & Tremp, 2012).

Greater transparency is also good for authors. Articles with open data appear to have a citation advantage: they are cited more frequently in subsequent research (Colavizza et al., 2020; Drachen et al., 2016). The evidence is not experimental: the higher citation rank of articles providing access to data may be a result of higher research quality. Regardless of whether the policy improves the quality of new research or attracts higher quality existing research – if higher quality research is the result, then that is exactly what we want.

Previously, the official policy of our publisher, SAGE, was that authors were ‘encouraged’ to make the data available. It is likely, though, that authors were not aware of this policy because it was not mentioned on the journal website. In any case, this voluntary policy clearly did not stimulate the provision of data: data are available for only a small fraction of papers in the journal. Evidence indicates that a data sharing policy alone is ineffective without enforcement (Stodden, Seiler, & Ma, 2018; Christensen et al., 2019). Even when authors include a phrase in their article such as ‘data are available upon request,’ research shows that authors often do not comply with such requests (Wicherts et al., 2006; Krawczyk & Reuben, 2012). Therefore, we are making the provision of data a requirement for the assignment of reviewers.


Data Transparency Guidance for Manuscripts using Experiments

Authors submitting manuscripts to NVSQ in which they report results from experiments are kindly requested to provide a detailed description of the target sample and the way in which the participants were invited, informed, instructed, paid, and debriefed. Also, authors are requested to describe all decisions made and questions answered by the participants and to provide access to the stimulus materials and questionnaires. Most importantly, authors are requested to make the data and code that produced the reported findings available to the editors and reviewers. Please make sure you do so anonymously, i.e. without identifying yourself as an author of the manuscript.

When you submit the data, please ensure that you comply with the requirements of your institution’s Institutional Review Board or Ethics Review Committee, the privacy laws in your country such as the GDPR, and other regulations that may apply. Remove personal information from the data you provide (Ursin et al., 2019). For example, avoid logging IP and email addresses in online experiments, and remove any other personal information that may reveal participants’ identities.
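As a minimal sketch of what this can look like before depositing the data (the file and column names below are hypothetical; adapt them to your own study), in R:

```r
# Minimal sketch: drop direct identifiers before sharing experimental data.
# The file and column names are hypothetical placeholders.
raw <- read.csv("experiment_raw.csv", stringsAsFactors = FALSE)

# Columns that could identify participants and must not be shared
pii_columns <- c("ip_address", "email", "name", "phone")

# Keep only the columns that are not direct identifiers
shared <- raw[, setdiff(names(raw), pii_columns), drop = FALSE]

# Write the anonymized file that will be deposited
write.csv(shared, "experiment_shared.csv", row.names = FALSE)
```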

The journal will not host a separate archive. Instead, deposit the data at a platform of your choice, such as Dataverse, Github, Zenodo, or the Open Science Framework. We accept data in Excel (.xls, .csv), SPSS (.sav, .por) with syntax (.sps), data in Stata (.dta) with a do-file, and projects in R.

When authors have successfully submitted the data and code along with the paper, the Area Editor will verify whether the data and code submitted actually produce the results reported. If (and only if) this is the case, then the submission will be sent out to reviewers. This means that reviewers will not have to verify the computational reproducibility of the results. They will be able to check the integrity of the data and the robustness of the results reported.
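For authors who want to check, before submitting, that the deposited materials reproduce a key number in the manuscript, a minimal sketch along these lines can help (the script name, the object it creates, and the reported value are hypothetical):

```r
# Minimal sketch: check that the deposited code reproduces a reported result.
# "analysis.R", the object 'main_effect', and the reported value are hypothetical.
env <- new.env()
source("analysis.R", local = env)   # run the full analysis script

reported   <- 0.25                  # coefficient as reported in the manuscript
reproduced <- env$main_effect       # value the script is assumed to create

# Allow for rounding to two decimals in the manuscript
if (isTRUE(all.equal(reported, round(reproduced, 2)))) {
  message("Reported result reproduced from the data and code.")
} else {
  warning("Reported and reproduced results differ; check the analysis script.")
}
```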

As we introduce the data availability policy, we will closely monitor changes in the number and quality of submissions, and their scholarly impact, anticipating both collective and private benefits (Popkin, 2019). We have scored the data transparency of 20 experiments submitted in the first six months of 2020, using a checklist of 49 criteria. In 4 of these submissions, some elements of the research were preregistered. The average transparency score was 38 percent. We anticipate that the new policy will improve transparency scores.
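For illustration, a transparency score of this kind is simply the share of checklist criteria that a submission meets; a minimal sketch in R, with a hypothetical pattern of met criteria:

```r
# Minimal sketch: a transparency score as the share of checklist criteria met.
# The pattern of met criteria below is hypothetical.
n_criteria   <- 49
criteria_met <- c(rep(TRUE, 19), rep(FALSE, n_criteria - 19))

score <- mean(criteria_met)   # proportion of the 49 criteria satisfied
round(100 * score)            # as a percentage (here about 39)
```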

The policy takes effect for new submissions on July 1, 2020.


Background: Development of the Policy

The NVSQ Editorial Team has been working on policies for enhanced data and analytic transparency for several years, moving forward in a consultative manner.  We established a Working Group on Data Management and Access which provided valuable guidance in its 2018 report, including a preliminary set of transparency guidelines for research based on data from experiments and surveys, interviews and ethnography, and archival sources and social media. A wider discussion of data transparency criteria was held at the 2019 ARNOVA conference in San Diego, as reported here. Participants working with survey and experimental data frequently mentioned access to the data and code as a desirable practice for research to be published in NVSQ.

Eventually, separate sets of guidelines for each type of data will be created, recognizing that commonly accepted standards vary between communities of researchers (Malicki et al., 2019; Beugelsdijk, Van Witteloostuijn, & Meyer, 2020). Regardless of which criteria will be used, reviewers can only evaluate these criteria when authors describe the choices they made and provide the materials used in their study.



References

Beugelsdijk, S., Van Witteloostuijn, A. & Meyer, K.E. (2020). A new approach to data access and research transparency (DART). Journal of International Business Studies, https://link.springer.com/content/pdf/10.1057/s41267-020-00323-z.pdf

Christensen, G., Dafoe, A., Miguel, E., Moore, D.A., & Rose, A.K. (2019). A study of the impact of data sharing on article citations using journal policies as a natural experiment. PLoS ONE 14(12): e0225883. https://doi.org/10.1371/journal.pone.0225883

Colavizza, G., Hrynaszkiewicz, I., Staden, I., Whitaker, K., & McGillivray, B. (2020). The citation advantage of linking publications to research data. PLoS ONE 15(4): e0230416, https://doi.org/10.1371/journal.pone.0230416

Drachen, T.M., Ellegaard, O., Larsen, A.V., & Dorch, S.B.F. (2016). Sharing Data Increases Citations. Liber Quarterly, 26 (2): 67–82. https://doi.org/10.18352/lq.10149

Helmig, B., Spraul, K. & Tremp, K. (2012). Replication Studies in Nonprofit Research: A Generalization and Extension of Findings Regarding the Media Publicity of Nonprofit Organizations. Nonprofit and Voluntary Sector Quarterly, 41(3): 360–385. https://doi.org/10.1177%2F0899764011404081

Krawczyk, M. & Reuben, E. (2012). (Un)Available upon Request: Field Experiment on Researchers’ Willingness to Share Supplementary Materials. Accountability in Research, 19:3, 175-186, https://doi.org/10.1080/08989621.2012.678688

Malički, M., Aalbersberg, IJ.J., Bouter, L., & Ter Riet, G. (2019). Journals’ instructions to authors: A cross-sectional study across scientific disciplines. PLoS ONE, 14(9): e0222157. https://doi.org/10.1371/journal.pone.0222157

Peters, C. (1973). Research in the Field of Volunteers in Courts and Corrections: What Exists and What Is Needed. Journal of Voluntary Action Research, 2 (3): 121-134. https://doi.org/10.1177%2F089976407300200301

Popkin, G. (2019). Data sharing and how it can benefit your scientific career. Nature, 569: 445-447. https://www.nature.com/articles/d41586-019-01506-x

Smith, D.H. (1994). Determinants of Voluntary Association Participation and Volunteering: A Literature Review. Nonprofit and Voluntary Sector Quarterly, 23 (3): 243-263. https://doi.org/10.1177%2F089976409402300305

Stodden, V., Seiler, J. & Ma, Z. (2018). An empirical analysis of journal policy effectiveness for computational reproducibility. PNAS, 115(11): 2584-2589. https://doi.org/10.1073/pnas.1708290115

Ursin, G. et al. (2019). Sharing data safely while preserving privacy. The Lancet, 394: 1902. https://doi.org/10.1016/S0140-6736(19)32633-9

Wicherts, J.M., Borsboom, D., Kats, J., & Molenaar, D. (2006). The poor availability of psychological research data for reanalysis. American Psychologist, 61(7), 726-728. http://dx.doi.org/10.1037/0003-066X.61.7.726

Working Group on Data Management and Access (2018). A Data Availability Policy for NVSQ. April 15, 2018. https://renebekkers.files.wordpress.com/2020/06/18_04_15-nvsq-working-group-on-data.pdf


How to review a paper

Including a Checklist for Hypothesis Testing Research Reports *

See https://osf.io/6cw7b/ for a pdf of this post


Academia critically relies on our efforts as peer reviewers to evaluate the quality of research that is published in journals. Reading the reviews of others, I have noticed that the quality varies considerably, and that some reviews are not helpful. The added value of a journal article, above and beyond the original manuscript or a non-reviewed preprint, lies in the changes the authors made in response to the reviews. Through our reviews, we can help to improve the quality of the research. This memo provides guidance on how to review a paper, partly inspired by suggestions from Alexander (2005), Lee (1995) and the Committee on Publication Ethics (2017). To improve the quality of the peer review process, I suggest that you use the following guidelines. Some of the guidelines, particularly the criteria at the end of this post, are specific to the kind of research that I tend to review: hypothesis testing research reports relying on administrative data and surveys, sometimes with an experimental design. But let me start with guidelines that I believe make sense for all research.

Things to check before you accept the invitation
First, I encourage you to check whether the journal aligns with your vision of science. I find that a journal published by an exploitative publisher making a profit in the range of 30%-40% is not worth my time. A journal to which I have submitted my own work and that gave me good reviews is worth the number of reviews I received for my article. The review of a revised version of a paper does not count as a separate paper.
Next, I check whether I am the right person to review the paper. I think it is a good principle to describe my disciplinary background and expertise in relation to the manuscript I am invited to review. Reviewers do not need to be experts in all respects. If you do not have useful expertise to improve the paper, politely decline.

Then I check whether I know the author(s). If I do, and I have not collaborated with them, am not currently collaborating with them, and am not planning to do so, I describe how I know the author(s) and ask the editor whether it is appropriate for me to review the paper. If I have a conflict of interest, I notify the editor and politely decline. It is a good principle to let the editor know immediately if you are unable to review a paper, so the editor can start to look for someone else to review it. Your non-response means a delay for the authors and the editor.

Sometimes I get requests to review a paper that I have reviewed before, for a conference or another journal. In these cases I let the editor know and ask whether she would like to see the previous review. For the editor it will be useful to know whether the current manuscript is the same as the version I reviewed before, or includes revisions.

Finally, I check whether the authors have made the data and code available. I have made it a requirement that authors have to fulfil before I accept an invitation to review their work. An exception can be made for data that would be illegal or dangerous to make available, such as datasets that contain identifying information that cannot be removed. In most cases, however, the authors can provide at least partial access to the data by excluding variables that contain personal information.

A paper that does not provide access to the data analyzed and the code used to produce the results is not worth my time. If the paper does not provide a link to the data and the analysis script, I ask the editor to ask the authors to provide them. I encourage you to do the same. Almost always the editor is willing to ask the authors to provide access. If the editor does not respond to the request, that is a red flag to me: I decline future invitations from that journal. If the authors do not respond to the editor’s request, or are unwilling to provide access to the data and code, that is a red flag for the editor.

The tone of the review
When I write a review, I think of the ‘golden rule’: treat others as you would like to be treated. I write the review report that I would have liked to receive if I had been the author. I use the following principles:

  • Be honest but constructive. You are not at war. There is no need to burn a paper to the ground.
  • Avoid addressing the authors personally. Say: “the paper could benefit from…” instead of “the authors need”.
  • Stay close to the facts. Do not speculate about reasons why the authors have made certain choices beyond the arguments stated in the paper.
  • Take a developmental approach. Any paper will contain flaws and imperfections. Your job is to improve science by identifying problems and suggesting ways to repair them. Think with the authors about ways they can improve the paper in such a way that it benefits collective scholarship. After a quick glance at the paper, I determine whether I think the paper has the potential to be published, perhaps after revisions. If I think the paper is beyond repair, I explain this to the editor.
  • Try to see beyond bad writing style and mistakes in spelling. Also be mindful of disciplinary and cultural differences between the authors and yourself.

The substance of the advice
In my view, it is a good principle to begin the review report by describing your expertise and the way you reviewed the paper. If you searched the literature, checked the data and verified the results, or ran additional analyses, state this. It will allow the editor to assess the review.

Then give a brief overview of the paper. If the invitation asks you to provide a general recommendation, consider whether you’d like to give one. Typically, you are invited to recommend ‘reject’, ‘revise & resubmit’ (with major or minor revisions), or ‘accept’. Because the recommendation is the first thing the editor wants to know, it is convenient to state it early in the review.

When giving such a recommendation, I start from the assumption that the authors have invested a great deal of time in the paper and that they want to improve it. Also I consider the desk-rejection rate at the journal. If the editor sent the paper out for review, she probably thinks it has the potential to be published.

To get to the general recommendation, I list the strengths and the weaknesses of the paper. To soften the message you can use the sandwich principle: start with the strengths, then discuss the weaknesses, and conclude with an encouragement.

For authors and editors alike it is convenient to give actionable advice. For the weaknesses in the paper I suggest ways to repair them. I distinguish major issues such as not discussing alternative explanations from minor issues such as missing references and typos. It is convenient for both the editor and the authors to number your suggestions.

The strengths could be points that the authors are underselling. In that case, I identify them as strengths that the authors can emphasize more strongly.

It is handy to refer to issues with direct quotes and page numbers. To refer to the previous sentence: “As the paper states on page 3, [use] ‘direct quotes and page numbers’.”

In 2016 I started to sign my reviews. This is an accountability device: by revealing who I am to the authors of the paper I’m reviewing, I set higher standards for myself. I encourage you to think about this as an option, though I can imagine that you may not want to risk retribution as a graduate student or an early career researcher. Also, some editors do not appreciate signed reviews and may remove your identifying information.

How to organize the review work
Usually, I read a paper twice. First, I go over the paper superficially and quickly, without reading it closely. This gives me a sense of where the authors are going. After the first superficial reading, I determine whether the paper is good enough to be revised and resubmitted, and if so, I provide more detailed comments. After the report is done, I revisit my initial recommendation.

The second time I go over the paper, I do a very close reading. Because the authors had a word limit, I assume that every word in the manuscript is necessary; the paper should contain no repetitions. Some of the information may be in the supplementary materials provided with the paper.

Below you find a checklist of things I look for in a paper. The checklist reflects the kind of research that I tend to review, which is typically testing a set of hypotheses based on theory and previous research with data from surveys, experiments, or archival sources. For other types of research – such as non-empirical papers, exploratory reports, and studies based on interviews or ethnographic material – the checklist is less appropriate. The checklist may also be helpful for authors preparing research reports.

I realize that this is an extensive set of criteria for reviews. It sets the bar pretty high. A review checking each of the criteria will take you at least three hours, but more likely between five and eight hours. As a reviewer, I do not always check all criteria myself. Some of the criteria do not necessarily have to be checked by peer reviewers. For instance, some journals employ data editors who check whether the data and code provided by authors produce the results reported.

I do hope that journals and editors can get to a consensus on a set of minimum criteria that the peer review process should cover, or at least provide clarity about the criteria that they do check.

After the review
If the authors have revised their paper, it is a good principle to avoid making new demands for the second round that you have not made before. Otherwise the revise and resubmit path can be very long.


References

Alexander, G.R. (2005). A Guide to Reviewing Manuscripts. Maternal and Child Health Journal, 9 (1): 113-117. https://doi.org/10.1007/s10995-005-2423-y
Committee on Publication Ethics Council (2017). Ethical guidelines for peer reviewers. https://publicationethics.org/files/Ethical_Guidelines_For_Peer_Reviewers_2.pdf
Lee, A.S. (1995). Reviewing a manuscript for publication. Journal of Operations Management, 13: 87-92. https://doi.org/10.1016/0272-6963(95)94762-W


Review checklist for hypothesis testing reports

Research question

  1. Is it clear from the beginning what the research question is? If it is in the title, that’s good. The first part of the abstract is good too. Is it at the end of the introduction section? In most cases that is too late.
  2. Is it clearly formulated? By the research question alone, can you tell what the paper is about?
  3. Does the research question align with what the paper actually does – or can do – to answer it?
  4. Is it important for theory and methods to know the answer to the research question?
  5. Does the paper address a question that is important from a societal or practical point of view?


Research design

  1. Does the research design align with the research question? If the question is descriptive, do the data actually allow for a representative and valid description? If the question is a causal question, do the data allow for causal inference? If not, ask the authors to report ‘associations’ rather than ‘effects’.
  2. Is the research design clearly described? Does the paper report all the steps taken to collect the data?
  3. Does the paper identify mediators of the alleged effect? Does the paper identify moderators as boundary conditions?
  4. Is the research design airtight, or does the study allow for alternative interpretations?
  5. Has the research design been preregistered? Does the paper refer to a public URL where the preregistration is posted? Does the preregistration include a statistical power analysis? Is the number of observations sufficient for statistical tests of hypotheses? Are deviations from the preregistered design reported? (A sketch of a simple power analysis follows this list.)
  6. Has the experiment been approved by an Institutional Review Board or Ethics Review Board (IRB/ERB)? What is the IRB registration number?
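On item 5: as a minimal sketch of an a priori power analysis, assuming purely for illustration a small-to-moderate effect (Cohen’s d = 0.3) in a two-group comparison, base R provides power.t.test:

```r
# Minimal sketch of an a priori power analysis in base R (stats package).
# The effect size, alpha, and power below are illustrative assumptions.
power.t.test(delta = 0.3,       # expected difference, in standard deviation units
             sd = 1,
             sig.level = 0.05,  # two-sided alpha
             power = 0.80,      # desired statistical power
             type = "two.sample")
# The output reports n: the required number of observations per condition
# (here about 175 per group).
```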



Theory

  1. Does the paper identify multiple relevant theories?
  2. Does the theory section specify hypotheses? Have the hypotheses been formulated before the data were collected? Before the data were analyzed?
  3. Do hypotheses specify arguments why two variables are associated? Have alternative arguments been considered?
  4. Is the literature review complete? Does the paper cover the most relevant previous studies, also outside the discipline? Provide references to research that is not covered in the paper, but should definitely be cited.


Data & Methods

  1. Target group – Is it identified? If the target group is humanity as a whole, is the sample a good sample of it? Does it cover all relevant units?
  2. Sample – Does the paper identify the procedure used to obtain the sample from the target group? Is the sample a random sample? If not, has selective non-response been dealt with, examined, and have constraints on generality been identified as a limitation?
  3. Number of observations – What is the statistical power of the analysis? Does the paper report a power analysis?
  4. Measures – Does the paper provide the complete topic list, questionnaire, instructions for participants? To what extent are the measures used valid? Reliable?
  5. Descriptive statistics – Does the paper provide a table of descriptive statistics (minimum, maximum, mean, standard deviation, number of observations) for all variables in the analyses? If not, ask for such a table. (A sketch of such a table follows this list.)
  6. Outliers – Does the paper identify treatment of outliers, if any?
  7. Is the multi-level structure (e.g., persons in time and space) identified and taken into account in an appropriate manner in the analysis? Are standard errors clustered?
  8. Does the paper report statistical mediation analyses for all hypothesized explanation(s)? Do the mediation analyses evaluate multiple pathways, or just one?
  9. Do the data allow for testing additional explanations that are not reported in the paper?
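On item 5: a minimal sketch of such a descriptive statistics table in base R, assuming a hypothetical data file and variable names (treatment coded 0/1):

```r
# Minimal sketch: descriptive statistics for all variables in the analyses.
# The file name and variable names are hypothetical placeholders.
dat  <- read.csv("experiment_shared.csv")
vars <- c("amount_donated", "age", "treatment")

descriptives <- data.frame(
  variable = vars,
  minimum  = sapply(dat[vars], min,  na.rm = TRUE),
  maximum  = sapply(dat[vars], max,  na.rm = TRUE),
  mean     = sapply(dat[vars], mean, na.rm = TRUE),
  sd       = sapply(dat[vars], sd,   na.rm = TRUE),
  n        = sapply(dat[vars], function(x) sum(!is.na(x)))
)
print(descriptives, row.names = FALSE)
```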



Results

  1. Can the results be reproduced from the data and code provided by the authors?
  2. Are the results robust to different specifications?


Conclusions and discussion

  1. Does the paper give a clear answer to the research question posed in the introduction?
  2. Does the paper identify implications for the theories tested, and are they justified?
  3. Does the paper identify implications for practice, and are they justified given the evidence presented?



Limitations

  1. Does the paper revisit the limitations of the data and methods?
  2. Does the paper suggest future research to repair the limitations?



Reporting

  1. Does the paper have an author contribution note? Is it clear who did what?
  2. Are all analyses reported, if they are not in the main text, are they available in an online appendix?
  3. Are references up to date? Does the reference list include a reference to the dataset analyzed, including an URL/DOI?



* This work is licensed under a Creative Commons Attribution 4.0 International License. Thanks to colleagues at the Center for Philanthropic Studies at Vrije Universiteit Amsterdam, in particular Pamala Wiepking, Arjen de Wit, Theo Schuyt and Claire van Teunenbroek, for insightful comments on the first version. Thanks to Robin Banks, Pat Danahey Janin, Rense Corten, David Reinstein, Eleanor Brilliant, Claire Routley, Margaret Harris, Brenda Bushouse, Craig Furneaux, Angela Eikenberry, Jennifer Dodge, and Tracey Coule for responses to the second draft. The current text is the fourth draft. The most recent version of this paper is available as a preprint at https://doi.org/10.31219/osf.io/7ug4w. Suggestions continue to be welcome at r.bekkers@vu.nl.


The Work & Worries of a Webinar

Can everyone hear me? Does my hair look OK? What does the audience think about what I just said? Did I answer the most important questions? Some of these worries are the same in the Webinar Age as they were for an old-style pre-COVID-19 in-person conference presentation, but many are new. In a webinar setting it is very difficult to get cues from the audience. Solution: organize an honest feedback channel, separate from your audience.

This is just one of the things we have learned at the Center for Philanthropic Studies at the Vrije Universiteit Amsterdam from transforming an in-person conference into an online webinar. The day before yesterday we organized our Giving in the Netherlands conference entirely online. We had planned this conference as an in-person event for 260 participants – the maximum capacity of the room that a sponsor kindly offered to us. We were fully booked. Registration was free, with a €50 penalty for late cancellations.


Then the ‘intelligent lockdown’ and physical distancing measures imposed by the government in the Netherlands made it impossible to hold the conference as planned. After checking various presentation platforms, we decided to move the conference online, using Zoom. We reworked the program and made it shorter. We removed the opening reception, the break, and the drinks afterward. We first did three plenary presentations, and then a panel discussion. The total length of the program was 90 minutes.

We pre-recorded two of the three presentations (using Loom) so we could broadcast them in a Zoom session. This worked well, though it was a lot of work to create good quality sound and a ‘talking head’ image in the presentations. We have learned a lot about audio feedback loops, natural light effects, and the importance of a neutral background for presentations.

In the preparations for the symposium, I also benefited from my experience moderating the opening plenary at the ARNOVA conference last year. In our online format, instead of having volunteers going around the room, I gave the audience the opportunity to pose questions through a separate online channel, www.menti.com. The online format even had an advantage compared to the hotel ballroom stage setting. During the interview I was able to keep an eye on the questions channel, and I could secretly look at my phone as colleagues sent me texts and emails identifying the key questions as they came in. As a result, the discussion went smoothly, and the audience was engaged. After the one-way research presentations, the panel discussion was a lively change of scene. I interviewed three sector leaders in the Netherlands about COVID-19 effects, and again presented questions from the audience.

Overall, this was a good experience for us, proving that it is possible to do a traditional symposium in an online setting. We also learned that it was a lot of work. You need new audiovisual skills that you don’t learn in graduate school.

You need a team of people working behind the scenes to make it work. We had a moderator, Barry Hoolwerf, introducing the house rules, broadcasting the pre-recorded presentations, and giving the floor to the live speakers – unmuting their microphones and allowing their video to be visible on screen. We had two people, Arjen de Wit and Claire van Teunenbroek, monitoring the questions channel, selecting the most important ones.

Finally, we learned how important it is to test, learn, and adapt. We tested the presentations by giving a smaller audience a ‘sneak preview’, and learned about technical issues. The test was additional work, but worth it because it took away most of our worries.

You can watch the presentations (in Dutch) here: https://www.geveninnederland.nl/presentatie-geven-in-nederland-2020/. If you’re interested in the book you can download it here: https://www.geveninnederland.nl/publicatie-geven-in-nederland-2020/. A visual summary of the book in English is here: https://renebekkers.files.wordpress.com/2020/04/giving-in-the-netherlands-2020-summary.pdf


Cut the crap, fund the research

We all spend way too much time preparing applications for research grants. This is a collective waste of time. For the 2019 vici grant scheme of the Netherlands Organization for Scientific Research (NWO) in which I recently participated, 87% of all applicants received no grant. Based on my own experiences, I made a conservative calculation (here is the excel file so you can check it yourself) of the total costs for all people involved. The costs total €18.7 million. Imagine how much research time that is worth!


Applicants account for the bulk of the costs. Taken together, all applicants invested €15.8 million in the grant competition. As an applicant, I read the call for proposals, first considered whether or not I would apply, and decided yes. I read the guidelines for applications, discussed ideas with colleagues, read the literature, wrote a short draft of the proposal to invite research partners, then wrote the proposal text, formatted the application according to the guidelines, prepared a budget for approval, collected some new data and analyzed it, considered whether ethics review was necessary, created a data management plan, and corresponded with grants advisors, a budget controller, HR advisors, internal reviewers, my head of department, the dean, a coach, and societal partners. I revised the application, revised the budget, and submitted the preproposal. I waited. And waited. Then I read the preproposal evaluation by the committee members and wrote responses to it. I revised my draft application again and submitted the full application. I waited. And waited. I read the external reviews, wrote responses to their comments, and submitted a rebuttal. I waited. And waited. Then I prepared a 5-minute pitch for the interview by the committee, responded to questions, and waited. Imagine I had spent all that time on actual research. Each applicant could have spent 971 hours on research instead.
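As a rough back-of-the-envelope illustration of the order of magnitude (this is not the calculation in the linked Excel file; the hourly rate below is an assumption for illustration only), the applicants’ side alone runs into the millions:

```r
# Back-of-the-envelope sketch of the applicants' side of the cost calculation.
# The hourly rate is an illustrative assumption, not a figure from this post
# or from the linked Excel file.
applicants      <- 242   # number of submitted proposals (see the note below)
hours_each      <- 971   # hours per applicant, as estimated above
hourly_rate_eur <- 67    # assumed fully loaded cost per hour

total_hours <- applicants * hours_each
total_cost  <- total_hours * hourly_rate_eur

total_hours                  # about 235,000 hours
round(total_cost / 1e6, 1)   # on the order of 15-16 million euro
```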

The university support system also spends a lot of resources on preparing budgets, internal reviews, and training for candidates. I involved research partners and societal partners to support the proposal. I feel bad for wasting their time as well.

The procedure also puts a burden on external reviewers. At a conference I attended, one of the reviewers of my application identified herself and asked me what had happened with the review she had provided. She had not heard back from the grant agency. I told her that she was not the only one who had given an A+ evaluation, but that NWO had overruled it in its procedures.

For the entire vici competition, €46.5 million was available, for 32 grants to be awarded. The €18.7 million wasted is 40% of that amount! That is unacceptable.

It is time to stop wasting our time.


Note: In a previous version of this post, I assumed that the number of applicants was 100. This estimate was much too low. The grant competition website says that across all domains 242 proposals were submitted. I revised the cost calculation (v2) to reflect the actual number of applicants. Note that this calculation leaves out hours spent by researchers who eventually decided not to submit a (pre-)proposal. The calculation further assumes that 180 full proposals were submitted and 105 candidates were interviewed.

Update, February 26: In the previous version, the cost of the procedure for NWO was severely underestimated. According to the annual report of NWO, the total salary cost for the staff that handles grant applications is €72 million per year. In the revised cost calculation, I’m assuming that NWO staff costs for the entire vici competition amount to €218k. This amount consists of €198k in variable costs (checking applications, inviting reviewers, composing decision letters, informing applicants, informing reviewers, handling appeals by 10% of full proposals, and handling ‘WOB verzoeken’ = Freedom of Information Act requests) and €20k in fixed costs (preparing the call for proposals, organizing committee meetings to discuss applications and their evaluations, attending committee meetings, reporting on committee meetings, and evaluating the procedure).


Revolutionizing Philanthropy Research Webinar

January 30, 11am-12pm (EST) / 5-6pm (CET) / 9-10pm (IST)

Why do people give to the benefit of others – or keep their resources to themselves? What is the core evidence on giving that holds across cultures? How does giving vary between cultures? How has the field of research on giving changed in the past decades?

10 years after the publication of “A Literature Review of Empirical Studies of Philanthropy: Eight Mechanisms that Drive Charitable Giving” in Nonprofit and Voluntary Sector Quarterly, it is time for an even more comprehensive effort to review the evidence base on giving. We envision an ambitious approach, using the most innovative tools and data science algorithms available to visualize the structure of research networks, identify theoretical foundations and provide a critical assessment of previous research.

We are inviting you to join this exciting endeavor in an open, global, cross-disciplinary collaboration. All expertise is very much welcome – from any discipline, country, or methodology. The webinar consists of four parts:

  1. Welcome: by moderator Pamala Wiepking, Lilly Family School of Philanthropy and VU Amsterdam;
  2. The strategy for collecting research evidence on giving from publications: by Ji Ma, University of Texas;
  3. Tools we plan to use for the analyses: by René Bekkers, Vrije Universiteit Amsterdam;
  4. The project structure, and opportunities to participate: by Pamala Wiepking.

The webinar is interactive. You can provide comments and feedback during each presentation. After each presentation, the moderator selects key questions for discussion.

We ask you to please register for the webinar here: https://iu.zoom.us/webinar/register/WN_faEQe2UtQAq3JldcokFU3g.

Registration is free. After you register, you will receive an automated message that includes a URL for the webinar, as well as international calling numbers. In addition, a recording of the webinar will be available soon after on the Open Science Framework Project page: https://osf.io/46e8x/

Please feel free to share with everyone who may be interested, and do let us know if you have any questions or suggestions at this stage.

We look forward to hopefully seeing you on January 30!

You can register at https://iu.zoom.us/webinar/register/WN_faEQe2UtQAq3JldcokFU3g

René Bekkers, Ji Ma, Pamala Wiepking, Arjen de Wit, and Sasha Zarins


The Magic of Science

Dinosaurs are like magic. They capture our attention because of their size and sharp teeth. The fact that they are no longer among us may also contribute to their popularity. In science, we still have dinosaurs. They date back to the prehistoric age, when scientists could build careers on undisclosed data and procedures. But we have entered the new age of open science, with comets and earthquakes causing dark clouds in the sky and blocking our view of the sun.


In the prehistoric age, a lot of science was like magic. The wizard waved his wand, and…. poof: there was the result that only the wizard could reproduce. If nobody can repeat your trick, it’s not science. When you dig up old research, you are stuck with a lot of ‘magic’. Make sure you can detect it.


Unlike real magic, the tricks of illusionists are highly reproducible. It may take some time to learn tricks and you will need the appropriate equipment, but if you know the secret recipe, you can dress up like a magician, and perform the very same act you could not figure out when you were in the audience.

Needless to say, it is our collective responsibility to disclose all the tricks and equipment we use in our research. Here’s a list of things we can do to make this happen.


A Conversation About Data Transparency

The integrity of the research process serves as the foundation for excellence in research on nonprofit and voluntary action. While transparency does not guarantee credibility, it guarantees that you will get the credibility you deserve. Therefore we are developing criteria for transparency standards with regard to the reporting of methods and data.

We started this important conversation at the 48th ARNOVA Conference in San Diego, on Friday, November 22, 2019. In the session, we held a workshop to survey which characteristics of data and methods transparency help reviewers evaluate research and help researchers use past work as building blocks for future research.

This session was well attended and very interactive. After a short introduction by the editors of NVSQ, the leading journal in the field, we split up into three groups of researchers working with the same type of data: one group for data from interviews, one for survey data, and one for administrative data such as 990s. In each group we first took 10 minutes for ourselves, formulating criteria for transparency that allow readers to assess the quality of research. All participants received colored sticky notes and wrote down one idea per note: laudable indicators on green notes, and bad signals on red notes.


Next, we put the notes on the wall and grouped them. Each cluster received a name on a yellow note. Finally, we shared the results of the small group sessions with the larger group.


Though the different types of data to some extent have their own quality indicators, there were striking parallels in the criteria concerning the match between theory and research design, ethics, sampling, measures, analysis, coding, interpretation, and the write-up of results. After the workshop, we collected the notes, and I summarized the results in a report about the workshop. In a nutshell, all groups distinguished five clusters of criteria:

  • A. Meta-criteria: transparency about the research process and the data collection in particular;
  • B. Before data collection: research design and sampling;
  • C. Characteristics of the data as presented: response, reliability, validity;
  • D. Decisions about data collected: analysis and causal inference;
  • E. Write-up: interpretation of and confidence in results presented.


Here is the full report about the workshop. Do you have suggestions about the report? Let me know!


Found: a student assistant for Giving in the Netherlands 2020

The Center for Philanthropic Studies at the Faculty of Social Sciences of the Vrije Universiteit Amsterdam is the center of expertise for research on philanthropy in the Netherlands. The center addresses questions such as: Why do people voluntarily give money to charitable causes? Why do people volunteer? How much money circulates in the philanthropic sector? For the Giving in the Netherlands study, the center has found a student assistant: Florian van Heijningen. Welcome!


Research on giving in the Netherlands continues, funding secured

We are pleased to announce that the Center for Philanthropic Studies has been able to secure funding for continued research on giving in the Netherlands. The funding enables data collection for the Giving in the Netherlands Panel Survey among households, as well as data collection on corporations, foundations, charity lotteries, and bequests.

In the past 20 years, Giving in the Netherlands has been the prime source of data on trends in the size and composition of philanthropy in the Netherlands. Continuation of the research was uncertain for more than a year because the Ministry of Justice and Security withdrew 50% of its funding, calling upon the philanthropic sector to co-fund the research. In an ongoing dialogue with the philanthropic sector, the VU Center sought stronger alignment of the research with the needs of practice. The Center has organized round table discussions, and an advisory group of experts from the sector has been composed. The Center will use the insights from this dialogue in the research.

Meanwhile, the fieldwork has started. Preliminary estimates of giving in the Netherlands will be discussed at a symposium for members of branch organizations in the philanthropic sector in the fall of 2019. Full publication of the results is scheduled for mid-April 2020, at the National Day of Philanthropy.



The largest charitable organizations receive less in donations

The ‘sector survey’ by Goede Doelen Nederland of the 24 largest charitable organizations has been published again: https://www.goededoelennederland.nl/system/files/public/Sector/190726%20Overzicht%20cijfers%20grote%20goede%20doelen%20met%20meer%20dan%2020%20miljoen.pdf

In the press release, under the optimistic headline “Societal engagement with charities remains as strong as ever” (“Maatschappelijke betrokkenheid bij goede doelen onveranderd groot”), we read “that the results in 2018 are virtually the same as in 2017.” I see something else. Three warning signs:

  1. Generosity in the Netherlands is declining. In 2018 the largest charities in the Netherlands received €571.6 million in donations from individuals and corporations. In 2017 this was still €583.6 million: a decline of 2%. The decline in donations from individuals was almost 4%. At the same time, inflation in the Netherlands was 1.7%, so the euros received have also become worth less.
  2. Increases in income from government grants (accounting for 35% of these charities’ income), bequests (12% of income), and the charity lotteries (now 10% of income) have cushioned the decline in giving by individuals and corporations, but have not made up for it.
  3. The largest charities in the Netherlands spend exactly zero euros (= €0) on education and training.

Update, September 26, 2019: de Volkskrant has also analyzed the annual reports of the largest charitable organizations and reaches similar conclusions. See here for the article that appeared on the front page and here for the background article. Inquiries with the CBF reveal that the zero euros spent on education and training does not refer to the training of their own staff, but to the education of target groups; the largest charitable organizations are thus not active in the field of education.
