Category Archives: survey research

How to review a paper

Including a Checklist for Hypothesis Testing Research Reports *

See https://osf.io/6cw7b/ for a pdf of this post

 

Academia critically relies on our efforts as peer reviewers to evaluate the quality of research that is published in journals. Reading the reviews of others, I have noticed that the quality varies considerably, and that some reviews are not helpful. The added value of a journal article above and beyond the original manuscript or a non-reviewed preprint lies in the changes the authors made in response to the reviews. Through our reviews, we can help to improve the quality of the research. This memo provides guidance on how to review a paper, partly inspired by suggestions from Alexander (2005), Lee (1995) and the Committee on Publication Ethics (2017). To improve the quality of the peer review process, I suggest that you use the following guidelines. Some of the guidelines – particularly the criteria at the end of this post – are specific to the kind of research that I tend to review: hypothesis testing research reports relying on administrative data and surveys, sometimes with an experimental design. But let me start with guidelines that I believe make sense for all research.

Things to check before you accept the invitation
First, I encourage you to check whether the journal aligns with your vision of science. I find that a journal published by an exploitative publisher with a profit margin in the range of 30-40% is not worth my time. A journal to which I have submitted my own work and that gave me good reviews is worth the number of reviews I received for my article. A review of a revised version of the same paper does not count as a separate review.
Next, I check whether I am the right person to review the paper. I think it is a good principle to describe my disciplinary background and expertise in relation to the manuscript I am invited to review. Reviewers do not need to be experts in all respects. If you do not have useful expertise to improve the paper, politely decline.

Then I check whether I know the author(s). If I do, and I have not collaborated with the author(s) and am not currently collaborating or planning to do so, I describe how I know the author(s) and ask the editor whether it is appropriate for me to review the paper. If I have a conflict of interest, I notify the editor and politely decline. It is a good principle to let the editor know immediately if you are unable to review a paper, so the editor can start looking for someone else. Your non-response means a delay for the authors and the editor.

Sometimes I get requests to review a paper that I have reviewed before, for a conference or another journal. In these cases I let the editor know and ask whether she would like to see the previous review. For the editor it is useful to know whether the current manuscript is the same as the version I reviewed earlier, or includes revisions.

Finally, I check whether the authors have made the data and code available. I have made this a requirement that authors must fulfil before I accept an invitation to review their work. An exception can be made for data that would be illegal or dangerous to make available, such as datasets that contain identifying information that cannot be removed. In most cases, however, the authors can provide at least partial access to the data by excluding variables that contain personal information.

A paper that does not provide access to the data analyzed and the code used to produce the results is not worth my time. If the paper does not provide a link to the data and the analysis script, I ask the editor to ask the authors to provide them. I encourage you to do the same. Almost always the editor is willing to ask the authors to provide access. If the editor does not respond to your request, that is a red flag to me: I decline future invitations from the journal. If the authors do not respond to the editor’s request, or are unwilling to provide access to the data and code, that is a red flag for the editor.

The tone of the review
When I write a review, I think of the ‘golden rule’: treat others as you would like to be treated. I write the review report that I would have liked to receive if I had been the author. I use the following principles:

  • Be honest but constructive. You are not at war. There is no need to burn a paper to the ground.
  • Avoid addressing the authors personally. Say “the paper could benefit from…” instead of “the authors need to…”.
  • Stay close to the facts. Do not speculate about reasons why the authors have made certain choices beyond the arguments stated in the paper.
  • Take a developmental approach. Any paper will contain flaws and imperfections. Your job is to improve science by identifying problems and suggesting ways to repair them. Think with the authors about ways they can improve the paper in such a way that it benefits collective scholarship. After a quick glance at the paper, I determine whether I think the paper has the potential to be published, perhaps after revisions. If I think the paper is beyond repair, I explain this to the editor.
  • Try to see beyond bad writing style and mistakes in spelling. Also be mindful of disciplinary and cultural differences between the authors and yourself.

The substance of the advice
In my view, it is a good principle to begin the review report by describing your expertise and the way you reviewed the paper. If you searched for literature, checked the data, verified the results, or ran additional analyses, state this. It will allow the editor to weigh your review.

Then give a brief overview of the paper. If the invitation asks you to provide a general recommendation, consider whether you’d like to give one. Typically, you are invited to recommend ‘reject’, ‘revise & resubmit’ – with major or minor revisions – or ‘accept’. Because the recommendation is the first thing the editor wants to know, it is convenient to state it early in the review.

When giving such a recommendation, I start from the assumption that the authors have invested a great deal of time in the paper and that they want to improve it. I also consider the desk-rejection rate at the journal: if the editor sent the paper out for review, she probably thinks it has the potential to be published.

To get to the general recommendation, I list the strengths and the weaknesses of the paper. To soften the message, you can use the sandwich principle: start with the strengths, then discuss the weaknesses, and conclude with an encouragement.

For authors and editors alike it is convenient to give actionable advice. For the weaknesses in the paper I suggest ways to repair them. I distinguish major issues such as not discussing alternative explanations from minor issues such as missing references and typos. It is convenient for both the editor and the authors to number your suggestions.

The strengths could be points that the authors are underselling. In that case, I identify them as strengths that the authors can emphasize more strongly.

It is handy to refer to issues with direct quotes and page numbers. To refer to the previous sentence: “As the paper states on page 3, [use] ‘direct quotes and page numbers’.”

In 2016, I started signing my reviews. This is an accountability device: by exposing who I am to the authors of the paper I’m reviewing, I set higher standards for myself. I encourage you to consider this as an option, though I can imagine that you may not want to risk retribution as a graduate student or an early career researcher. Also, some editors do not appreciate signed reviews and may remove your identifying information.

How to organize the review work
Usually, I read a paper twice. First, I go over the paper superficially and quickly, without reading it closely. This gives me a sense of where the authors are going. After this first superficial reading, I determine whether the paper is good enough to be revised and resubmitted, and if so, I provide more detailed comments. After the report is done, I revisit my initial recommendation.

The second time I go over the paper, I do a very close reading. Because the authors had a word limit, I assume that literally every word in the manuscript is absolutely necessary – the paper should have no repetitions. Some of the information may be in the supplementary information provided with the paper.

Below you find a checklist of things I look for in a paper. The checklist reflects the kind of research that I tend to review, which is typically testing a set of hypotheses based on theory and previous research with data from surveys, experiments, or archival sources. For other types of research – such as non-empirical papers, exploratory reports, and studies based on interviews or ethnographic material – the checklist is less appropriate. The checklist may also be helpful for authors preparing research reports.

I realize that this is an extensive set of criteria for reviews. It sets the bar pretty high. A review checking each of the criteria will take you at least three hours, but more likely between five and eight. As a reviewer, I do not always check all criteria myself, and some of the checks do not necessarily have to be done by peer reviewers. For instance, some journals employ data editors who check whether the data and code provided by authors produce the results reported.

I do hope that journals and editors can get to a consensus on a set of minimum criteria that the peer review process should cover, or at least provide clarity about the criteria that they do check.

After the review
If the authors have revised their paper, it is a good principle to avoid making new demands for the second round that you have not made before. Otherwise the revise and resubmit path can be very long.

 

References
Alexander, G.R. (2005). A Guide to Reviewing Manuscripts. Maternal and Child Health Journal, 9 (1): 113-117. https://doi.org/10.1007/s10995-005-2423-y
Committee on Publication Ethics Council (2017). Ethical guidelines for peer reviewers. https://publicationethics.org/files/Ethical_Guidelines_For_Peer_Reviewers_2.pdf
Lee, A.S. (1995). Reviewing a manuscript for publication. Journal of Operations Management, 13: 87-92. https://doi.org/10.1016/0272-6963(95)94762-W

 

Review checklist for hypothesis testing reports

Research question

  1. Is it clear from the beginning what the research question is? If it is in the title, that’s good. The first part of the abstract is a good place too. Is it at the end of the introduction section? In most cases, that is too late.
  2. Is it clearly formulated? By the research question alone, can you tell what the paper is about?
  3. Does the research question align with what the paper actually does – or can do – to answer it?
  4. Is the answer to the research question important for existing theory and methods?
  5. Does the paper address a question that is important from a societal or practical point of view?

 

Research design

  1. Does the research design align with the research question? If the question is descriptive, do the data actually allow for a representative and valid description? If the question is a causal question, do the data allow for causal inference? If not, ask the authors to report ‘associations’ rather than ‘effects’.
  2. Is the research design clearly described? Does the paper report all the steps taken to collect the data?
  3. Does the paper identify mediators of the alleged effect? Does the paper identify moderators as boundary conditions?
  4. Is the research design watertight? Does the study rule out alternative interpretations?
  5. Has the research design been preregistered? Does the paper refer to a public URL where the preregistration is posted? Does the preregistration include a statistical power analysis? Is the number of observations sufficient for statistical tests of hypotheses? (A sketch of a quick power check follows after this list.) Are deviations from the preregistered design reported?
  6. Has the experiment been approved by an Institutional or Ethics Review Board (IRB/ERB)? What is the IRB registration number?
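
A quick way to check the sample size against item 5 is to redo the power calculation yourself. The sketch below is a minimal example, not the authors’ method: it assumes a two-group comparison with a hypothetical effect size of d = 0.3 and the conventional alpha = .05 and power = .80, using Python’s statsmodels.

```python
# Minimal power check for a two-group comparison (independent-samples t-test).
# Assumed inputs: effect size d = 0.3, alpha = .05, desired power = .80.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.3, alpha=0.05, power=0.8)
print(f"Required observations per group: {n_per_group:.0f}")  # about 175; round up
```

If the paper reports far fewer observations than such a calculation requires, ask the authors to justify the sample size.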

 

Theory

  1. Does the paper identify multiple relevant theories?
  2. Does the theory section specify hypotheses? Have the hypotheses been formulated before the data were collected? Before the data were analyzed?
  3. Do hypotheses specify arguments why two variables are associated? Have alternative arguments been considered?
  4. Is the literature review complete? Does the paper cover the most relevant previous studies, including those outside the discipline? Provide references to research that is not covered in the paper but should definitely be cited.

 

Data & Methods

  1. Target group – Is it identified? If it is humanity at large, is the sample a good sample of humanity? Does it cover all relevant units?
  2. Sample – Does the paper identify the procedure used to obtain the sample from the target group? Is the sample a random sample? If not, has selective non-response been examined and dealt with, and have constraints on generality been identified as a limitation?
  3. Number of observations – What is the statistical power of the analysis? Does the paper report a power analysis?
  4. Measures – Does the paper provide the complete topic list, questionnaire, and instructions for participants? To what extent are the measures used valid? Reliable?
  5. Descriptive statistics – Does the paper provide a table of descriptive statistics (minimum, maximum, mean, standard deviation, number of observations) for all variables in the analyses? If not, ask for such a table. (A sketch of how to rebuild one from shared data follows after this list.)
  6. Outliers – Does the paper identify treatment of outliers, if any?
  7. Is the multi-level structure (e.g., persons in time and space) identified and taken into account in an appropriate manner in the analysis? Are standard errors clustered? (The sketch after this list includes an example.)
  8. Does the paper report statistical mediation analyses for all hypothesized explanation(s)? Do the mediation analyses evaluate multiple pathways, or just one?
  9. Do the data allow for testing additional explanations that are not reported in the paper?
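
When the authors share their data, items 5 and 7 take only a few lines to verify. The sketch below is a minimal illustration in Python with pandas and statsmodels; the file name and all variable names are hypothetical placeholders, not taken from any particular paper.

```python
# Rebuild the descriptive statistics table (item 5) from the shared data,
# then fit a model with standard errors clustered by region (item 7).
# 'shared_data.csv' and the variable names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("shared_data.csv")

desc = df[["giving", "income", "age"]].agg(["min", "max", "mean", "std", "count"]).T
desc.columns = ["Minimum", "Maximum", "Mean", "SD", "N"]
print(desc)

fit = smf.ols("giving ~ income + age", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["region"]}
)
print(fit.summary())
```

Comparing the rebuilt table with the one reported in the paper quickly reveals discrepancies in sample sizes or variable ranges.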

 

Results

  1. Can the results be reproduced from the data and code provided by the authors?
  2. Are the results robust to different specifications? (One way to probe this with shared data and code is sketched below.)
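
When data and code are shared, one way to probe robustness is to re-estimate the focal model under a few alternative specifications and compare the key coefficient. This is a minimal sketch, again with hypothetical file and variable names, assuming an OLS model in Python’s statsmodels; it is not any particular paper’s analysis.

```python
# Compare the focal coefficient on 'income' across alternative specifications.
# 'shared_data.csv' and the variable names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("shared_data.csv")
specifications = [
    "giving ~ income",                                 # bivariate
    "giving ~ income + age + education",               # with controls
    "giving ~ income + age + education + C(region)",   # with region fixed effects
]
for spec in specifications:
    fit = smf.ols(spec, data=df).fit()
    print(f"{spec}: income = {fit.params['income']:.3f} (SE {fit.bse['income']:.3f})")
```

If the focal coefficient changes sign or loses significance across reasonable specifications, ask the authors to discuss this sensitivity.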

Conclusion

  1. Does the paper give a clear answer to the research question posed in the introduction?
  2. Does the paper identify implications for the theories tested, and are they justified?
  3. Does the paper identify implications for practice, and are they justified given the evidence presented?

 

Discussion

  1. Does the paper revisit the limitations of the data and methods?
  2. Does the paper suggest future research to repair the limitations?

 

Meta

  1. Does the paper have an author contribution note? Is it clear who did what?
  2. Are all analyses reported? If they are not in the main text, are they available in an online appendix?
  3. Are references up to date? Does the reference list include a reference to the dataset analyzed, including a URL/DOI?

 

 

* This work is licensed under a Creative Commons Attribution 4.0 International License. Thanks to colleagues at the Center for Philanthropic Studies at Vrije Universiteit Amsterdam, in particular Pamala Wiepking, Arjen de Wit, Theo Schuyt and Claire van Teunenbroek, for insightful comments on the first version. Thanks to Robin Banks, Pat Danahey Janin, Rense Corten, David Reinstein, Eleanor Brilliant, Claire Routley, Margaret Harris, Brenda Bushouse, Craig Furneaux, Angela Eikenberry, Jennifer Dodge, and Tracey Coule for responses to the second draft. The current text is the fourth draft. The most recent version of this paper is available as a preprint at https://doi.org/10.31219/osf.io/7ug4w. Suggestions continue to be welcome at r.bekkers@vu.nl.


Revolutionizing Philanthropy Research Webinar

January 30, 11am-12pm (EST) / 5-6pm (CET) / 9-10pm (IST)

Why do people give to the benefit of others – or keep their resources to themselves? What is the core evidence on giving that holds across cultures? How does giving vary between cultures? How has the field of research on giving changed in the past decades?

Ten years after the publication of “A Literature Review of Empirical Studies of Philanthropy: Eight Mechanisms that Drive Charitable Giving” in Nonprofit and Voluntary Sector Quarterly, it is time for an even more comprehensive effort to review the evidence base on giving. We envision an ambitious approach, using the most innovative tools and data science algorithms available to visualize the structure of research networks, identify theoretical foundations, and provide a critical assessment of previous research.

We are inviting you to join this exciting endeavor in an open, global, cross-disciplinary collaboration. All expertise is very much welcome – from any discipline, country, or methodology. The webinar consists of four parts:

  1. Welcome: by moderator Pamala Wiepking, Lilly Family School of Philanthropy and VU Amsterdam;
  2. The strategy for collecting research evidence on giving from publications: by Ji Ma, University of Texas;
  3. Tools we plan to use for the analyses: by René Bekkers, Vrije Universiteit Amsterdam;
  4. The project structure, and opportunities to participate: by Pamala Wiepking.

The webinar is interactive. You can provide comments and feedback during each presentation. After each presentation, the moderator selects key questions for discussion.

Please register for the webinar here: https://iu.zoom.us/webinar/register/WN_faEQe2UtQAq3JldcokFU3g.

Registration is free. After you register, you will receive an automated message that includes a URL for the webinar, as well as international calling numbers. In addition, a recording of the webinar will be available soon after on the Open Science Framework Project page: https://osf.io/46e8x/

Please feel free to share with everyone who may be interested, and do let us know if you have any questions or suggestions at this stage.

We look forward to hopefully seeing you on January 30!


René Bekkers, Ji Ma, Pamala Wiepking, Arjen de Wit, and Sasha Zarins


A Conversation About Data Transparency

The integrity of the research process serves as the foundation for excellence in research on nonprofit and voluntary action. While transparency does not guarantee credibility, it guarantees you will get the credibility you deserve. Therefore, we are developing criteria for transparency standards with regard to the reporting of methods and data.

We started this important conversation at the 48th ARNOVA Conference in San Diego, on Friday, November 22, 2019. In the session, we held a workshop to survey which characteristics of data and methods transparency help reviewers assess research and help researchers use past work as building blocks for future research.

This session was well attended and very interactive. After a short introduction by the editors of NVSQ, the leading journal in the field, we split up into three groups of researchers who work with the same type of data: one group for data from interviews, one for survey data, and one for administrative data such as 990s. In each group, we first took 10 minutes for ourselves to formulate criteria for transparency that allow readers to assess the quality of research. All participants received colored sticky notes and wrote down one idea per note: laudable indicators on green notes, bad signals on red notes.


Next, we put the notes on the wall and grouped them. Each cluster received a name on a yellow note. Finally, we shared the results of the small group sessions with the larger group.


Though the different types of data have their own quality indicators to some extent, there were striking parallels in the criteria concerning the match between theory and research design, ethics, sampling, measures, analysis, coding, interpretation, and write-up of results. After the workshop, we collected the notes, and I summarized the results in a report about the workshop. In a nutshell, all groups distinguished five clusters of criteria:

  • A. Meta-criteria: transparency about the research process and the data collection in particular;
  • B. Before data collection: research design and sampling;
  • C. Characteristics of the data as presented: response, reliability, validity;
  • D. Decisions about data collected: analysis and causal inference;
  • E. Write-up: interpretation of and confidence in results presented.


Here is the full report about the workshop. Do you have suggestions about the report? Let me know!


Found: student assistant for Giving in the Netherlands 2020

The Philanthropic Studies research group at the Faculty of Social Sciences of Vrije Universiteit Amsterdam is the center of expertise for research on philanthropy in the Netherlands. The group works on questions such as: Why do people voluntarily give money to charitable causes? Why do people volunteer? How much money circulates in the philanthropic sector? For the Giving in the Netherlands study, the group has found a student assistant: Florian van Heijningen. Welcome!


Research on giving in the Netherlands continues, funding secured

We are pleased to announce that the Center for Philanthropic Studies has been able to secure funding for continued research on giving in the Netherlands. The funding enables data collection for the Giving in the Netherlands Panel Survey among households, as well as data collection on corporations, foundations, charity lotteries, and bequests.

In the past 20 years, Giving in the Netherlands has been the prime source of data on trends in the size and composition of philanthropy in the Netherlands. Continuation of the research was uncertain for more than a year because the ministry of Justice and Security withdrew 50% of its funding, calling upon the philanthropic sector to co-fund the research. In an ongoing dialogue with the philanthropic sector, the VU Center sought stronger alignment of the research with the research needs of practice. The Center organized round table discussions and composed an advisory group of experts from the sector, and it will use the insights from this dialogue in the research.

Meanwhile, the fieldwork has started. Preliminary estimates of giving in the Netherlands will be discussed at a symposium for members of branch organizations in the philanthropic sector in the fall of 2019. Full publication of the results is scheduled for mid-April 2020, at the National Day of Philanthropy.


Global Giving: Open Grant Proposal

Here’s an unusual thing for you to read: I am posting a brief description of a grant proposal that I will submit for the ‘vici’-competition of the Netherlands Organization for Scientific Research 2019 later this year. You can download the “pre-proposal” here. It is called “Global Giving”. With the study I aim to describe and explain philanthropy in a large number of countries across the world. I invite you to review the “pre-proposal” and suggest improvements; please use the comments box below, or write to me directly.

You may have heard the story that university researchers these days spend a lot of their time writing grant proposals for funding competitions. You may also have heard the story that the chances of success in such competitions are getting smaller and smaller. These stories are all true. But the story you seldom hear is how such competitions actually work: they are a source of stress, frustration, burnout and depression, and a complete waste of the precious time of the smartest people in the world. Recently, Gross and Bergstrom found that “the effort researchers waste in writing proposals may be comparable to the total scientific value of the research that the funding supports”.

Remember the last time you saw the announcement of prize winners in a research grant competition? I have not heard a single voice in the choir of the many near-winners speak up: “Hey, I did not get a grant!” It is almost as if everybody wins all the time. It is not common in academia to be open about failures to win. How many CVs have you seen recently that contain a list of failures? This is a grave distortion of reality. Fewer than one in ten applications is successful. This means that for each winning proposal there are at least nine that did not get funding. I want you to know how much time is wasted by this procedure. So here I will be sharing my experiences with the upcoming ‘vici’ competition.


First let me tell you about the funny name of the competition. The name ‘vici’ derives from the Roman general Julius Caesar’s famous phrase in Latin, ‘veni, vidi, vici’, which he allegedly used to describe a swift victory. The translation is: “I came, I saw, I conquered”. The Netherlands Organization for Scientific Research (‘Nederlandse organisatie voor Wetenschappelijk Onderzoek’, NWO) thought it fitting to use these words as the titles of its personal grant schemes. The so-called ‘talent schemes’ are very much about the personal qualities of the applicant. The scheme heralds heroes. The fascination with talent goes against the very nature of science, where the value of an idea, method or result is not measured by the personality of the author, but by its validity and reliability. That is why peer review is often double blind, and evaluators do not know who wrote the research report or proposal.


Yet in the talent scheme, the personality of the applicant is very important. The fascination with talent creates Matthew effects, first described in 1968 by Robert K. Merton. The name ‘Matthew effect’ derives from the biblical phrase “For to him who has will more be given” (Mark 4:25). Simply stated: success breeds success. Recently, this effect has been documented in the talent scheme by Thijs Bol, Matthijs de Vaan and Arnout van de Rijt. When two applicants are equally good but one – by mere chance – receives a grant and the other does not, the ‘winner’ is ascribed talent and the ‘loser’ is not. The ‘winner’ then has a tremendously higher chance of receiving future grants.

As a member of committees for the ‘veni’ competition I have seen how this works in practice. Applicants received scores for the quality of their proposal from expert reviewers before we interviewed them. When we had minimal differences between the expert reviewer scores of candidates – differing only in the second decimal – personal characteristics of the researchers such as their self-confidence and manner of speaking during the interview often made the difference between ‘winners’ and ‘losers’. Ultimately, such minute differences add up to dramatically higher chances to be a full professor 10 years later, as the analysis in Figure 4 of the Bol, De Vaan & Van de Rijt paper shows.

[Figure 4 from Bol, De Vaan & Van de Rijt: chances of becoming a full professor for narrow winners and near-miss applicants of an early career grant]

My career is in this graph. In 2005, I won a ‘veni’-grant, the early career grant that the Figure above is about. The grant gave me a lot of freedom for research and I enjoyed it tremendously. I am pretty certain that the freedom that the grant gave me paved the way for the full professorship that I was recently awarded, thirteen years later. But back then, the size of the grant did not feel right. I felt sorry for those who did not make it. I knew I was privileged, and the research money I obtained was more than I needed. It would be much better to reduce the size of grants, so that a larger number of researchers can be funded. Yet the scheme is there, and it is a rare opportunity for researchers in the Netherlands to get funding for their own ideas.

This is my third and final application for a vici-grant. The rules for submission of proposals in this competition limit the number of attempts to three. Why am I going public with this final attempt?

The Open Science Revolution

You will have heard about open science. Most likely you will associate it with the struggle to publish research articles without paywalls, the exploitation of government funded scientists by commercial publishers, and perhaps even with Plan S. You may also associate open science with the struggle to get researchers to publish the data and the code they used to get to their results. Perhaps you have heard about open peer review of research publications. But most likely you will not have heard about open grant review. This is because it rarely happens. I am not the first to publish my proposal; the Open Grants repository currently contains 160 grant proposals. These proposals were shared after the competitions had run. The RIO Journal published 52 grant proposals. This is only a fraction of all grant proposals being created, submitted and reviewed. The many advantages of open science are not limited to funded research, they also apply to research ideas and proposals. By publishing my grant proposal before the competition, the expert reviews, the recommendations of the committee, my responses and experiences with the review process, I am opening up the procedure of grant review as much as possible.

Stages in the NWO Talent Scheme Grant Review Procedure

Each round of this competition takes almost a year, and proceeds in eight stages:

  1. Pre-application – March 26, 2019 <– this is where we are now
  2. Non-binding advice from committee: submit full proposal, or not – Summer 2019
  3. Full proposal – end of August 2019
  4. Expert reviews – October 2019
  5. Rebuttal to criticism in expert reviews – end of October 2019
  6. Selection for interview – November 2019
  7. Interview – January or February 2020
  8. Grant, or not – March 2020

If you’re curious to learn how this application procedure works in practice, check back in a few weeks. Your comments and suggestions on the ideas above and the pre-proposal are most welcome!


The richer, the less generous?

Economists call something a necessity good when its consumption declines with income in relative terms. This clearly applies to giving to charitable causes. Households with higher incomes and more wealth give more to philanthropy in euros, but less as a share of their income and wealth. In the anniversary edition of Giving in the Netherlands (GIN), Arjen de Wit, Pamala Wiepking and I published a special in which we combined all data on donations from the years 2001-2015 and divided incomes into deciles (groups of 10%). We reduced the influence of outliers by winsorizing the top 1% of observations, that is, treating them as if they were just slightly lower. With the outliers included, the picture is not much different, by the way: the line still slopes downward, but less smoothly.
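
For readers who want to apply the same outlier treatment to their own data: winsorizing replaces the most extreme values with the nearest remaining value instead of dropping them. Below is a minimal sketch in Python with scipy; the donation amounts are made up, and the one-sided 1% limit mirrors the treatment described above.

```python
# Winsorize the top 1% of donation amounts: the highest 1% of values are
# set equal to the value at the 99th percentile instead of being dropped.
import numpy as np
from scipy.stats.mstats import winsorize

rng = np.random.default_rng(1)
donations = rng.lognormal(mean=3, sigma=1.5, size=1000)  # skewed, like giving data
capped = winsorize(donations, limits=[0, 0.01])          # no lower limit, top 1% capped
print(donations.max(), capped.max())                     # the maximum is pulled down
```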

[Figure: percentage of household income donated to charitable causes, by income decile, 2001-2015]

The percentage of income that households donate to charitable organizations declines systematically as income rises. The 10% of households with the lowest incomes in the Netherlands give 1.16% of their income to charitable causes. Among the top 10% of incomes, that share is 0.44%.

Vivienne van Leuken asked me by e-mail how this comes about.

Roughly speaking, there are three groups of explanations for this finding.

  1. It lies with the givers:
    • (a) wealth makes people greedy;
    • (b) greedy people become richer.
  2. It lies with the askers:
    • (a) charitable organizations do not speak the language that can persuade the rich,
    • (b) charitable organizations lack the right networks, and
    • (c) charitable organizations do not make the right propositions.
  3. It lies with society:
    • (a) the norm says that you should give, not that you should give more as your income rises;
    • (b) for various types of donations there is a giving standard, an amount that is normal to give. That giving standard is a specific amount, not an amount relative to income or wealth;
    • (c) the generosity norm that you should give away a share of your income has disappeared over the course of history. Moreover, with secularization, an ever smaller share of the population adheres to such norms.

Each of these explanations contains a kernel of truth, but there is as yet no good research showing to what extent these three types of explanations are responsible for the decline of generosity with income and wealth.


Uncertain Future for Giving in the Netherlands Panel Survey

By Barbara Gouwenberg and René Bekkers

At the Center for Philanthropic Studies, we have been working hard to secure three rounds of funding for the Giving in the Netherlands Study, including the Giving in the Netherlands Panel Survey, for the years 2020-2026. During the previous round of the research, the ministry of Justice and Security said that it would no longer fund the study on its own, because the research is important not only for the government but also for the philanthropic sector. The national government no longer sees itself as the sole funder of the research.

The ministry does think the research is important and is prepared to commit funding in the form of a 1:1 matching subsidy to contributions that VU Amsterdam receives from other funders. To strengthen the societal relevance of, and commitment to, the Giving in the Netherlands study, the Center has engaged in a dialogue with relevant stakeholders, including the council of foundations, the association of fundraising organizations, and several endowed foundations and fundraising charities in the Netherlands. The goal of these talks was to bring science and practice closer together. From these talks we have gained three important general insights:

  • The Giving in the Netherlands study contributes to the visibility of philanthropy in the Netherlands. This is important for the legitimacy of an autonomous and growing sector.
  • It is important to engage in a conversation with relevant stakeholders before the fieldwork for a next round starts, in order to align the research more strongly with practice.
  • After the analyses have been completed, communication with relevant stakeholders about the results should be improved. Stakeholders desire more conversations about the application of insights from the research in practice.

The Center is including these issues in the plans for the upcoming three editions. VU Amsterdam has long been engaged in conversations with branch organizations and individual foundations in the philanthropic sector in order to build a sustainable financial model for the future of the research. At the moment, however, we do not have the funds together to continue the research. That is why we did not collect data for the 2018 wave of the Giving in the Netherlands Panel Survey. As a result, we will not publish estimates of the size and composition of philanthropy in the Netherlands in spring 2019. We do hope that after this gap year we can restart the research next year, with a publication of new estimates in 2020.

Your ideas and support are very welcome at r.bekkers@vu.nl.


Giving in the Netherlands research in jeopardy

By Barbara Gouwenberg – from the newsletter of the Philanthropic Studies research group at VU Amsterdam (December 2018)

The Center for Philanthropic Studies is currently working flat out to secure funding for the Giving in the Netherlands study for the next six years (three editions). When Giving in the Netherlands 2017 was being set up in mid-2015, the Ministry of Justice and Security (J&V) indicated that the study would no longer be funded by the government alone, the main argument being that the research matters to both the government and the philanthropic sector. The government no longer sees itself as solely responsible for funding the research.

The Ministry of J&V is, however, willing to commit to Giving in the Netherlands structurally for a longer period, and will provide 1:1 matching for financial contributions that VU Amsterdam receives from the sector.

To strengthen the societal relevance of, and commitment to, the Giving in the Netherlands study, VU Amsterdam has sought dialogue with various relevant stakeholder groups in recent months. The goal: to bring science and practice closer together.

Besides specific insights, this tour has yielded three important general insights:

  • ‘Giving in the Netherlands’ contributes to the visibility of civic initiative in the Netherlands. This is important for the legitimacy of an independent and rapidly growing sector.
  • Communication with relevant stakeholder groups before the start of the research should be improved, with the aim of aligning the research better with practice and policy.
  • Communication of research results to relevant stakeholder groups should be improved. This concerns the practical applicability of the research: the translation of research results to practice.

The researchers are incorporating these points for improvement into their plan of action for the next three editions. VU Amsterdam has been in talks with the branch organizations and individual foundations for some time to arrive at a sustainable funding model for the future. At this moment, however, continuation of the research is not yet guaranteed. This means that, unfortunately, there will be no Giving in the Netherlands 2019, and thus no presentation of new research results as you have come to expect from us on the Day of Philanthropy. We do hope, however, that we can start on Giving in the Netherlands 2020 very soon!


Closing the Age of Competitive Science

In the prehistoric era of competitive science, researchers were like magicians: they earned a reputation for tricks that nobody could repeat and shared their secrets only with trusted disciples. In the new age of open science, researchers share by default, not only with peer reviewers and fellow researchers, but with the public at large. The transparency of open science reduces the temptation of private profit maximization and the collective inefficiency in information asymmetries inherent in competitive markets. In a seminar organized by the University Library at Vrije Universiteit Amsterdam on November 1, 2018, I discussed recent developments in open science and its implications for research careers and progress in knowledge discovery. The slides are posted here. The podcast is here.
