Category Archives: experiments

Revolutionizing Philanthropy Research Webinar

January 30, 11am-12pm (EST) / 5-6pm (CET) / 9-10pm (IST)

Why do people give for the benefit of others – or keep their resources to themselves? What is the core evidence on giving that holds across cultures? How does giving vary between cultures? How has the field of research on giving changed in the past decades?

10 years after the publication of “A Literature Review of Empirical Studies of Philanthropy: Eight Mechanisms that Drive Charitable Giving” in Nonprofit and Voluntary Sector Quarterly, it is time for an even more comprehensive effort to review the evidence base on giving. We envision an ambitious approach, using the most innovative tools and data science algorithms available to visualize the structure of research networks, identify theoretical foundations and provide a critical assessment of previous research.

We are inviting you to join this exciting endeavor in an open, global, cross-disciplinary collaboration. All expertise is very much welcome – from any discipline, country, or methodology. The webinar consists of four parts:

  1. Welcome: by moderator Pamala Wiepking, Lilly Family School of Philanthropy and VU Amsterdam;
  2. The strategy for collecting research evidence on giving from publications: by Ji Ma, University of Texas;
  3. Tools we plan to use for the analyses: by René Bekkers, Vrije Universiteit Amsterdam;
  4. The project structure, and opportunities to participate: by Pamala Wiepking.

The webinar is interactive. You can provide comments and feedback during each presentation. After each presentation, the moderator selects key questions for discussion.

We ask you to please register for the webinar here: https://iu.zoom.us/webinar/register/WN_faEQe2UtQAq3JldcokFU3g.

Registration is free. After you register, you will receive an automated message that includes a URL for the webinar, as well as international calling numbers. In addition, a recording of the webinar will be available soon after on the Open Science Framework Project page: https://osf.io/46e8x/

Please feel free to share with everyone who may be interested, and do let us know if you have any questions or suggestions at this stage.

We look forward to seeing you on January 30!


René Bekkers, Ji Ma, Pamala Wiepking, Arjen de Wit, and Sasha Zarins


Filed under altruism, bequests, charitable organizations, crowdfunding, economics, experiments, fundraising, helping, household giving, informal giving, open science, philanthropy, psychology, remittances, sociology, survey research, taxes, volunteering

A Conversation About Data Transparency

The integrity of the research process serves as the foundation for excellence in research on nonprofit and voluntary action. While transparency does not guarantee credibility, it guarantees you will get the credibility you deserve. Therefore we are developing criteria for transparency standards with regard to the reporting of methods and data.

We started this important conversation at the 48th ARNOVA Conference in San Diego, on Friday, November 22, 2019. In the session, we held a workshop to survey which characteristics of data and methods transparency help reviewers assess research and help researchers use past work as building blocks for future research.

This session was well attended and very interactive. After a short introduction by the editors of NVSQ, the leading journal in the field, we split up into three groups of researchers who work with the same type of data: one group for data from interviews, one for survey data, and one for administrative data such as 990s. In each group we first took 10 minutes for ourselves, formulating criteria for transparency that allow readers to assess the quality of research. All participants received colored sticky notes and wrote down one idea per note: laudable indicators on green notes, and bad signals on red notes.


Next, we put the notes on the wall and grouped them. Each cluster received a name on a yellow note. Finally, we shared the results of the small group sessions with the larger group.


Though the different types of data to some extent have their own quality indicators, there were striking parallels: in each group, the criteria concerned the match between theory and research design, ethics, sampling, measures, analysis, coding, interpretation, and the write-up of results. After the workshop, we collected the notes, and I summarized the results in a report about the workshop. In a nutshell, all groups distinguished five clusters of criteria:

  • A. Meta-criteria: transparency about the research process and the data collection in particular;
  • B. Before data collection: research design and sampling;
  • C. Characteristics of the data as presented: response, reliability, validity;
  • D. Decisions about data collected: analysis and causal inference;
  • E. Write-up: interpretation of and confidence in results presented.


Here is the full report about the workshop. Do you have suggestions about the report? Let me know!


Filed under data, experiments, methodology, open science, survey research

Global Giving: Open Grant Proposal

Here’s an unusual thing for you to read: I am posting a brief description of a grant proposal that I will submit for the ‘vici’-competition of the Netherlands Organization for Scientific Research 2019 later this year. You can download the “pre-proposal” here. It is called “Global Giving”. With the study I aim to describe and explain philanthropy in a large number of countries across the world. I invite you to review the “pre-proposal” and suggest improvements; please use the comments box below, or write to me directly.

You may have heard the story that university researchers these days spend a lot of their time writing grant proposals for funding competitions. You may also have heard the story that the chances of success in such competitions are getting smaller and smaller. These stories are all true. But the story you seldom hear is how such competitions actually work: they are a source of stress, frustration, burnout and depression, and a complete waste of the precious time of some of the smartest people in the world. Recently, Gross and Bergstrom found that “the effort researchers waste in writing proposals may be comparable to the total scientific value of the research that the funding supports”.

Remember the last time you saw the announcement of prize winners in a research grant competition? I have not heard a single voice in the choir of the many near-winners speak up: “Hey, I did not get a grant!” It is almost as if everybody wins all the time. It is not common in academia to be open about failures to win. How many vitaes have you seen recently that contain a list of failures? This is a grave distortion of reality. Fewer than one in ten applications is successful. This means that for each winning proposal there are at least nine proposals that did not get funding. I want you to know how much time is wasted by this procedure. So here I will be sharing my experiences with the upcoming ‘vici’-competition.


First let me tell you about the funny name of the competition. The name ‘vici’ derives from Julius Caesar’s famous Latin phrase ‘veni, vidi, vici’, which he allegedly used to describe a swift victory. The translation is: “I came, I saw, I conquered”. The Netherlands Organization for Scientific Research (‘Nederlandse organisatie voor Wetenschappelijk Onderzoek’, NWO) thought it fitting to use these words as the titles of its personal grant schemes. The so-called ‘talent schemes’ are very much about the personal qualities of the applicant. The scheme heralds heroes. This fascination with talent goes against the very nature of science, where the value of an idea, method or result is not measured by the personality of the author, but by its validity and reliability. That is why peer review is often double blind, and evaluators do not know who wrote the research report or proposal.


Yet in the talent scheme, the personality of the applicant is very important. The fascination with talent creates Matthew effects, first described in 1968 by Robert K. Merton. The name ‘Matthew effect’ derives from the biblical phrase “For to him who has will more be given” (Matthew 13:12). Simply stated: success breeds success. Recently, this effect has been documented in the talent scheme by Thijs Bol, Matthijs de Vaan and Arnout van de Rijt. When two applicants are equally good but one – by mere chance – receives a grant and the other does not, the ‘winner’ is ascribed talent and the ‘loser’ is not. The ‘winner’ then gets a tremendously higher chance of receiving future grants.

As a member of committees for the ‘veni’ competition I have seen how this works in practice. Applicants received scores for the quality of their proposal from expert reviewers before we interviewed them. When we had minimal differences between the expert reviewer scores of candidates – differing only in the second decimal – personal characteristics of the researchers such as their self-confidence and manner of speaking during the interview often made the difference between ‘winners’ and ‘losers’. Ultimately, such minute differences add up to dramatically higher chances to be a full professor 10 years later, as the analysis in Figure 4 of the Bol, De Vaan & Van de Rijt paper shows.

[Figure 4 from the Bol, De Vaan & Van de Rijt paper: early grant success dramatically increases the probability of becoming a full professor ten years later.]

My career is in this graph. In 2005, I won a ‘veni’-grant, the early career grant that the Figure above is about. The grant gave me a lot of freedom for research and I enjoyed it tremendously. I am pretty certain that the freedom that the grant gave me paved the way for the full professorship that I was recently awarded, thirteen years later. But back then, the size of the grant did not feel right. I felt sorry for those who did not make it. I knew I was privileged, and the research money I obtained was more than I needed. It would be much better to reduce the size of grants, so that a larger number of researchers can be funded. Yet the scheme is there, and it is a rare opportunity for researchers in the Netherlands to get funding for their own ideas.

This is my third and final application for a vici-grant. The rules for submission of proposals in this competition limit the number of attempts to three. Why am I going public with this final attempt?

The Open Science Revolution

You will have heard about open science. Most likely you will associate it with the struggle to publish research articles without paywalls, the exploitation of government-funded scientists by commercial publishers, and perhaps even with Plan S. You may also associate open science with the struggle to get researchers to publish the data and the code they used to get to their results. Perhaps you have heard about open peer review of research publications. But most likely you will not have heard about open grant review. This is because it rarely happens. I am not the first to publish my proposal: the Open Grants repository currently contains 160 grant proposals. These proposals were shared after the competitions had run. The RIO Journal published 52 grant proposals. This is only a fraction of all grant proposals being created, submitted and reviewed. The many advantages of open science are not limited to funded research; they also apply to research ideas and proposals. By publishing my grant proposal before the competition, and later the expert reviews, the recommendations of the committee, my responses, and my experiences with the review process, I am opening up the procedure of grant review as much as possible.

Stages in the NWO Talent Scheme Grant Review Procedure

Each round of this competition takes almost a year, and proceeds in eight stages:

  1. Pre-application – March 26, 2019 <– this is where we are now
  2. Non-binding advice from committee: submit full proposal, or not – Summer 2019
  3. Full proposal – end of August 2019
  4. Expert reviews – October 2019
  5. Rebuttal to criticism in expert reviews – end of October 2019
  6. Selection for interview – November 2019
  7. Interview – January or February 2020
  8. Grant, or not – March 2020

If you’re curious to learn how this application procedure works in practice, check back in a few weeks. Your comments and suggestions on the ideas above and the pre-proposal are most welcome!


Filed under altruism, charitable organizations, data, economics, empathy, experiments, fundraising, happiness, helping, household giving, incentives, methodology, open science, organ donation, philanthropy, politics, principle of care, psychology, regression analysis, regulation, sociology, statistical analysis, survey research, taxes, trends, trust, volunteering, wealth

Uncertain Future for Giving in the Netherlands Panel Survey

By Barbara Gouwenberg and René Bekkers

At the Center for Philanthropic Studies we have been working hard to secure funding for three more rounds of the Giving in the Netherlands Study, including the Giving in the Netherlands Panel Survey, for the years 2020-2026. During the previous round of the research, the Ministry of Justice and Security said that it would no longer fund the study on its own, because the research is important not only for the government but also for the philanthropic sector. The national government no longer sees itself as the sole funder of the research.

The ministry does think the research is important and is prepared to commit funding in the form of a 1:1 matching subsidy to contributions that VU Amsterdam receives from other funders. To strengthen the societal relevance of and commitment to the Giving in the Netherlands study, the Center has engaged in a dialogue with relevant stakeholders, including the council of foundations, the association of fundraising organizations, and several endowed foundations and fundraising charities in the Netherlands. The goal of these talks was to bring science and practice closer together. From these talks we have gained three important general insights:

  • The Giving in the Netherlands study contributes to the visibility of philanthropy in the Netherlands. This is important for the legitimacy of an autonomous and growing sector.
  • It is important to engage in a conversation with relevant stakeholders before the fieldwork for a next round starts, in order to align the research more strongly with practice.
  • After the analyses have been completed, communication with relevant stakeholders about the results should be improved. Stakeholders desire more conversations about the application of insights from the research in practice.

The Center is incorporating these insights into the plans for the upcoming three editions. VU Amsterdam has long been engaged in conversations with branch organizations and individual foundations in the philanthropic sector, in order to build a sustainable financial model for the future of the research. However, at the moment we do not have sufficient funds to continue the research. That is why we did not collect data for the 2018 wave of the Giving in the Netherlands Panel Survey. As a result, we will not publish estimates of the size and composition of philanthropy in the Netherlands in spring 2019. We do hope that after this gap year we can restart the research next year, with a publication of new estimates in 2020.

Your ideas and support are very welcome at r.bekkers@vu.nl.


Filed under Center for Philanthropic Studies, charitable organizations, contract research, data, experiments, foundations, fundraising, household giving, methodology, Netherlands, philanthropy, policy evaluation, statistical analysis, survey research

Closing the Age of Competitive Science

In the prehistoric era of competitive science, researchers were like magicians: they earned a reputation for tricks that nobody could repeat and shared their secrets only with trusted disciples. In the new age of open science, researchers share by default, not only with peer reviewers and fellow researchers, but with the public at large. The transparency of open science reduces the temptation of private profit maximization and the collective inefficiency in information asymmetries inherent in competitive markets. In a seminar organized by the University Library at Vrije Universiteit Amsterdam on November 1, 2018, I discussed recent developments in open science and its implications for research careers and progress in knowledge discovery. The slides are posted here. The podcast is here.


Filed under academic misconduct, data, experiments, fraud, incentives, law, Netherlands, open science, statistical analysis, survey research, VU University

Multiple comparisons in a regression framework

Gordon Feld posted a comparison of results from a repeated measures ANOVA with paired samples t-tests.

Using Stata, I wondered how these results would look in a regression framework. For those of you who want to replicate this: I used the data provided by Gordon. The do-file is here. Because WordPress does not accept .do files, you will have to rename the file from .docx to .do to make it work. The Stata commands are below, all in block quotes. The output is given in images. In the explanatory notes, commands are italicized, and variables are underlined.

A pdf of this post is here.

First let’s examine the data. You will have to insert the local path where you have stored the data.

. import delimited "ANOVA_blog_data.csv", clear

. pwcorr before_treatment after_treatment before_placebo after_placebo

These commands get us the following table of correlations:

There are some differences in mean values, from 98.8 before treatment to 105.0 after treatment. Mean values for the placebo measures are 100.8 before and 100.2 after. Across all measures, the average is 101.2035.
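The means themselves are not part of the pwcorr output; they appear in the output images of the original post. If you are following along, a quick way to see them (this command is my addition, not from the original do-file) is:

. summarize before_treatment after_treatment before_placebo after_placebo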

Let’s replicate the t-test for the treatment effect.
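The t-test output appears as an image in the original post; the command behind it is presumably the paired t-test on the two wide-format variables:

. ttest after_treatment == before_treatment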

The increase in IQ after the treatment is 6.13144 (SE = 2.134277), which is significant in this one-sample paired t-test (p = .006). Now let’s do the t-test for the placebo conditions.
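Again, presumably:

. ttest after_placebo == before_placebo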

The decrease in IQ after the placebo is -.6398003 (SE = 1.978064), which is not significant (p = .7477).

The question is whether we have taken sufficient account of the nesting of the data.

We have four measures per participant: one before the treatment, one after, one before the placebo, and one after.

In other words, we have 50 participants and 200 measures.

To get the data into the nested structure, we have to reshape them.

The data are now in a wide format: one row per participant, IQ measures in different columns.

But we want a long format: 4 rows per participant, IQ in just one column.

To get this done we first assign a number to each participant.

. gen id = _n

We now have a variable id with a unique number for each of the 50 participants.
The Stata command for reshaping data requires the data to be set up in such a way that variables measuring the same construct have the same name.
We have 4 measures of IQ, so the new variables will be called iq1, iq2, iq3 and iq4.

. rename (before_treatment after_treatment before_placebo after_placebo) (iq1 iq2 iq3 iq4)

Now we can reshape the data. The command below assigns a new variable ‘mIQ’ to identify the 4 consecutive measures of IQ.

. reshape long iq, i(id) j(mIQ)

Here’s the result.
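The result is shown as an image in the original post. To inspect the reshaped data yourself, you could list the first two participants (my addition):

. list id mIQ iq in 1/8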

We now have 200 lines of data, each of which is an observation of IQ, numbered 1 to 4 on the new variable mIQ for each participant. The variable mIQ indicates the order of the IQ measurements.

Now we identify the structure of the two experiments. The first two measures in the data are for the treatment pre- and post-measures. (A note for replicators: the variables treatment and placebo must exist before replace can fill them; the do-file presumably creates them first, e.g. with gen treatment = . and gen placebo = .)

. replace treatment = 1 if mIQ < 3
(100 real changes made)

. replace treatment = 0 if mIQ > 2
(100 real changes made)

Observations 3 and 4 are for the placebo pre- and post-measures.

. replace placebo = 0 if mIQ < 3
(100 real changes made)

. replace placebo = 1 if mIQ > 2
(100 real changes made)

. tab treatment placebo

We have 100 observations in each of the experiments.

OK, we’re ready for the regressions now. Let’s first conduct an OLS to quantify the changes within participants in the treatment and placebo conditions.
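The command for the treatment conditions is not visible as text in the post; judging from the placebo command further below, it is presumably:

. reg iq mIQ if treatment == 1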

The regression shows that the treatment increased IQ by 6.13144 points, but with an SE of 3.863229 the change is not significant (p = .116). The effect estimate is correct, but the SE is too large and hence the p-value is too high as well.

. reg iq mIQ if placebo == 1


The placebo regression shows the familiar decline of .6398003, but with an SE of 3.6291, which is too high (p = .860). The SE and p-values are incorrect because OLS does not take the nested structure of the data into account.

With the xtset command we identify the nesting of the data: measures of IQ (mIQ) are nested within participants (id).

. xtset id mIQ

First we run an ’empty model’ – no predictors are included.

. xtreg iq

Here’s the result:

Two variables in the output are worth commenting on.

  1. The constant (_cons) is the average across all measures, 101.2033. This is very close to the average we have seen before.
  2. The rho is the intraclass correlation – the average correlation of the 4 IQ measures within individuals. It is .7213, which seems right.
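For reference, xtreg computes rho from the estimated variance components as rho = sigma_u^2 / (sigma_u^2 + sigma_e^2): the share of the total variance in IQ that lies between participants rather than within them.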

Now let’s replicate the t-test results in a regression framework.

. xtreg iq mIQ if treatment == 1

In the output below we see the 100 observations in 50 groups (individuals). We obtain the same effect estimate of the treatment as before (6.13144) and the correct SE of 2.134277, but the p-value is too small (p = .004).

Let’s fix this. We put fixed effects on the participants by adding , fe at the end of the xtreg command:

. xtreg iq mIQ if treatment == 1, fe

We now get the accurate p-value (0.006):

Let’s run the same regression for the placebo conditions.

. xtreg iq mIQ if placebo == 1, fe


The placebo effect is the familiar -.6398003, SE = 1.978064, now with the accurate p-value of .748.


Filed under data, experiments, methodology, regression, regression analysis, statistical analysis, survey research

Research internship @VU Amsterdam

Social influences on prosocial behaviors and their consequences

While self-interest and prosocial behavior are often pitted against each other, it is clear that much charitable giving and volunteering for good causes is motivated by non-altruistic concerns (Bekkers & Wiepking, 2011). Helping others by giving and volunteering feels good (Dunn, Aknin & Norton, 2008). What is the contribution of such helping behaviors to happiness?

The effect of helping behavior on happiness is easily overestimated using cross-sectional data (Aknin et al., 2013). Experiments provide the best way to eradicate selection bias in causal estimates. Monozygotic twins provide a nice natural experiment to investigate unique environmental influences on prosocial behavior and its consequences for happiness, health, and trust. Any differences within twin pairs cannot be due to additive genetic effects or shared environmental effects. Previous research has investigated the environmental influences of education and religion on giving and volunteering (Bekkers, Posthuma and Van Lange, 2017), but no study has investigated the effects of helping behavior on important outcomes such as trust, health, and happiness.
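For students wondering what such an analysis could look like: a minimal sketch of the discordant-twin design in Stata, assuming hypothetical variables pairid (twin pair identifier), happiness, and helping, is a within-pair fixed effects regression:

. xtset pairid
. xtreg happiness helping, fe

Because the fixed effects absorb everything the twins in a pair share, the estimate for helping is identified only by differences within pairs.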

The Midlife in the United States (MIDUS) and the German TwinLife surveys provide rich datasets including measures of health, life satisfaction, and social integration, in addition to demographic and socioeconomic characteristics and measures of helping behavior through nonprofit organizations (giving and volunteering) and in informal social relationships (providing financial and practical assistance to friends and family).

In the absence of natural experiments, longitudinal panel data are required to ascertain the chronology in acts of giving and their correlates. The same holds for the alleged effects of volunteering on trust (Van Ingen & Bekkers, 2015) and health (De Wit, Bekkers, Karamat Ali, & Verkaik, 2015). Since the mid-1990s, a growing number of panel studies have collected data on volunteering and charitable giving and their alleged consequences, such as the German Socio-Economic Panel (GSOEP), the British Household Panel Survey (BHPS) / Understanding Society, the Swiss Household Panel (SHP), the Household, Income, Labour Dynamics in Australia survey (HILDA), the General Social Survey (GSS) in the US, and in the Netherlands the Longitudinal Internet Studies for the Social sciences (LISS) and the Giving in the Netherlands Panel Survey (GINPS).

Under my supervision, students can write a paper on the social influences of education, religion, and/or helping behavior (volunteering, giving, and informal financial and social support) on outcomes such as health, life satisfaction, and trust, using either longitudinal panel survey data or data on twins. Students who are interested in writing such a paper are invited to present their research questions and research design via e-mail to r.bekkers@vu.nl.

René Bekkers, Center for Philanthropic Studies, Faculty of Social Sciences, Vrije Universiteit Amsterdam

References

Aknin, L. B., Barrington-Leigh, C. P., Dunn, E. W., Helliwell, J. F., Burns, J., Biswas-Diener, R., … Norton, M. I. (2013). Prosocial spending and well-being: Cross-cultural evidence for a psychological universal. Journal of Personality and Social Psychology, 104(4), 635–652. https://doi.org/10.1037/a0031578

Bekkers, R., Posthuma, D. & Van Lange, P.A.M. (2017). The Pursuit of Differences in Prosociality Among Identical Twins: Religion Matters, Education Does Not. https://osf.io/ujhpm/ 

Bekkers, R., & Wiepking, P. (2011). A Literature Review of Empirical Studies of Philanthropy: Eight Mechanisms That Drive Charitable Giving. Nonprofit and Voluntary Sector Quarterly, 40(5): 924-973. https://doi.org/10.1177/0899764010380927

De Wit, A., Bekkers, R., Karamat Ali, D., & Verkaik, D. (2015). Welfare impacts of participation. Deliverable 3.3 of the project: “Impact of the Third Sector as Social Innovation” (ITSSOIN), European Commission – 7th Framework Programme, Brussels: European Commission, DG Research. http://itssoin.eu/site/wp-content/uploads/2015/09/ITSSOIN_D3_3_The-Impact-of-Participation.pdf

Dunn, E. W., Aknin, L. B., & Norton, M. I. (2008). Spending Money on Others Promotes Happiness. Science, 319(5870): 1687–1688. https://doi.org/10.1126/science.1150952

Van Ingen, E. & Bekkers, R. (2015). Trust Through Civic Engagement? Evidence From Five National Panel Studies. Political Psychology, 36 (3): 277-294. https://renebekkers.files.wordpress.com/2015/05/vaningen_bekkers_15.pdf


Filed under altruism, Center for Philanthropic Studies, data, experiments, happiness, helping, household giving, Netherlands, philanthropy, psychology, regression analysis, survey research, trust, volunteering