PhD position available: Learning to Donate

We have a unique position available for a PhD student at the department of Donor Studies at Sanquin Research and the Center for Philanthropic Studies at Vrije Universiteit (VU).

Social relations (e.g., with family members, teachers or friends) are critical to the onset and maintenance of different types of prosocial behaviour, such as charitable giving and volunteer work. Solicitations by others, observation of prosocial behaviour in others, awareness of need, and norms about prosocial behaviour all travel through social relations and may result in social ‘contagion’. Parental role-modeling, for example, and conversations about giving behaviour are strongly related to adolescents’ giving and volunteering. Evidence about whether and how social relations shape blood donation behaviour, however, is scarce. Previous research shows that one of the most effective recruitment strategies for blood donors is the donor-recruits-donor strategy. But we do not know which specific relationships produce this effect: romantic relations, parent-child relations, friends and/or colleagues? In addition, the specific mechanisms at work, such as social learning, awareness of need, and value transmission, remain elusive. Knowledge about such mechanisms increases the possibilities for more effective recruitment of new donors.

With this project we address three objectives:
•    Examine learning and inter/intragenerational transmission in blood donor behavior 
•    Develop and test educational material about donation for children and adolescents (e.g., a science exhibition)
•    Develop and test interventions for blood donor recruitment by using social information (i.e., information about donor status and donation behaviour of others)

The project duration is four years and the PhD candidate will participate in the VU Graduate School for Social Sciences as well as in the PhD network at Sanquin. Supervisors are dr. Eva-Maria Merz (VU/Sanquin) and prof. dr. René Bekkers (VU). 

Organizations
This project is a collaboration between the department of Donor Studies at Sanquin Research and the Center for Philanthropic Studies at VU, both located in Amsterdam. The department of Donor Studies is an internationally recognized center for high-quality (blood) donor research. Staff at the department consists of about 15 researchers, including 5 PhD students. The Center for Philanthropic Studies at VU conducts research and educates professionals in all areas of the Dutch philanthropic sector. Since 1995, the Center has been the leading unit for research on philanthropy in the Netherlands and in Europe. 

Profile of the candidate we are looking for

  • You are a highly motivated PhD candidate who has received excellent training in the social and behavioral sciences, evidenced by an MSc degree in Sociology, Psychology, Health Sciences, Public Health, Demography or another social science, preferably obtained in a Research Master’s program; 
  • You have outstanding organizational skills, and have the flexibility to work at two locations;
  • You have good communication skills for data collection in museums and the ability to work in a team;
  • Excellence in written and spoken English is required. The dissertation will consist of a series of journal articles in English;
  • Some experience with and interest in studying prosocial behavior in a micro-macro context are highly desirable;
  • Affinity with serious games, experiments and register data gives you an advantage.

What we offer 

  • A temporary appointment for a period of 4 years;
  • Salary and employment conditions in accordance with the Collective Labour Agreement (CAO) at Sanquin;
  • A full-time contract of 36 hours per week;
  • Reimbursement of travel expenses;
  • An 8.33% end-of-year bonus and an 8.33% holiday allowance;
  • 201 vacation hours on the basis of full-time employment;
  • Working hours determined by mutual agreement;
  • The challenge of working for two excellent organizations;
  • An interesting multidisciplinary network and scientific development with high-quality supervision.

More information
For more information, please contact prof. dr. E.-M. Merz (Sanquin/VU, +31(0)6 12143879, e.merz@sanquin.nl) or prof. dr. R. Bekkers (VU, r.bekkers@vu.nl). Does this vacancy appeal to you, and do you recognize yourself in this profile? We would like to receive your application, accompanied by a curriculum vitae and a cover letter including a detailed motivation, before June 12th, 2021.

Apply here: https://www.sanquin.org/nl/werken-bij/vacatures/vacature/detail/5906-oio-donor-studies

Please note that, after pre-selection of applications, we may follow a two-step procedure, i.e. an interview followed by a presentation based on the research proposal. The research proposal is available here: https://osf.io/h95qu/. The interviews are scheduled in the week of June 21st, 2021.


The New Frontier: Research Quality

In the ideal research publication infrastructure, the value of a piece of research does not depend on the prestige of the authors, but is determined by the validity of its theories, data and methods. Research quality should be the only criterion that determines the prestige of research. To get there, we need to reverse course and switch seats at the table: journals will bid for the best articles that researchers produce. We must flip academic publishing.

Scientific knowledge is a public good. It is not about the advancement of careers of individuals. Science is about getting it right, not about being right. The point of doing research is the development of knowledge. We should seek to discover new phenomena that throw a new light on old intuitions. We should investigate anomalies that call into question established ideas. We should strive to obtain findings that discredit the current consensus. The pursuit of black swans is difficult when their territory is off limits, policed by authorities in the field.

In the current research publication infrastructure, authors compete for space in the most prestigious journals. Journals reward novelty – coining an attractive label for common intuitions, inventing a new method, or claiming a controversial finding. Authors support their predictions by appeals to authority, defined by prestige. Authors selectively cite previous research that supports their predictions, while paying insufficient attention to previous research that indicates otherwise. Authors tout the uniqueness of their findings, ignoring and withholding relevant previous research and crucial limitations of their own research design. The fate of research products is in the hands of anonymous reviewers, who impose their arbitrary and idiosyncratic preferences in a secret exchange mediated by journal editors.

The current incentives for research publications are not aligned with the public good character of science because they reward the prestige of authors instead of the quality of their work. Research careers in academia depend on previous success, defined as a high number of publications in journals that previously published successful people, and acquisition of large grants in the past.


The Complete Publication Model

In the ideal research infrastructure, we follow a Complete Publication Model. All research gets published – nothing is withheld, every piece of research is publicly available.


1. Authors post their work on preprint servers that charge no fees for access and impose no barriers to publication. No research goes to waste; publication bias no longer exists.

2. In the flipped publication industry, the most prestigious journals publish the best research. Bots crawl the complete body of publicly available research and direct work that fits a journal’s criteria to algorithms that automatically assign quality assessments based on standardized indicators, disregarding author rank and affiliation. This, too, happens out in the open. No longer will journals burden researchers with the effort of advertising their research to editors and reviewers in secret communication. Instead, the manuscripts assessed are public, the criteria used for the assessment are public, the algorithms that produce the assessment are public, and the assessments are public as well.

3. Journals compete with each other to obtain the right to publish the best research. Researchers receive offers from journals, which authors can choose to ignore if they are satisfied already with the quality assessment of their work.

4. Once researchers accept the invitation to submit their work to a journal, the peer review procedure begins. Authors, editors and reviewers communicate openly, and reviews are published, so that everyone can check their quality.

5. Researchers improve their work in response to the reviews.

6. After each revision, the paper goes through the automatic quality assessment again, and receives a new rating with a time stamp.

7. Once the manuscript under review achieves an acceptable rating, it is published in the journal.


Incentivizing research quality

The automatic quality assessment system incentivizes journals to actually provide a service: the improvement of research through peer review. After all, the research already carries a quality certificate. The prestige of researchers is already visible from the stars assigned to their research products, even when their work is not invited for review by a journal. This system shows the added value of peer review and of the journals that organize it. Some journals may do a very good job improving the quality of research; others may not improve it at all, or even reduce it. If researchers are satisfied with the quality assessment, they can choose to ignore invitations for peer review at journals. The extent to which the quality assessment improves is the added value of the journal. Because all quality assessments are time stamped, journals can be ranked both in terms of the eventual quality of the research they publish and in terms of the improvement they achieve. These rankings provide researchers with a choice of journals. The improvement achieved through peer review will become the hallmark of a journal’s quality.

It is now up to the scientific community to formulate quality standards for publications and publish them.


Real research, or just looking things up?

“I’ve done some research….” You’re planning a city trip. Where to stay? Which places to visit? Opening times? How to reach the places you want to go? The answers to these questions are not the result of real research. You just looked things up.

“Research shows that…” You’re wondering what the latest insights are about a topic that you’re interested in. What does the current research tell you? The answer to this question is not the result of your research. You just looked things up.

“The rules are…” You’re planning a study and need to go through an ethics review. What rules and guidelines do you need to follow? What forms do you need to complete? The answers to these questions are not the result of your research. You just looked things up.

“The research models show….” You’re considering prolonging the lockdown and need evidence for your decision. How will ending the lockdown reduce the number of COVID-19 infections? How will the number of ICU patients grow when the lockdown ends? You ask a mathematician to produce a set of predictions and compare the results of the models with preferred outcomes. That comparison is not research. You just looked things up.

Real research is the collection and analysis of data, to evaluate a claim.


“Research into the case showed…” You’re responsible for the punishment of offenders, criminal prosecution, investigating suggestions of misconduct, or you’re a journalist looking for news. What evidence is there that somebody did something terrible? You compare reports by victims to declarations of alleged or possible offenders. That’s real research – but it does not produce new knowledge; you produce an assessment of legality or newsworthiness.

“And the winner is…” You’re responsible for the payment of prizes, recognition of awards, permission to enter a country, or the allocation of rights. You need to know whether the person in front of you is truly the lottery ticket holder, prize winner, national citizen or a person entitled to drive a car. You ascertain the identity of the person or the authenticity of documents through comparison of some biometric or other physical quality of the person or document with entries from a database. That’s real research – but it does not produce new knowledge; you produce an assessment of authenticity.

Real research is the collection and analysis of data, to evaluate a claim to truth, and generate new knowledge.


Studies of how people plan city trips, meta-analyses of previous research, comparisons of ethics review practices across countries or research disciplines, mapping the social contacts and networks through which COVID-19 infections spread, or evaluations of predictive models for the prevalence of COVID-19 – those are examples of real research.

Studies of how judges make judgments, how people infer traits of perpetrators from reports of misconduct, how journalists produce news and how people consume it – those are examples of real research. Studies of why people participate in lotteries, what design of award schemes is optimal to generate interpersonal jealousy and competition, how immigration officers try to detect false documents, or what types of checks and authenticity marks work best to reduce fraud are also examples of real research.

This very text is not real research – it is a set of descriptions of analytical categories. It could become research on research if I presented a table with counts of documents – say, papers published in academic journals – that present themselves as research in each of the categories distinguished above, and then analyzed the properties of these documents: do they deduce claims from theories, contain mathematical formulas, conduct statistical tests, state limitations on generality?

I hope the categories are useful the next time you hear the word “research” and wonder: “Is it real research, or just looking things up?”


Finding From Social Research Could Rewrite Known Laws of Society

It’s not clear what happened — yet. But the best explanation, sociologists say, involves forms of matter and energy not currently known to social science.

The Social S-2 ring, at the International Society Laboratory, operates at regular temperatures and studies the wobble of people as they live their lives.

By René Bekkers

April 13, 2021 Updated 6:20 p.m. CET

Evidence is mounting that a tiny social particle seems to be disobeying the known laws of society, scientists announced on Wednesday, a finding that would open a vast and tantalizing hole in our understanding of the community.

The result, sociologists say, suggests that there are forms of matter and energy vital to the nature and evolution of society that are not yet known to science. The new work, they said, could eventually lead to breakthroughs more dramatic than the heralded discovery in 1896 of the person, the most fundamental unit of society that imbues other units with sociality.

“This is our Mars rover landing moment,” said Jim Gallow, a sociologist at the International Society Laboratory, or ISL, where the research is being conducted. He has been working on the project for most of his career.

Dr. Gallow is part of an international team of 200 sociologists from 35 institutions and seven countries who have been operating an experiment in persons, units that are akin to families but far heavier. When persons were let loose through an intense social field, they did not behave quite as expected, according to precise theoretical predictions.

“This quantity we measure reflects the interactions of the person with everything else in society,” said Renee Hatemi, a psychologist at the University of Yentova. “This is strong evidence that the person is sensitive to something that is not in our best theory.”

The results agreed with similar experiments at the Crookhaven National Laboratory in 1951 that have teased sociologists ever since.

“After 70 years of people wondering about this mystery from Crookhaven, the headline of any news here is that we confirmed the Crookhaven experimental results,” Dr. Gallow said at a news conference on Tuesday.

He pointed to a graph displaying white space between the theoretical prediction for the persons’ behavior and the new findings from ISL. “We can say with fairly high confidence, there must be something contributing to this white space,” he said. “What monsters might be lurking there?”

The researchers announced their first findings from the experiment, called Social S-2, in a virtual seminar and news conference on Monday. The results are also being published in a set of papers submitted to the Social Review Letters, Psychological Review A, Group Review D and Society Observation Review.

“Today is an extraordinary day, long awaited not only by us but by the whole international social science community,” Verona Amicale, a spokeswoman for the collaboration and a psychologist at the Italian National Institute for Nuclear Society, said in a statement issued by ISL.

The measurements have about one chance in 40,000 of being a fluke, the scientists reported, a statistical status called “4.2 sigma.” That is still short of the gold standard — “5 sigma,” or about three persons in 10 million — needed to claim an official discovery by social science standards. Promising signals disappear all the time in science, but more data are on the way that could put their study over the top. Wednesday’s results represent only 6 percent of the total data the person experiment is expected to garner in the coming years.
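
For readers who want to check the arithmetic: under a standard normal distribution the quoted probabilities correspond roughly to the following tail areas (a back-of-the-envelope check; the 4.2 sigma figure matches the two-sided area, the 5 sigma figure the one-sided area):

  P(|Z| > 4.2) = 2\,\Phi(-4.2) \approx 2.7 \times 10^{-5} \approx 1/37{,}000 \quad (\text{about one chance in 40,000})
  P(Z > 5) = \Phi(-5) \approx 2.9 \times 10^{-7} \approx 3/10{,}000{,}000 \quad (\text{about three in 10 million})

where \Phi is the standard normal cumulative distribution function.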

The additional data could provide a major boost to scientists eager to build the next generation of expensive social person accelerators.

For decades, sociologists and psychologists have relied on and have been bound by a theory called the Standard Model, a suite of equations that enumerates the fundamental persons in the universe (17 by last count) and the ways they interact. It successfully explains the results of high-energy society experiments in places like Mexico City and Beijing. But the model leaves deep questions about society unanswered, and most social scientists believe that a rich trove of new sociology waits to be found, if only they could see deeper and further.

It might also lead to explanations for the kinds of cosmic and human mysteries that occupy the restless nights of a lonely species locked down by an implacable virus. What exactly is dark personality, the unseen stuff that psychologists say makes up one-quarter of the structure of society? Indeed, why is there consistency in the behavior of persons at all?

On Twitter psychologists responded with a mixture of enthusiasm and caution. “Of course the possibility exists that it’s new sociology,” Maxime Lewantin, a psychologist at the Berlin Institute for Advanced Study, said. “But I wouldn’t bet on it.”

Chung Jiao, the director-general of the Global Association for the Study of Persons, sent her congratulations and called the results “intriguing.” Marcello Mantesino, head of theoretical personology at ISL, who was not part of the experiment, said: “I’m very excited. I feel like this tiny wobble may shake the foundations of what we thought we knew.”

Inspired by https://www.nytimes.com/2021/04/07/science/particle-physics-muon-fermilab-brookhaven.html


How to organize your data and code

A key element of open science is to provide access to the code and data that produces the results you present.

Guidelines for the organization of your data and code are based on two general principles: simplicity and explanation. Make verification of your results as simple as possible, and provide clear documentation so that people who are not familiar with your data or research can execute the analyses and understand the results.

Simplify the file structure. Organize the files you provide in the simplest possible structure. Ideally, a single file of code produces all results you report. Conduct all analyses in the same software package when possible. Sometimes, however, you may need different programs and multiple files to obtain the results. If you need multiple files, provide a readme.txt file that lists which files provide which results.

Deidentify the data. In the data file, eliminate the variables that contain information that can identify specific individuals if these persons have not given explicit consent to be identified. Do not post data files that include IP addresses of participants, names, email addresses, residential home addresses, zip codes, telephone numbers, social security numbers, or any other information that may identify specific persons. Do not deidentify the data manually, but create a code file for all preprocessing of data to make them reproducible.
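
A minimal sketch of such a preprocessing file in Stata, with hypothetical file paths and variable names that you would replace with your own:

  * deidentify.do: create a shareable version of the raw data
  use "Data\Raw\SurveyRaw.dta", clear
  * drop direct identifiers (hypothetical variable names)
  drop name email ip_address phone street zipcode
  * coarsen indirect identifiers, e.g. exact age into age groups
  recode age (18/29=1) (30/44=2) (45/64=3) (65/max=4), gen(age_group)
  drop age
  save "Data\Pooled\SurveyDeidentified.dta", replace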

Organize the code. In the code, create at least three sections; a minimal skeleton do-file combining them follows the list below:

  1. Preliminaries. The first section includes commands that install packages that are required for the analyses but do not come with the software.
    • Include a line that users can adapt, identifying the path where data and results are stored. Use the same path for data and code. For example:
      • cd "C:\Users\rbs530\surfdrive\Shared\VolHealthMega"
    • The first section also includes commands that specify the exact names of the data files required for the analysis. For example:
      • use "Data\Pooled\VolHealthMega.dta", clear
  2. Data preparation. The second section includes commands that create and recode variables. This section also assigns labels to variables and their values, so that their meaning is clear.
    • For example:
      • label variable llosthlt "Lost health from t-2 to t-1"
  3. Results. The third section includes the commands that produce the results reported in the paper. Add comments to identify which commands produce which results.
    • For example:
      • *This produces Table 1:
      • summ *
  4. Appendix results. An optional fourth section contains the commands that produce the results reported in the Appendices.
    • For example:
      • *Appendix Table S12a:
      • xtreg phealth Dvolkeep Dvoljoin Dvolquit year l.phealth l2.phealth l3.phealth l4.phealth, fe
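
Putting these sections together, a minimal skeleton do-file might look as follows. The paths mirror the examples above; the package, the panel identifier and the variable construction are hypothetical placeholders, not part of the original examples:

  * 1. Preliminaries
  * ssc install estout                                   // install packages that do not come with Stata
  cd "C:\Users\rbs530\surfdrive\Shared\VolHealthMega"    // adapt this path to your own machine
  use "Data\Pooled\VolHealthMega.dta", clear

  * 2. Data preparation
  xtset resp_id year                                     // declare the panel structure (hypothetical id)
  gen llosthlt = (l.phealth < l2.phealth) if !missing(l.phealth, l2.phealth)
  label variable llosthlt "Lost health from t-2 to t-1"

  * 3. Results
  * This produces Table 1:
  summ *

  * 4. Appendix results
  * Appendix Table S12a:
  xtreg phealth Dvolkeep Dvoljoin Dvolquit year l.phealth l2.phealth l3.phealth l4.phealth, fe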

Explain ad hoc decisions. Document and explain your decisions. Throughout the code, add comments that explain the reasoning behind the choices you make that you have not pre-registered. E.g. “collapsing across conditions 1 and 2 because they are quantitatively similar and not significantly different”.

Double check before submission. When you are done, ask your supervisor to execute the code. Does the code produce the results reported in the paper? Can your supervisor understand your decisions? If so, you are ready.

Locate your materials. Identify the URL that contains the data and code that produce the results you report. If you write an empirical journal article, add the URL to the abstract as well as in the data section. Identify the software package and version that you used to produce the results.

Set up a repository. Create a repository, preferably on the Open Science Framework, https://osf.io/ where you post all materials reviewers and readers need to verify and replicate your paper: the deidentified data file, the code, stimulus materials, and online appendix tables and figures. Here is a template you can use for this purpose: https://osf.io/3g7e5/. Help the reader navigate through all the materials by including a brief description of each part.

Thanks to Rense Corten for helpful suggestions.


10 Things You Need to Know About Open Science

1. What is Open Science?

Open science is science as it should be: as open as possible. The current practice of open science is that scientists provide open access to publications, the data they analyzed, the code that produces the results, and the materials. Open science is no magic, no secrets, no hidden cards, no tricks; what you see is what you get. Fully open science is sharing everything, including research ideas, grant applications, reviews, funding decisions, failed experiments and null results.

2. Why should I preregister my research?

When you preregister your research, you put your ideas, expectations, hypotheses and your plans for data collection and analyses in an archive (e.g., on As Predicted, https://aspredicted.org/ or on the Open Science Framework, https://help.osf.io/hc/en-us/articles/360019738834-Create-a-Preregistration) before you have executed the study. A preregistration allows you to say: “See, I told you so!” afterwards. Preregister your research if you have theories and methods you want to test, and if you want to make testable predictions about the results.

3. Won’t I get scooped when I post a preliminary draft of my ideas?
No, when you put your name and a date in the file, it will be obvious that you were the first person who came up with the ideas.

4. Where can I post the data I collect and the code for the analysis?

On Dataverse, https://dataverse.org/, the Open Science Framework, https://osf.io/, and on Zenodo, https://zenodo.org/. As a researcher you can use these platforms for free, and they are not owned by commercial enterprises. You keep control over the content of the repositories you create, and you can give them a DOI so others can cite your work.

5. Won’t I look like a fool when others find mistakes in my code?
No, on the contrary: you will look like a good scientist when you correct your mistakes. https://retractionwatch.com/2021/03/08/authors-retract-nature-majorana-paper-apologize-for-insufficient-scientific-rigour/
You will look like a fool, however, if you report results that nobody can reproduce and stubbornly persist in claiming support for your result.

6. Does open science mean that I have to publish all of my data?

No, please do not publish all of your data! Your data probably contain details about individual persons that could identify them. Make sure you pseudonymize these persons and deidentify the data by removing such details before you share them.

7. Why should I post a preprint of my work in progress?

If you post a preprint of your work in progress, alert others to it, and invite them to review it, you will get comments and suggestions that will improve your work. You will also make your work citable before the paper is published. Read more about preprints here: https://plos.org/open-science/preprints/

8. Where should I submit my research paper?

Submit your research paper to a journal that doesn’t exploit authors and reviewers. Commercial publishers do not care about the quality of the research, only about making a profit by selling it back to you. A short history is here https://www.theguardian.com/science/2017/jun/27/profitable-business-scientific-publishing-bad-for-science.

9. How can I get a publication before I have done my study?
Submit it as a registered report to a journal that offers this format. Learn about registered reports here https://www.cos.io/initiatives/registered-reports and here https://doi.org/10.1027/1864-9335/a000192

10. Can I get sued when I post the published version of a paper on my website?

It has never happened to me, and I have posted accepted and published versions on my website https://renebekkers.wordpress.com/publications/ for more than 20 years now.


Inequality and philanthropy

In the public debate, the rise in inequality is linked to criticism of private philanthropy: philanthropy is criticized not only as a strategy to reduce feelings of guilt, but also as a way to evade taxes, buy goodwill, and favor causes that benefit the rich rather than society as a whole.

Rutger Bregman famously called for increased taxation of the rich, instead of praise for their philanthropy. The two are not mutually exclusive, as the graph below shows. In fact, there is no relationship at all between the volume of philanthropy in a country and the tax burden in that country.

The data on government expenditure are from the IMF. The data on philanthropy for the 24 countries in this graph come from a report by the Charities Aid Foundation in the UK (CAF, 2016). These data are far from ideal, because they were gathered in different years (2010-2014) and with different methods. So, for what it is worth: the correlation is zero (r = .00). The United States is a clear outlier, but even when we exclude it, the correlation remains zero (r = -.01).
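
As an illustration, this kind of check takes only a few lines in Stata; the dataset and variable names here are hypothetical stand-ins for the country-level philanthropy and tax measures:

  * country-level data: philanthropy and taxes as a share of GDP (hypothetical variable names)
  use "Data\CountryLevel\PhilanthropyTax.dta", clear
  corr phil_gdp tax_gdp                                  // all 24 countries
  corr phil_gdp tax_gdp if country != "United States"    // excluding the outlier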

The more reliable evidence we already have is on the proportion that gives to charity in different countries from the Gallup World Poll. This is currently the only source of data on engagement in philanthropy in a sizeable number of countries around the globe, even though the poll includes only one question to measure charitable giving. The surprising finding is that countries in which citizens pay higher taxes have a higher proportion of the population engaging in charitable giving. The correlation for 141 countries is r= .28. Within Europe, the association is even stronger, r= .49.

As you can see, and as others have noted (Salamon, Sokolowski & Haddock, 2017), the facts do not support the political ideology that keeping the state small makes people care for each other. On the contrary: countries whose citizens contribute less to public welfare through taxes are also less involved in charity. To me, this positive relationship does not imply causation. I don’t see how paying taxes makes people more charitable, or vice versa. What it means is that the citizens of some countries are more prepared to give to charity and are also willing to pay more taxes.

Some further evidence on the relation between redistribution effort and philanthropy comes from an analysis of data from the Gallup World Poll and the OECD, collected for a grant proposal to conduct global comparative research on philanthropy.

The correlation between income inequality after taxes and the proportion of the population giving to charity is weakly negative, r = -.10 across 137 countries. In contrast, income inequality before taxes shows a weakly positive relation with the proportion of the population that gives to charity, r = .06.

This implies that in countries where the income distribution becomes more equal as a result of income taxation (‘redistribution effort’), a higher proportion of the population gives to charity. However, the correlation between redistribution effort and giving is not very strong (r = .20). The figure below visualizes the association.
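
To make the measure concrete: one common way to operationalize redistribution effort is the difference between income inequality before and after taxes. A Stata sketch with hypothetical variable names for the Gini coefficients and the Gallup giving measure (not necessarily the exact operationalization used in the analysis above):

  * redistribution effort = inequality before taxes minus inequality after taxes
  gen redistribution = gini_market - gini_net
  corr redistribution prop_giving        // the correlation reported above (r = .20)
  corr gini_net prop_giving              // inequality after taxes (r = -.10 above)
  corr gini_market prop_giving           // inequality before taxes (r = .06 above)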

The chart implies that countries in which the population is more engaged with charitable causes are also more effective at reducing income inequality. My interpretation of that association is a political one. A stronger reduction of income inequality is the result of the effort and effectiveness of progressive income taxation, a political choice ultimately supported by the preferences of voters. The same prosocial and inequality-averse preferences lead people to engage in charitable giving. Restoring justice and fairness in an unfair and mean world is an important motivation for people to give. Countries in which a higher proportion of the electorate votes for the reduction of income inequality are more charitable.

Strictly speaking, the chart does not tell you whether income inequality causes giving to be lower. However, there is enough evidence supporting a negative causal influence of income inequality on generalized trust (Leigh, 2006; Gustavsson & Jordahl, 2008; Barone & Mocetti, 2016; Stephany, 2017; Hastings, 2018; Yang & Xin, 2020). Countries such as the UK and US, in which political laissez-faire has allowed income inequality to rise, have become markedly less trusting over time. Trust is an important precondition for giving – more about that in another post.

This post builds on Values of Philanthropy, a keynote address I gave at the ISTR Conference in Amsterdam on July 12, 2018. Thanks to Beth Breeze and Nicholas Duquette for conversations about these issues.

References

Barone, G., & Mocetti, S. (2016). Inequality and trust: new evidence from panel data. Economic Inquiry, 54(2), 794-809. https://doi.org/10.1111/ecin.12309

Bekkers, R. (2018). Values of Philanthropy. Keynote Address, ISTR Conference, July 12, 2018. Amsterdam: Vrije Universiteit Amsterdam.

CAF (2016). Gross Domestic Philanthropy: An International Analysis of GDP, tax and giving. West Malling: Charities Aid Foundation. https://www.cafonline.org/docs/default-source/about-us-policy-and-campaigns/gross-domestic-philanthropy-feb-2016.pdf

Gustavsson, M., & Jordahl, H. (2008). Inequality and trust in Sweden: Some inequalities are more harmful than others. Journal of Public Economics, 92(1-2), 348-365. https://doi.org/10.1016/j.jpubeco.2007.06.010

Hastings, O.P. (2018). Less Equal, Less Trusting? Reexamining Longitudinal and Cross-sectional Effects of Income Inequality on Trust in U.S. States, 1973–2012. Social Science Research, 74: 77-95. https://doi.org/10.1016/j.ssresearch.2018.04.005

Leigh, A. (2006). Trust, inequality and ethnic heterogeneity. Economic Record, 82(258), 268-280. https://doi.org/10.1111/j.1475-4932.2006.00339.x

OECD (2018). Tax Revenue, % of GDP, https://data.oecd.org/chart/5do5

Salamon, L.M., Sokolowski, S.W., & Haddock, M.A. (2017). Explaining civil society development. A social origins approach. Baltimore, MD: Johns Hopkins University Press. https://www.amazon.com/Explaining-Civil-Society-Development-Approach/dp/1421422980

Stephany, F. (2017). Who are your Joneses? Socio-specific income inequality and trust. Social Indicators Research, 134(3), 877-898. https://doi.org/10.1007/s11205-016-1460-9

Yang, Z., & Xin, Z. (2020). Income inequality and interpersonal trust in China. Asian Journal of Social Psychology, 23(3), 253-263. https://doi.org/10.1111/ajsp.12399


Altruism at a cost: how closing donation centers reduces blood donor loyalty

To what extent is blood donation motivated by altruism? Donating blood is ‘giving life’ and it is often seen as an act of sacrifice. In a new paper forthcoming in the journal Health & Place, co-authored with Tjeerd Piersma, Eva-Maria Merz and Wim de Kort, we checked whether blood donors continue to give blood when the sacrifice becomes more costly.

To see whether they do so, we tracked blood donors in the Netherlands between 2010 and 2018. The number of donation centers operated by Sanquin, the national blood collection agency in the Netherlands, decreased by 46%, from 252 in 2010 to 136 in 2018.

We found that donors who were used to giving blood at a location that closed were much more likely to stop giving blood. The difference was large: donors for whom the nearest donation center closed were 50% more likely to have lapsed in the year after the closure than donors for whom the nearest center remained open (15.3% vs. 10.2%).

The percentage of lapsed donors after closing the nearest donation center steadily increased with each extra kilometer distance to the new nearest donation center. Of the donors whose nearest donation center closed, 11.6% lapsed when the distance increased by less than one kilometer while 32.8% lapsed when the distance increased by more than nine kilometers.

Because O-negative blood can be used for transfusions to recipients of all blood types, O-negative donors are called ‘universal donors’. We expected universal donors to have a lower lapsing risk as costs increase. This would be evidence of altruism.

At first we thought that we had found such evidence. Universal donors are less likely than other donors to stop donating blood after the nearest donation center is closed. We also found that universal donors were more likely to continue giving blood as the travel distance increased, up to 5 kilometers. At longer distances, the pattern was less clear.

However, when we included the number of requests for donations as a covariate, the difference largely disappeared. This means that O-negative donors are more likely to continue to give blood because they receive more requests to donate from the blood bank. The sensitivity to these requests was very similar for universal and other donors.

We conducted mediation tests to establish that closing a donation center reduced donor loyalty because the travel distance to the nearest location increased, and to establish that universal donors were more loyal because they received more requests to donate blood.
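
The logic of such a mediation test can be sketched in a few lines of Stata. This is not our exact specification (the full log file is linked below), and the variable names are hypothetical:

  * Step 1: total effect of closure on lapsing
  logit lapsed closed i.year, or
  * Step 2: closure predicts the mediator, the increase in travel distance
  regress distance_change closed i.year
  * Step 3: effect of closure after adding the mediator;
  * a substantially smaller closure coefficient indicates mediation
  logit lapsed closed distance_change i.year, or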

One of the reviewers asked for a matching analysis. This was a good idea. It also provided a nice learning experience. I had never done such an analysis before. The results were pretty close to the regression results, by the way: no difference between universal and other donors matched on the number of requests. 
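
For readers who, like me, are new to matching: Stata’s built-in teffects command makes such an analysis straightforward. A sketch with hypothetical variable names, estimating the effect of being a universal donor on lapsing while matching donors on the number of requests and background characteristics:

  * propensity score matching, average treatment effect on the treated
  teffects psmatch (lapsed) (universal n_requests age female n_prior_donations), atet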

In sum, we found evidence that:

  1. Blood donors are strongly sensitive to the costs of donating: closing a donation location substantially increased the risk of lapsing;
  2. Blood donors are less likely to lapse when they receive more requests;
  3. ‘Universal’ O-negative donors were less likely to lapse because they received more requests to donate blood;
  4. Universal donors are as sensitive to requests and to the costs of donating as other donors.

The analyses are based on register data from Sanquin on all blood donors (n = 259,172) and changes in the geographical locations of blood donation centers in the Netherlands over the past decade. Because these data contain personal information, we cannot share them for legal reasons. We do provide the complete Stata log file of all analyses at https://osf.io/58qzk/. The paper is available at https://osf.io/preprints/socarxiv/na3ys/ and is part of the PhD dissertation by Tjeerd Piersma, available here.

We started with this project thinking that closures of donation centers may be natural experiments. But we soon found out that was not the case. Which donation centers were closed was decided by Sanquin after a cost-benefit calculation. Centers serving fewer donors were more likely to be closed.

As a result, the centers that were closed were located in less densely populated areas. It was a fortunate coincidence for the blood collection agency that donors in less densely populated areas were more loyal. They were willing to spend more time travelling to the next nearest donation center.

For our research, however, the lower lapsing risk for donors in areas where donation centers were more likely to be closed created a correlation between closure and our dependent variable, donor loyalty. End of story for the natural experiment.

We spent some time searching for instrumental variables, such as rent prices for offices where donation locations are housed. Many office locations in the Netherlands are vacant and available at reduced rent.

The number of empty offices in a municipality could reduce the costs of keeping a donation center open. However, we found that the percentage of empty offices in a municipality was not related to closure of donation centers. If you have thoughts on other potential IVs, let us know!
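
For completeness, this is roughly what such an instrumental-variable check looks like in Stata. The key diagnostic is the first stage: whether the instrument predicts closure at all, which in our case it did not. Variable names are hypothetical:

  * first stage: does office vacancy in the municipality predict closure?
  regress closed pct_empty_offices i.year
  * 2SLS (linear probability model), instrumenting closure with office vacancy
  ivregress 2sls lapsed (closed = pct_empty_offices), first
  estat firststage    // weak-instrument diagnostics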


Nonprofit Trends Survey – Netherlands

Has your organization used the latest technology in recent months to keep going during the corona crisis? Or should your organization learn to make better use of technology? How confident are you that your organization will emerge stronger from the pandemic? What concerns do you have for the coming months?

We would like to hear your opinions and experiences on these matters in the international Nonprofit Trends Survey. The survey is conducted by the Urban Institute in Washington DC among charitable organizations and other nonprofits and takes place in six countries: the United States, the United Kingdom, Canada, Germany, France, and the Netherlands. The Center for Philanthropic Studies at Vrije Universiteit Amsterdam has translated the questionnaire for the study in the Netherlands. With this new study we hope to get a picture of the use of data and technology in the Netherlands, so that we can compare it with other countries.

Respondents have a chance to win Amazon gift cards. Click here to take part in the survey: https://urbaninstitute.fra1.qualtrics.com/jfe/form/SV_86S09YMGgaT2EpD?Q_Language=NL

Many thanks in advance!


A Data Transparency Policy for Results Based on Experiments


Transparency is a key condition for robust and reliable knowledge, and for the advancement of scholarship over time. Since January 1, 2020, I have been the Area Editor for Experiments submitted to Nonprofit & Voluntary Sector Quarterly (NVSQ), the leading journal for academic research in the interdisciplinary field of nonprofit research. To improve the transparency of research published in NVSQ, the journal is introducing a policy requiring authors of manuscripts reporting on data from experiments to provide, upon submission, access to the data and the code that produced the results reported. This will be a condition for the manuscript to proceed through the blind peer review process.

The policy will be implemented as a pilot for papers reporting results of experiments only. For manuscripts reporting on other types of data, the submission guidelines will not be changed at this time.

 

Rationale

This policy is a step toward strengthening research in our field through greater transparency about research design, data collection and analysis. Greater transparency of data and analytic procedures will produce fairer, more constructive reviews and, ultimately, even higher quality articles published in NVSQ. Reviewers can only evaluate the methodologies and findings fully when authors describe the choices they made and provide the materials used in their study.

Sample composition and research design features can affect the results of experiments, as can sheer coincidence. To assist reviewers and readers in interpreting the research, it is important that authors describe the relevant features of the research design, data collection, and analysis. Such details are also crucial to facilitate replication. NVSQ receives very few replications, and thus rarely publishes them, although we are open to doing so. Greater transparency will facilitate the ability to reinforce, or question, research results through replication (Peters, 1973; Smith, 1994; Helmig, Spraul & Tremp, 2012).

Greater transparency is also good for authors. Articles with open data appear to have a citation advantage: they are cited more frequently in subsequent research (Colavizza et al., 2020; Drachen et al., 2016). The evidence is not experimental: the higher citation rank of articles providing access to data may be a result of higher research quality. Regardless of whether the policy improves the quality of new research or attracts higher quality existing research – if higher quality research is the result, then that is exactly what we want.

Previously, the official policy of our publisher, SAGE, was that authors were ‘encouraged’ to make the data available. It is likely though that authors were not aware of this policy because it was not mentioned on the journal website. In any case, this voluntary policy clearly did not stimulate the provision of data because data are available for only a small fraction of papers in the journal. Evidence indicates that a data sharing policy alone is ineffective without enforcement (Stodden, Seiler, & Ma, 2018; Christensen et al., 2019). Even when authors include a phrase in their article such as ‘data are available upon request,’ research shows that this does not mean that authors comply with such requests (Wicherts et al., 2006; Krawczyk & Reuben, 2012). Therefore, we are making the provision of data a requirement for the assignment of reviewers.

 

Data Transparency Guidance for Manuscripts using Experiments

Authors submitting manuscripts to NVSQ in which they are reporting on results from experiments are kindly requested to provide a detailed description of the target sample and the way in which the participants were invited, informed, instructed, paid, and debriefed. Also, authors are requested to describe all decisions made and questions answered by the participants and to provide access to the stimulus materials and questionnaires. Most importantly, authors are requested to make the data and code that produced the reported findings available to the editors and reviewers. Please make sure you do so anonymously, i.e., without identifying yourself as an author of the manuscript.

When you submit the data, please ensure that you are complying with the requirements of your institution’s Institutional Review Board or Ethics Review Committee, the privacy laws in your country such as the GDPR, and other regulations that may apply. Remove personal information from the data you provide (Ursin et al., 2019). For example, avoid logging IP and email addresses in online experiments and any other personal information of participants that may identify their identities.

The journal will not host a separate archive. Instead, deposit the data at a platform of your choice, such as Dataverse, Github, Zenodo, or the Open Science Framework. We accept data in Excel (.xls, .csv), SPSS (.sav, .por) with syntax (.sps), data in Stata (.dta) with a do-file, and projects in R.

When authors have successfully submitted the data and code along with the paper, the Area Editor will verify whether the data and code submitted actually produce the results reported. If (and only if) this is the case, then the submission will be sent out to reviewers. This means that reviewers will not have to verify the computational reproducibility of the results. They will be able to check the integrity of the data and the robustness of the results reported.

As we introduce the data availability policy, we will closely monitor the changes in the number and quality of submissions, and their scholarly impact, anticipating both collective and private benefits (Popkin, 2019). We have scored the data transparency of 20 experiments submitted in the first six months of 2020, using a checklist of 49 criteria. In 4 of these submissions, some elements of the research were preregistered. The average transparency score was 38 percent. We anticipate that the new policy will improve transparency scores.

The policy takes effect for new submissions on July 1, 2020.

 

Background: Development of the Policy

The NVSQ Editorial Team has been working on policies for enhanced data and analytic transparency for several years, moving forward in a consultative manner. We established a Working Group on Data Management and Access which provided valuable guidance in its 2018 report, including a preliminary set of transparency guidelines for research based on data from experiments and surveys, interviews and ethnography, and archival sources and social media. A wider discussion of data transparency criteria was held at the 2019 ARNOVA conference in San Diego, as reported here. Participants working with survey and experimental data frequently mentioned access to the data and code as a desirable practice for research to be published in NVSQ.

Eventually, separate sets of guidelines for each type of data will be created, recognizing that commonly accepted standards vary between communities of researchers (Malicki et al., 2019; Beugelsdijk, Van Witteloostuijn, & Meyer, 2020). Regardless of which criteria will be used, reviewers can only evaluate these criteria when authors describe the choices they made and provide the materials used in their study.

 

References

Beugelsdijk, S., Van Witteloostuijn, A. & Meyer, K.E. (2020). A new approach to data access and research transparency (DART). Journal of International Business Studies, https://link.springer.com/content/pdf/10.1057/s41267-020-00323-z.pdf

Christensen, G., Dafoe, A., Miguel, E., Moore, D.A., & Rose, A.K. (2019). A study of the impact of data sharing on article citations using journal policies as a natural experiment. PLoS ONE 14(12): e0225883. https://doi.org/10.1371/journal.pone.0225883

Colavizza, G., Hrynaszkiewicz, I., Staden, I., Whitaker, K., & McGillivray, B. (2020). The citation advantage of linking publications to research data. PLoS ONE 15(4): e0230416, https://doi.org/10.1371/journal.pone.0230416

Drachen, T.M., Ellegaard, O., Larsen, A.V., & Dorch, S.B.F. (2016). Sharing Data Increases Citations. Liber Quarterly, 26 (2): 67–82. https://doi.org/10.18352/lq.10149

Helmig, B., Spraul, K. & Tremp, K. (2012). Replication Studies in Nonprofit Research: A Generalization and Extension of Findings Regarding the Media Publicity of Nonprofit Organizations. Nonprofit and Voluntary Sector Quarterly, 41(3): 360–385. https://doi.org/10.1177%2F0899764011404081

Krawczyk, M. & Reuben, E. (2012). (Un)Available upon Request: Field Experiment on Researchers’ Willingness to Share Supplementary Materials. Accountability in Research, 19:3, 175-186, https://doi.org/10.1080/08989621.2012.678688

Malički, M., Aalbersberg, IJ.J., Bouter, L., & Ter Riet, G. (2019). Journals’ instructions to authors: A cross-sectional study across scientific disciplines. PLoS ONE, 14(9): e0222157. https://doi.org/10.1371/journal.pone.0222157

Peters, C. (1973). Research in the Field of Volunteers in Courts and Corrections: What Exists and What Is Needed. Journal of Voluntary Action Research, 2 (3): 121-134. https://doi.org/10.1177%2F089976407300200301

Popkin, G. (2019). Data sharing and how it can benefit your scientific career. Nature, 569: 445-447. https://www.nature.com/articles/d41586-019-01506-x

Smith, D.H. (1994). Determinants of Voluntary Association Participation and Volunteering: A Literature Review. Nonprofit and Voluntary Sector Quarterly, 23 (3): 243-263. https://doi.org/10.1177%2F089976409402300305

Stodden, V., Seiler, J. & Ma, Z. (2018). An empirical analysis of journal policy effectiveness for computational reproducibility. PNAS, 115(11): 2584-2589. https://doi.org/10.1073/pnas.1708290115

Ursin, G. et al., (2019), Sharing data safely while preserving privacy. The Lancet, 394: 1902. https://doi.org/10.1016/S0140-6736(19)32633-9

Wicherts, J.M., Borsboom, D., Kats, J., & Molenaar, D. (2006). The poor availability of psychological research data for reanalysis. American Psychologist, 61(7), 726-728. http://dx.doi.org/10.1037/0003-066X.61.7.726

Working Group on Data Management and Access (2018). A Data Availability Policy for NVSQ. April 15, 2018. https://renebekkers.files.wordpress.com/2020/06/18_04_15-nvsq-working-group-on-data.pdf
