Category Archives: experiments

Tools for the Evaluation of the Quality of Experimental Research

pdf of this post

Experiments can have important advantages over other research designs. The most important advantage concerns internal validity: random assignment to treatment conditions reduces the attribution problem and increases the possibilities for causal inference. An additional advantage is that experimental control reduces the heterogeneity of observed treatment effects.

The extent to which these advantages are realized in the data depends on the design and execution of the experiment. An experiment has higher quality if the sample size is larger and the theoretical concepts are measured more reliably and with higher validity. The sufficiency of the sample size can be checked with a power analysis. To detect the small effect sizes typical of the social sciences (d = 0.2) at conventional significance levels (p < .05) with 95% power, a sample of about 1,300 participants is required (see appendix). Even for a stronger effect size (d = 0.4), more than 300 participants are required. The reliability of normative scale measures can be judged with Cronbach’s alpha. A rule of thumb for unidimensional scales is that alpha should be at least .63 for a scale consisting of 4 items, .68 for 5 items, .72 for 6 items, .75 for 7 items, and so on. The validity of measures should be justified theoretically and can be checked with a manipulation check, which should reveal a sizeable and significant association with the treatment variables.
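
These numbers are easy to verify. The minimal sketch below assumes a two-sample t-test with equal group sizes for the power calculation; the alpha thresholds follow from the Spearman-Brown formula under the assumption of an average inter-item correlation of r = .30.

```python
# Sample sizes for small and moderate effects, assuming a two-sample
# t-test with equal group sizes (two-sided, alpha = .05, power = .95).
from statsmodels.stats.power import TTestIndPower

power_analysis = TTestIndPower()
for d in (0.2, 0.4):
    n_per_group = power_analysis.solve_power(effect_size=d, alpha=0.05,
                                             power=0.95,
                                             alternative='two-sided')
    print(f"d = {d}: about {2 * n_per_group:.0f} participants in total")
# d = 0.2 requires roughly 1,300 participants; d = 0.4 a little over 300.

# The alpha rule of thumb follows from the Spearman-Brown formula,
# assuming an average inter-item correlation of r = .30.
r = 0.30
for k in range(4, 8):
    alpha = k * r / (1 + (k - 1) * r)
    print(f"{k} items: alpha = {alpha:.2f}")  # .63, .68, .72, .75
```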

The advantages of experiments are reduced if assignment to treatment is non-random and treatment effects are confounded. In addition, a variety of other problems may endanger internal validity. Shadish, Cook & Campbell (2002) provide a useful list of such problems.

It should also be noted that experiments can have important disadvantages. The most important disadvantage is that the external validity of the findings is limited to the participants and the setting in which their behavior was observed. This disadvantage can be mitigated by creating more realistic decision situations, for instance in natural field experiments, and by recruiting (non-‘WEIRD’) samples of participants that are more representative of the target population. As Henrich, Heine & Norenzayan (2010) noted, results based on samples of participants in Western, Educated, Industrialized, Rich and Democratic (WEIRD) countries have limited validity in the discovery of universal laws of human cognition, emotion or behavior.

Recently, experimental research paradigms have received fierce criticism. Results of research often cannot be reproduced (Open Science Collaboration, 2015), and publication bias is ubiquitous (Ioannidis, 2005). It has become clear that there is a lot of undisclosed flexibility in all phases of the empirical cycle. While these problems have been discussed widely in communities of researchers conducting experiments, they are by no means limited to one particular methodology or mode of data collection. It is likely that they also occur in communities of researchers using survey or interview data.

In the positivist paradigm that dominates experimental research, the empirical cycle starts with the formulation of a research question. To answer the question, hypotheses are formulated based on established theories and previous research findings. Then the research is designed, data are collected, a predetermined analysis plan is executed, results are interpreted, the research report is written and submitted for peer review. After the usual round(s) of revisions, the findings are incorporated in the body of knowledge.

The validity and reliability of results from experiments can be compromised in two ways. The first is by juggling with the order of phases in the empirical cycle. Researchers can decide to amend their research questions and hypotheses after they have seen the results of their analyses. Kerr (1998) labeled this practice of reformulating hypotheses HARKing: Hypothesizing After the Results are Known. Amending hypotheses is not a problem when the goal of the research is to develop theories to be tested later, as in grounded theory or exploratory analyses (e.g., data mining). But in hypothesis-testing research HARKing is a problem, because it increases the likelihood of publishing false positives. Chance findings are interpreted post hoc as confirmations of hypotheses that a priori are rather unlikely to be true. When these findings are published, they are unlikely to be reproduced by other researchers, creating research waste and, worse, reducing the reliability of published knowledge.

The second way the validity and reliability of results from experiments can be compromised is by misconduct and sloppy science within various stages of the empirical cycle (Simmons, Nelson & Simonsohn, 2011). The data collection and analysis phase as well as the reporting phase are most vulnerable to distortion by fraud, p-hacking and other questionable research practices (QRPs).

  • In the data collection phase, observations that (if kept) would lead to undesired conclusions or non-significant results can be altered or omitted. Also, fake observations can be added (fabricated).
  • In the analysis of data, researchers can try alternative specifications of the variables, scale constructions, and regression models, searching for those that ‘work’ and choosing those that reach the desired conclusion.
  • In the reporting phase, things go wrong when the search for alternative specifications and the sensitivity of the results to decisions made in the data analysis phase are not disclosed.
  • In the peer review process, there can be pressure from editors and reviewers to cut reports of non-significant results, or to collect additional data supporting the hypotheses and the significant results reported in the literature.

These QRPs have several consequences: null findings are less likely to be published; published research is biased towards positive findings that confirm the hypotheses; published findings are not reproducible; and when a replication attempt is made, the replicated findings are less significant, less often positive, and smaller in effect size (Open Science Collaboration, 2015).

Alarm bells, red flags and other warning signs

Some of the forms of misconduct mentioned above are very difficult for reviewers and editors to detect. When observations are fabricated or omitted from the analysis, only inside information, very sophisticated data detectives, or the stupidity of the authors can help us. Many other forms of misconduct are also difficult to prove. While smoking guns are rare, we can look for clues. I have developed a checklist of warning signs and good practices that editors and reviewers can use to screen submissions (see below). The checklist uses terminology that is not specific to experiments, but applies to all forms of data. While a high number of warning signs in itself does not prove anything, it should alert reviewers and editors; there is no norm for the number of flags. The table below only mentions the warning signs; the paper version of this blog post also shows a column with the positive poles. Those who would like to count good practices and reward authors for a higher number can count gold stars rather than red flags. The checklist was developed independently of the checklist that Wicherts et al. (2016) recently published.

Warning signs

  • The power of the analysis is too low.
  • The results are too good to be true.
  • All hypotheses are confirmed.
  • P-values are just below critical thresholds (e.g., p < .05).
  • A groundbreaking result is reported but not replicated in another sample.
  • The data and code are not made available upon request.
  • The data are not made available upon article submission.
  • The code is not made available upon article submission.
  • Materials (manipulations, survey questions) are described superficially.
  • Descriptive statistics are not reported.
  • The hypotheses are tested in analyses with covariates and results without covariates are not disclosed.
  • The research is not preregistered.
  • No details of an IRB procedure are given.
  • Participant recruitment procedures are not described.
  • Exact details of time and location of the data collection are not described.
  • A power analysis is lacking.
  • Unusual / non-validated measures are used without justification.
  • Different dependent variables are analyzed in different studies within the same article without justification.
  • Variables are (log)transformed or recoded in unusual categories without justification.
  • Numbers of observations mentioned at different places in the article are inconsistent. Loss or addition of observations is not justified.
  • A one-sided test is reported when a two-sided test would be appropriate.
  • Reported test statistics (p-values, F-values) are incorrect.

With the growing number of retractions of articles reporting experimental research in scholarly journals, awareness of the fallibility of peer review as a quality control mechanism has increased. Communities of researchers employing experimental designs have formulated solutions to these problems. In the review and publication stage, the following solutions have been proposed.

  • Access to data and code. An increasing number of science funders require grantees to provide open access to the data they have collected and the code they have used. Likewise, authors are required to provide access to data and code at a growing number of journals, such as Science, Nature, and the American Journal of Political Science. Platforms such as Dataverse, the Open Science Framework and GitHub facilitate sharing of data and code. Some journals do not require access to data and code, but award Open Science badges to articles that do provide access.
  • Pledges, such as the ‘21 word solution’, a statement designed by Simmons, Nelson and Simonsohn (2012) that authors can include in their paper to ensure they have not fudged the data: “We report how we determined our sample size, all data exclusions (if any), all manipulations, and all measures in the study.”
  • Full disclosure of methodological details of research submitted for publication, for instance through a preregistration platform, is now required by major journals in psychology.
  • Apps such as Statcheck, p-curve, p-checker, and r-index can help editors and reviewers detect fishy business (see the sketch below). They also have the potential to improve research hygiene when researchers use them to check their own work before they submit it for review.
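
To illustrate the kind of consistency check tools like Statcheck automate, the minimal sketch below recomputes a two-sided p-value from a reported t-statistic and its degrees of freedom and flags a mismatch with the reported p-value. The reported values are hypothetical, and the sketch is not the actual implementation of any of these apps.

```python
# Recompute a two-sided p-value from a reported t-statistic and degrees
# of freedom, and flag a mismatch with the reported p-value.
# The reported values below are hypothetical.
from scipy import stats

reported_t, df, reported_p = 2.10, 28, 0.02
recomputed_p = 2 * stats.t.sf(abs(reported_t), df)
print(f"recomputed p = {recomputed_p:.3f}")  # about .045, not .02
if abs(recomputed_p - reported_p) > 0.005:
    print("Red flag: reported p-value is inconsistent with t and df.")
```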

As these solutions become more commonly used, we should see the quality of research go up. The number of red flags in research should decrease and the number of gold stars should increase. This requires not only that reviewers and editors use the checklist, but, most importantly, that researchers themselves use it as well.

The solutions above should be supplemented by better research practices before researchers submit their papers for review. In particular, two measures are worth mentioning:

  • Preregistration of research, for instance on a public registry. An increasing number of journals in psychology require research to be preregistered. Some journals guarantee publication of research regardless of its results, after a round of peer review of the research design.
  • Increasing the statistical power of research is one of the most promising strategies to increase the quality of experimental research (Bakker, Van Dijk & Wicherts, 2012). In many fields and for many decades, published research has been underpowered, using samples of participants that are too small to reliably detect the reported effect sizes. Using larger samples reduces the likelihood of both false positives and false negatives.

A variety of institutional designs have been proposed to encourage the use of these solutions, including removing incentives for questionable research practices from researchers’ careers and from hiring and promotion decisions, rewarding researchers for good conduct through badges, the adoption of voluntary codes of conduct, and the socialization of students and senior staff through teaching and workshops. Research funders, journals, editors, authors, reviewers, universities, senior researchers and students all have a responsibility in these developments.

References

Bakker, M., Van Dijk, A. & Wicherts, J. (2012). The Rules of the Game Called Psychological Science. Perspectives on Psychological Science, 7(6): 543–554.

Henrich, J., Heine, S.J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33: 61–135.

Ioannidis, J.P.A. (2005). Why Most Published Research Findings Are False. PLoS Medicine, 2(8): e124. http://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020124

Kerr, N.L. (1998). HARKing: Hypothesizing After the Results are Known. Personality and Social Psychology Review, 2: 196-217.

Open Science Collaboration (2015). Estimating the Reproducibility of Psychological Science. Science, 349(6251): aac4716. http://www.sciencemag.org/content/349/6251/aac4716.full.html

Shadish, W.R., Cook, T.D., & Campbell, D.T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston, MA: Houghton Mifflin.

Simmons, J.P., Nelson, L.D., & Simonsohn, U. (2011). False positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22: 1359–1366.

Simmons, J.P., Nelson, L.D. & Simonsohn, U. (2012). A 21 Word Solution. Available at SSRN: http://ssrn.com/abstract=2160588

Wicherts, J.M., Veldkamp, C.L., Augusteijn, H.E., Bakker, M., Van Aert, R.C.M. & Van Assen, M.A.L.M. (2016). Researcher degrees of freedom in planning, running, analyzing, and reporting psychological studies: A checklist to avoid p-hacking. Frontiers in Psychology, 7: 1832. http://journal.frontiersin.org/article/10.3389/fpsyg.2016.01832/abstract


Filed under academic misconduct, experiments, fraud, incentives, open science, psychology, survey research

The Fishy Business of Philanthropy

Breaking news today: the essential amino acid L-Tryptophan (TRP) makes people generous! Three psychologists at the University of Leiden, Laura Steenbergen, Roberta Sellaro, and Lorenza Colzato, secretly gave 16 participants in an experiment a dose of TRP, dissolved in a glass of orange juice. The 16 other participants in the study drank plain orange juice, without TRP. The psychologists did not report where the experiment was conducted, but describe the participants as 28 female and 4 male students in southern Europe – likely Italy, given the names of the second and third authors. Next, the participants were kept busy for 30 minutes with an ‘attentional blink task that requires the detection of two targets in a rapid visual on-screen presentation’. After they had completed the task, they were given a reward of €10. Then the participants were given an opportunity to donate to four charities: Unicef, Amnesty International, Greenpeace, and World Wildlife Fund. And behold the wonders of L-Tryptophan: the 0.8 grams of TRP more than doubled the average amount donated, from €0.47 (yes, that is less than five percent of the €10 earned) to €1.00. Even though the amounts donated are small, the increase due to TRP is huge: +112%.

Why is this good to know? Why does tryptophan increase generosity? Steenbergen, Sellaro and Colzato reasoned that TRP influences the synthesis of the neurotransmitter serotonin (called 5-HT), which has been found to be associated with charitable giving in several economic experiments. The participants in the experiment were not tested for serotonin levels, but the results are consistent with these previous experiments. The new experiment takes us one step further into the biology of charity, by showing that the intake of food enriched with tryptophan makes female students in Italy more generous to charity.

Tryptophan is an essential amino acid, commonly found in protein-rich foods such as chocolate, eggs, milk, poultry, fish, and spinach. Rense Corten, a former colleague of mine, asked on Twitter how much spinach the participants would have had to digest to obtain a TRP intake that would make them give an additional €1 to charity. Just for fun I computed this: about 438 grams of spinach – less than the 1,161 grams of chocolate it would take to generate the same dose of TRP as the participants got in their orange juice.
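
The arithmetic is simple. In the sketch below, the tryptophan contents per gram of spinach and chocolate are assumptions, back-calculated so that they reproduce the figures above; actual food composition tables vary.

```python
# How many grams of food deliver the 0.8 g dose of TRP from the experiment?
# The TRP contents below are assumed values that reproduce the figures
# in the text; actual food composition tables vary.
DOSE_G = 0.8  # grams of tryptophan dissolved in the orange juice

trp_per_gram = {
    "spinach": DOSE_G / 438,     # roughly 1.8 mg TRP per gram (assumed)
    "chocolate": DOSE_G / 1161,  # roughly 0.7 mg TRP per gram (assumed)
}

for food, content in trp_per_gram.items():
    grams_needed = DOSE_G / content
    print(f"{food}: {grams_needed:.0f} grams for a {DOSE_G} g dose")
```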

The fairly low level of giving in the experiment is somewhat surprising given the overall level of charitable giving in Italy. According to the Gallup World Poll, some 62% of Italians made donations to charity in 2011, ranking the country 14th in the world. But wait – Italians eat quite a lot of fish, don’t they? If there is a lot of tryptophan in fish, Italians should be more generous than the inhabitants of countries that consume less fish. Indeed, annual fish consumption per capita in Italy (some 25 kilograms, ranking the country 14th in the world) is much higher than in the Czech Republic (10 kilograms; rank: 50), and the Czech population is less likely to give to charity (31%; rank: 30).

Of course this comparison of just two European countries is not representative of any part of the world. And yes, it is cherry-picked: an initial comparison with Austria (14 kilograms of fish per year, much less than in Italy) did not yield a result in the same direction (69% give, more than in Italy). But lining up all countries in the world for which there are data on both fish consumption and engagement in charity does yield a positive correlation between the two. Here is the Excel file with the data. The relationship is modest (r = .30), but still: we now know that inhabitants of countries that consume more fish per capita are somewhat more likely to give to charity.
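
For those who want to reproduce the correlation, a minimal sketch follows. The file and column names are assumptions; substitute the names used in the Excel file linked above.

```python
# Correlate per-capita fish consumption with the share of the population
# giving to charity across countries. File and column names are assumed.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_excel("fishconsumption_givingtocharities.xlsx")
df = df.dropna(subset=["fish_kg_per_capita", "pct_giving"])
r, p = pearsonr(df["fish_kg_per_capita"], df["pct_giving"])
print(f"r = {r:.2f}, p = {p:.3f}")  # the post reports r = .30
```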

[Figure: fish consumption per capita plotted against the share of the population giving to charity, by country]


Filed under experiments, household giving, methodology, philanthropy

The Curious Event of the Money in Broad Daylight

This post in pdf

One day I was cycling back home from work when I suddenly found myself in a curious situation. Shimmering in the gutter lay a folded €20 bill. It was just lying there, between the fallen leaves, in front of one of those expensive homes that I passed by every day. It was as if the bill called out to me: ‘Pick me up!’ I saw nobody coming from the house. But the road was quite busy with cyclists. There was a student a few meters behind me – I had just passed her – and I saw a man a little further behind. I did not know the student, nor the man, who looked like a fellow academic.

I slowed down, and looked over my shoulder. The student and the man behind me slowed down too, but had not noticed the bill. I pulled over and picked it up. The student stopped cycling and got off her bike. The young woman looked me in the eye and smiled. I realized that I had been the lucky person to find the money, but that I was no more entitled to take it home than she was. “Is this yours?” I joked.

“Ehhm…no”, she said. Of course the money wasn’t hers. I had only asked her whether the money was hers to make myself feel more entitled to take it. It did not work. The money was not mine and I knew it. I had to find an excuse not to share the money. I bluffed. I held the bill in the air, made a ripping gesture and said: “We could split it…?” The man who was behind us had slowed down and looked at us. The student laughed and said: “Well, do you have a €10?” I realized I was trapped. Before I knew it I replied: “You never know”. I knew I did have a €10 bill in my wallet. I flipped it open, took out the €10 and gave it to her. The man frowned as he passed by. He certainly looked like an academic and seemed puzzled. I tucked the €20 away in my wallet. The student smiled and said “Thank you. Enjoy your day!” And I did. The sun shone brighter that day.

Later I realized that the incident with the money in broad daylight is curious not just because it was such a unique event. It was also curious because it resembles a situation that I thought existed only in artificial experimental settings. Even on the day of the event I had been reading articles about ‘dictator game’ experiments. In these experiments, often conducted in psychological laboratories with students sitting alone in small cubicles, participants think they are participating in a study on ‘decision making’ or ‘emotions’ but then suddenly get $10 in $1 bills. The students have not done anything to get the money. They just showed up at the right time at the right place, usually in exchange for a smaller ‘show-up’ fee of $5. Their task in the experiment with the $10 is to decide how much of the $10 they would like to keep and how much they will give to an ‘anonymous other participant’. The receiver cannot refuse the money – that is why economists call the experiment a ‘Dictator Game’. The participant has the power to donate any desired amount, from $0 to $10. The payout happens in a separate room after the experiment. All participants enter the room individually and receive an envelope containing the money that their dictator has donated – if any. An ingenious procedure ensures that nobody (except the dictator, of course) will know who donated the money she receives. The recipient will not know who her dictator was.

Despite the unfavorable circumstances, participants in dictator games typically give away at least some of the money that they have received. In fact, the proportion of participants giving away nothing at all averages a little over one third. Almost two thirds of the participants in these experiments donate at least $1. When I first read about these experiments, I found the results fascinating and puzzling. Why would anyone give anything? No punishment for not donating is possible – except feelings of guilt – because the receiver has no power to refuse the money. Without realizing that I had been in a real-life dictator game, I had behaved as many students do in the laboratory.

Another reason why the incident with the money was curious was that it made me think again about theories on generosity that I had learned from reading articles in scientific journals. I thought these theories had given me some insight into why people give. But now that I had been in a real-life dictator game, the ‘Generosity Puzzle’ seemed more difficult to solve. Why on earth do people give away money to people they don’t know? Why do people give money to people that they will probably never meet again, and who will not be able to give back what they have been given?

Because of the incident, these questions suddenly became personal. Why had I given away half of the money to a student whom I did not know and would probably never see again? Was it her smiling face when she asked whether I had a €10 bill? What if she had become angry with me and demanded half of the money? If she had not had the nerve to ask whether I had a €10 bill, I would probably have left with €20 instead of €10. Or what if the student had been male? Would I have shared the money with him? And what if the man cycling behind us had joined our conversation? He had slowed down but kept cycling. Though there is no easy way to split €20 into three equal amounts, there is also no good reason why he did not ask for an equal share.

Perhaps a more remote influence had made me split the money with the student. Was it my parents, who taught me the value of sharing? I remember a family holiday in Scandinavia with my parents and my brother when I was young. We paused at a parking lot and I walked around looking for stones. Suddenly I found three bills lying on the ground next to a large truck. The money was a small fortune to me. Just as I would later do with the €20 bill, I tried to find the owner, but there was nobody in the truck or anywhere on the parking lot. I gave the money to my mother. Upon our return to the parking lot at the end of the day, we found a parking fine on our car. The money I found went to the Oslo police.

Of course I also played a role in the event of the money myself. I could have just taken the money without saying anything. If I had not asked whether the money was hers, the student would probably have gone home without any money from me. I offered to split the money because I felt lucky, but not entitled to keep it. You can keep money that you have worked for. If I had not endorsed this principle and if I had not felt lucky finding the money, I would probably have kept it.

The incident of the money could have ended quite differently if the circumstances and the people involved had been different. Research on generosity shows that almost everything in the incident influenced the level of generosity that eventually took place. Though the incident was unique, it does share a fundamental property of generosity: it is the product of a wide range of factors. It is not just the outcome of the values and personalities of the people involved – my gratitude, the justice principle, and the boldness of the student. More transient factors, such as a good mood after a productive day’s work, also influence generosity. Even seemingly meaningless characteristics of the situation, such as the weather, the smile of a stranger and eye contact with a passer-by, can have a profound impact on generosity. These factors have been studied by scholars in many different scientific disciplines who often work in mutual isolation. I hope my research efforts provide some useful pieces to the Generosity Puzzle.


Filed under altruism, empathy, experiments, helping, principle of care

Philanthropic Studies: Two Historical Examples

This post was published earlier in the newsletter of the European Research Network on Philanthropy

The 20th century has seen a tremendous growth of the scientific enterprise. The increasing productivity of scientists has been accompanied by a proliferation of academic disciplines. While it is hard to determine an exact time and place of birth, the emergence of a separate field of research on philanthropy – Philanthropic Studies – took place largely in the 1980s in the United States of America (Katz, 1999). Looking back further in time, philanthropy American style obviously has European roots. My favorite example to illustrate these origins – admittedly slightly patriotic – is the way the hallmark of capitalism was financed, documented by Russell Shorto in his book The Island at the Center of the World. Wall Street was built as a defense wall by the Dutch colonists against the Indians, the Swedes and the English, funded by private contributions of the citizens of New Amsterdam. The contributions were not altruistic in the sense that they benefited the poor, or in the sense that they were motivated by concern for the welfare of all. Nor were these contributions entirely voluntary. There was no system of taxes in place at the time, but Peter Stuyvesant went around the richest inhabitants of the city with his troops to collect contributions, in monetary or material form. I imagine the appeal to self-interest was occasionally illustrated by a show of guns when contributions were not made spontaneously.

[Image: Mannados (New Amsterdam)]

Today the study of philanthropy is spread over a large number of disciplines. It is not just sociologists, economists and psychologists who examine the causes, consequences and correlates of philanthropy, but also scholars in public administration, political science, communication science, marketing, behavioral genetics, neurology, biology, and even psychopharmacology. Ten years ago, when Pamala Wiepking and I were writing a literature review of research on philanthropy, we gathered as many empirical research papers on philanthropy as we could find. We categorized the academic disciplines in which the research was published. The graph below displays the results of this categorization (for details, see our blog Understanding Philanthropy). The emergence of a separate field of philanthropic studies is visible, along with increasing attention to philanthropy in economics.

After we had concluded our literature review, I discovered a new classic. I would like to share this gem with you. It is an astonishing paper written by Pitirim Sorokin, a Russian sociologist who was exiled to the US in 1922 and founded the department of sociology at Harvard University in the 1930s. Before that, he conducted experiments at the University of Minnesota, some of which examined generosity. The paper was published in German in 1928, in the Zeitschrift für Völkerpsychologie und Soziologie. It was not easy to obtain a copy, but I managed to get one with the generous help of the staff at the University of Saskatchewan, where the complete works of Sorokin are archived; see http://library2.usask.ca/sorokin/. I have posted a pdf of the paper here: https://renebekkers.files.wordpress.com/2014/10/sorokin_28_full.pdf


Working with two colleagues, Sorokin asked students at the University of Minnesota how much money they were willing to donate to a fund for talented students, which would allow them to buy mathematical equipment (‘diagrams and a calculator’), and varied the severity of need and the social distance to the students. The experiment showed that willingness to give declined with the severity of need and with social distance. Students were willing to donate more for fellow students who were closer to them but needed less financial assistance.

Sorokin also gave the participants statements expressing egalitarian and justice concerns, to see whether the students acted in line with their attitudes. The attitudes were much more egalitarian than the responses in the hypothetical giving experiment. He was careful enough to note that the results of the experiment could not easily be generalized and needed replication in other samples, a critique repeated forcefully by Henrich et al. (2010). Sorokin saw his experiment as the beginning of a series of studies. However, the paper seems to have been forgotten entirely – Google Scholar mentions only 7 citations, the most recent from 1954. This is unfortunate. The experiment is truly groundbreaking, both in its methodology and in its results. More than eight decades later, economists are conducting experiments with dictator games that are very similar to the experiment Sorokin conducted. Perhaps this brief description brings his research back onto the stage.

References

Bekkers, R. & Wiepking, P. (2011). ‘A Literature Review of Empirical Studies of Philanthropy: Eight Mechanisms that Drive Charitable Giving’. Nonprofit and Voluntary Sector Quarterly, 40(5): 924-973.

Henrich, J., Heine, S.J., & Norenzayan, A. (2010). ‘The weirdest people in the world?’ Behavioral and Brain Sciences 33: 61–83.

Katz, S.N. (1999). ‘Where did the serious study of philanthropy come from, anyway?’ Nonprofit and Voluntary Sector Quarterly, 28: 74-82.

Sorokin, P. (1928). ‘Experimente zur Soziologie’. Zeitschrift für Völkerpsychologie und Soziologie, 1(4): 1-10.


Filed under altruism, data, Europe, experiments, helping, history, Netherlands, philanthropy

VU University Amsterdam is seeking applications for a fully funded PhD dissertation research position on ‘Philanthropic Crowdfunding for the Cultural Heritage Sector’

The PhD project will focus on the characteristics of individual crowdfunders and of crowdfunding projects that influence donation behavior. Specifically, the research investigates the effects of online context characteristics on the motivations and giving behavior of crowdfunders, as well as the organizational arrangements in which crowdfunding campaigns are embedded. The central question of this research project is: which characteristics of crowdfunders and projects affect donation behavior and contribute to more effective donation-based crowdfunding projects?

 

Tasks

The PhD student is expected to:

• Collaborate in a multidisciplinary research team;

• Organize large scale field experiments;

• Analyze behaviour in crowdfunding projects with multiple quantitative research methods;

• Write articles for international peer reviewed scientific journals;

• Write a PhD thesis;

• Contribute to some teaching tasks of the Department.

 

Requirements

• MSc in social and/or behavioral sciences with a focus on organizational and/or philanthropic studies;

• Strong interest in field experiments;

• The PhD research candidate needs to be proficient in spoken and written English.

 

Further particulars

Job title: PhD position in Organization Science, ‘Philanthropic Crowdfunding for the Cultural Heritage Sector’

Fte: 0.8-1.0

VU unit: Faculty of Social Sciences
Vacancy number: 14127
Date of publication: April 3, 2014
Closing date: April 24, 2014

 

The initial appointment will be for 1 year. After satisfactory evaluation of the initial appointment, it can be extended for a total duration of 4 years. The candidate will participate in the PhD programme of the Faculty of Social Sciences. The research will be supervised by Prof. Dr. Marcel Veenswijk, Dr. Irma Borst (Organization Sciences) and Prof. Dr. René Bekkers (Center for Philanthropic Studies).

 

The preferred starting date is 1 June 2014, and no later than September 2014. You can find information about our excellent fringe benefits of employment at www.workingatvu.nl, such as:

• remuneration of an 8.3% end-of-year bonus and an 8% holiday allowance;

• a minimum of 29 days of annual leave in case of full-time employment;

• a generous contribution (70%) to commuting costs based on public transport;

• discounts on collective insurances (healthcare and car insurance).

 

Salary

The salary is €2,083.00 gross per month in the first year, increasing to €2,664.00 (salary scale 85) in the fourth year, based on full-time employment.

 

About the VU Amsterdam Faculty of Social Sciences

VU University Amsterdam is one of the leading institutions for higher education in Europe and aims to be inspiring, innovative, and committed to societal welfare. It comprises twelve faculties and has teaching facilities for 25,000 students.

The Faculty of Social Sciences (FSS) is one of the larger faculties of VU University. Over 2,700 students and more than 300 employees are engaged in teaching and research on social-science issues. The faculty has 5 bachelor’s and 7 master’s programmes, which are characterized by their broad and often multidisciplinary character.

 

The department of Organization Sciences focuses on the processes and phenomena that result in the effective and efficient functioning of organizations. Among the topics studied are entrepreneurship, innovation, university-industry cooperation and the valorization of research results. For this specific research project, the department of Organization Sciences and the Center for Philanthropic Studies received a grant from the Netherlands Organisation for Scientific Research (NWO).

For additional information, please contact Dr. Irma Borst (e-mail: w.a.m.borst@vu.nl), Prof. Dr. Marcel Veenswijk (e-mail: m.b.veenswijk@vu.nl) or Prof. Dr. René Bekkers (e-mail: r.bekkers@vu.nl).

 

Application

Applicants are requested to write a letter in which they describe their abilities and motivation, accompanied by a curriculum vitae and one or two references. The written applications, mentioning the vacancy number in the e-mail header or at the top left of the letter and envelope, should be submitted before April 24, 2014 to:

VU University Amsterdam
Faculty of Social Sciences
to the attention of Mrs. Dr. J.G.M. Reuling, managing director
De Boelelaan 1081
1081 HV Amsterdam, The Netherlands

Or preferably by e-mail: vacature.org.fsw@vu.nl

Leave a comment

Filed under Center for Philanthropic Studies, crowdfunding, economics, experiments, household giving, incentives, philanthropy

Government Budget Cuts Reduce Donations to Development Aid

Three new research findings diminish the hope that citizens will compensate for government budget cuts to international relief organizations by giving more:

  1. Budget cuts reduce relief organizations’ investments in fundraising;
  2. People prefer to give to causes that others also support;
  3. More Dutch citizens say they will cut their giving along with the government than say they will give more if the government cuts its budget.

I presented these three results today at a seminar of NCDO in The Hague. More details can be found here.

Leave a comment

Filed under altruism, charitable organizations, disaster relief, experiments, household giving, law, politics

Haiyan Typhoon Relief Donations: Research Insights

To address the needs of people affected by Super Typhoon Haiyan – locally known as Yolanda – which hit the Philippines on November 8, 2013, international relief organizations in the Netherlands are collectively raising funds on Monday, November 18, 2013. Commercial and public national TV and radio stations are working together in the fundraising campaign. In the past week many journalists have asked the question: “Will the campaign be a success?” Because it is strange to give references to academic research papers in interviews, here are some studies that looked at the determinants of giving to disaster relief campaigns.

Update, December 2, 2013:

When asked in a TV interview to make a prediction about the total amount raised, I replied that the Dutch would give between €50 and €60 million. That prediction was a hunch; it was not based on any calculation of data. It turned out to be far too optimistic. The total amount raised by November 25 was €30 million.

In retrospect, the declining donor confidence index could have prevented such an optimistic estimate. In almost every year since its inception in 2005 we see an increase in donor confidence in the final quarter. The year 2013 is as bad as the crisis year 2009: we see a decline in donor confidence. It may be even worse: in 2009 donor confidence declined along with consumer confidence. In 2013, however, donor confidence declined in the final quarter despite an increase in consumer confidence.
[Figure: donor confidence index by quarter, 2005–2013]

 

1 Comment

Filed under altruism, charitable organizations, disaster relief, empathy, experiments, household giving, philanthropy, psychology