In the prehistoric era of competitive science, researchers were like magicians: they earned a reputation for tricks that nobody could repeat and shared their secrets only with trusted disciples. In the new age of open science, researchers share by default, not only with peer reviewers and fellow researchers, but with the public at large. The transparency of open science reduces the temptation of private profit maximization and the collective inefficiency of the information asymmetries inherent in competitive markets. In a seminar organized by the University Library at Vrije Universiteit Amsterdam on November 1, 2018, I discussed recent developments in open science and their implications for research careers and progress in knowledge discovery. The slides are posted here.
Using Stata, I wondered how these results would look in a regression framework. For those of you who want to replicate this: I used the data provided by Gordon. The do-file is here. Because WordPress does not accept .do files, you will have to rename the file from .docx to .do to make it work. The Stata commands are below, all in block quotes. The output is given in images. In the explanatory notes, commands are italicized and variables are underlined.
First let’s examine the data. You will have to insert your local path at which you have stored the data.
. import delimited "ANOVA_blog_data.csv", clear
. pwcorr before_treatment after_treatment before_placebo after_placebo
These commands get us the following table of correlations:
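The mean values discussed next appear only in the output images; in Stata they would be produced by a summarize command along these lines (a sketch, assuming the variable names from the import above):

. summarize before_treatment after_treatment before_placebo after_placebo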
There are some differences in mean values, from 98.8 before treatment to 105.0 after treatment. Mean values for the placebo measures are 100.8 before and 100.2 after. Across all measures, the average is 101.2035.
Let’s replicate the t-test for the treatment effect.
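The t-test commands themselves are only visible in the output images; on the wide data, the paired t-tests would look like this (a sketch using the original variable names):

. ttest after_treatment == before_treatment
. ttest after_placebo == before_placebo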
The change in IQ after the placebo is -.6398003 (SE = 1.978064), which is not significant (p = .7477).
The question is whether we have taken sufficient account of the nesting of the data.
We have four measures per participant: one before the treatment, one after, one before the placebo, and one after.
In other words, we have 50 participants and 200 measures.
To get the data into the nested structure, we have to reshape them.
The data are now in a wide format: one row per participant, IQ measures in different columns.
But we want a long format: 4 rows per participant, IQ in just one column.
To get this done we first assign a number to each participant.
. gen id = _n
We now have a variable id with a unique number for each of the 50 participants.
The Stata command for reshaping data requires the data to be set up in such a way that variables measuring the same construct have the same name.
We have 4 measures of IQ, so the new variables will be called iq1, iq2, iq3 and iq4.
. rename (before_treatment after_treatment before_placebo after_placebo) (iq1 iq2 iq3 iq4)
Now we can reshape the data. The command below creates a new variable mIQ to identify the 4 consecutive measures of IQ.
. reshape long iq, i(id) j(mIQ)
Here’s the result.
We now have 200 lines of data, each one is an observation of IQ, numbered 1 to 4 on the new variable mIQ for each participant. The variable mIQ indicates the order of the IQ measurements.
Now we identify the structure of the two experiments. The first two measures in the data are for the treatment pre- and post-measures.
. gen treatment = .
(200 missing values generated)
. replace treatment = 1 if mIQ < 3
(100 real changes made)
. replace treatment = 0 if mIQ > 2
(100 real changes made)
Observations 3 and 4 are for the placebo pre- and post-measures.
. gen placebo = .
(200 missing values generated)
. replace placebo = 0 if mIQ < 3
(100 real changes made)
. replace placebo = 1 if mIQ > 2
(100 real changes made)
. tab treatment placebo
We have 100 observations in each of the experiments.
OK, we’re ready for the regressions now. Let’s first conduct an OLS to quantify the changes within participants in the treatment and placebo conditions.
. reg iq mIQ if treatment == 1
The regression shows that the treatment increased IQ by 6.13144 points, but with an SE of 3.863229 the change is not significant (p = .116). The effect estimate is correct, but the SE is too large and hence the p-value is too high as well.
. reg iq mIQ if placebo == 1
The placebo regression shows the familiar decline of .6398003, but with an SE of 3.6291, which is too high (p = .860). The SE and p-values are incorrect because OLS does not take the nested structure of the data into account.
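A common remedy that stays within OLS, not pursued in this post, is to keep the point estimates but make the standard errors robust to clustering within participants, using the vce(cluster id) option, for example:

. reg iq mIQ if treatment == 1, vce(cluster id)

This corrects the standard errors for the dependence of observations within participants without modeling the nesting explicitly.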
With the xtset command we identify the nesting of the data: measures of IQ (mIQ) are nested within participants (id).
. xtset id mIQ
First we run an ‘empty model’ – no predictors are included.
. xtreg iq
- The constant (_cons) is the average across all measures, 101.2033. This is very close to the average we have seen before.
- The rho is the intraclass correlation – the average correlation of the 4 IQ measures within individuals. It is .7213, which seems right.
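In the xtreg output, rho is computed from the two variance components as rho = sigma_u^2 / (sigma_u^2 + sigma_e^2), where sigma_u is the standard deviation of the participant-level intercepts and sigma_e is the standard deviation of the residuals within participants.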
Now let’s replicate the t-test results in a regression framework.
. xtreg iq mIQ if treatment == 1
In the output below we see the 100 observations in 50 groups (individuals). We obtain the same effect estimate of the treatment as before (6.13144) and the correct SE of 2.134277, but the p-value is too small (p = .004).
Let’s fix this. We put fixed effects on the participants by adding , fe at the end of the xtreg command:
. xtreg iq mIQ if treatment == 1, fe
. xtreg iq mIQ if placebo == 1, fe
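For readers who prefer mixed-model syntax, a random-intercept specification can also be estimated with Stata’s mixed command (a sketch, not part of the original analysis):

. mixed iq mIQ if treatment == 1 || id:

The fixed-effects part of this model can be compared with the xtreg results above.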
“What do people misunderstand about your research?” A great question that allows me to correct a few popular ideas about our research on philanthropy.
1. Who pays you? The first misunderstanding is that charities pay for our research on philanthropy. We understand why you would think that, because for charitable organizations it is useful to know what makes people give. After all, they are in the business of fundraising. On the other hand, you would not assume that second-hand car dealers or diamond traders fund research on trust, or that ski resort owners fund climate change research. We are talking to foundations and fundraising organizations about the insights from our work that may help them in their business, but the work itself is funded primarily by the Ministry of Justice and Security of the government of the Netherlands and by the DG Research & Innovation of the European Commission.
2. What is the best charity? The second misunderstanding is that we vet charities and foundations, like we are some sort of philanthropy police. We don’t rate effective charities or give prizes for the best foundations, nor do we keep lists of bad apples in the philanthropy sector. We don’t track the activities that charities spend their funds on, or how much is ‘actually going to the cause’. If you need this kind of information, check the annual reports of organizations. We do warn the public that raising money costs money and that organizations saying they have no overhead costs are probably doing something wrong.
3. What is altruism? The third misunderstanding is that altruism is a gift that entails a sacrifice. You can hear this when people give each other compliments like: “That is very altruistic of you!” When people give to others despite the fact that they have little themselves and giving is costly, we tend to think this gift is worth more than a relatively small gift by a wealthy person. The term you are looking for here is generosity, not altruism. Altruism is a gift motivated by a concern for the well-being of the recipient. How much of the giving we see is altruism is one of the key questions on philanthropy. Which conditions make people give out of altruism, and what kind of people are more likely to do so, is a very difficult question to answer, because it is so difficult to isolate altruism from egoistic motivations for giving.
4. Crowding-in. The fourth misunderstanding is that less government implies more philanthropy. You can hear this in statements like “Americans give so much because the government there does so little”. The desire to have a small government is a political goal in itself, not an effective way to increase philanthropy. As government spending increases, citizens do not give less, and conversely, as government spending decreases, citizens do not give more. In the past decades, giving in the USA as a proportion of GDP has essentially been a flat line with some fluctuation around 2%, even though government spending has increased enormously in this period. Nor are countries in which government spending as a proportion of GDP is higher necessarily countries in which people give less. In Europe, we even see the opposite: in countries where citizens pay more taxes, a higher proportion of the population gives to charity. Learn more about this by reading my lecture ‘Values of Philanthropy’ at the 13th ISTR Conference we organized at VU Amsterdam.
PS – It was the tweet below (link here) that prompted this post:
The Department of Sociology of the Faculty of Social Sciences at the Vrije Universiteit Amsterdam is looking for a professor in the area of charity lotteries. The professor is expected to conduct research on the relations between charity lotteries, nonprofit organizations, and the government. The chair is embedded in the Department of Sociology of the VU and is closely connected to the center of expertise in teaching and research on philanthropy, the Center for Philanthropic Studies of the Faculty of Social Sciences. The Dutch Postcode Lottery (Nationale Postcode Loterij) is financing the chair.
Through scientific research, the chair will contribute to the production of knowledge on the societal significance of charity lotteries. By doing so, the chair will also contribute to the development of new scientific insights in philanthropy. The chair will disseminate results of research through publications, lectures and workshops to both academic audiences and applied audiences (professionals as well as the general public).
The chair has three objectives:
(1) the expansion of knowledge about the societal significance of charity lotteries, in a direct relationship with the philanthropic sector;
(2) the dissemination of this knowledge;
(3) the expansion of collaboration with researchers both within the VU and beyond who study charity lotteries and philanthropic behavior.
A more elaborate description of the envisioned activities of the chair is available upon request.
The chair holder meets the following requirements:
• a PhD degree in the social sciences, preferably for a study on philanthropy;
• knowledge of recent developments in the philanthropic sector;
• publications in national and international journals;
• an interest in international developments in lotteries and philanthropy;
• demonstrable skills as a research leader;
• skill and experience in teaching in academic programs;
• the ability to inspire and lead a team of academic researchers;
• experience in supervising PhD candidates;
• a proven ability to attract external research funding, including funding from other sources for dissertation research in the field of the chair.
We would like our department to reflect our diverse student population and therefore especially encourage international, female and ethnic minority candidates to apply.
The chair is a part-time appointment of 0.2 fte, initially for a duration of 5 years.
You can find information about our excellent fringe benefits of employment via https://www.vu.nl/en/employment/, such as:
• an 8.3% end-of-year bonus and an 8% holiday allowance;
• a minimum of 29 holidays in case of full-time employment;
• discounts on collective insurances (healthcare and car insurance).
The salary will be in accordance with university regulations for academic personnel and, depending on experience, will range from a minimum of € 5,440.00 gross per month to a maximum of € 7,921.00 gross per month (salary scale H2) based on full-time employment.
For additional information please contact Professor René Bekkers via e-mail: firstname.lastname@example.org.
Applications should be sent in pdf by e-mail before 1 September 2018 to Secretariaat.SOC.FSW@vu.nl, to the attention of prof. dr. Rene Bekkers, mentioning “application: Professor Societal significance of charity lotteries”.
By René Bekkers & Pamala Wiepking
In decisions on academic careers, the societal impact that researchers have with their research is gaining importance, in addition to incentives for academic impact. Common indicators of academic impact are how often the researcher’s work has been cited (summarized, for instance, in the so-called H-index) and the impact factor (IF) of the journals in which the researcher has published.
What the Journal Impact Factor is not
It is widely believed that it is more difficult to get published in journals with a higher IF because they are more attractive and can afford to desk reject a larger proportion of the submissions. However, journals with higher IFs do not necessarily publish research of higher quality. Because scientists in some countries can quickly advance their careers and even receive monetary bonuses by publishing in high IF journals, they also attract a lot of low-quality research. Once in a while, a piece of garbage slips through because the peer review process is not perfect.
Also, publishing an article in a high IF journal does not necessarily imply that the article will be more widely read or receive a higher number of citations. The overwhelming majority of articles in high IF journals receive low numbers of citations; the IF is driven by a few articles that are highly cited.
Article impact: citations
The highest impact paper we have published is a literature review of empirical studies on philanthropy. It was originally written as a background paper for a request for proposals of the John Templeton Foundation (JTF), and later published in three separate, standalone articles: one in Nonprofit & Voluntary Sector Quarterly (NVSQ) and two in Voluntary Sector Review (VSR). As a self-archived working paper, the JTF paper already attracted some attention, but the NVSQ article quickly received large numbers of citations, and continues to do so. The journals in which we published the papers were not the highest IF journals in which we published throughout our careers. In fact, papers we published in higher IF journals have attracted way fewer citations.
From what we have heard from readers, the paper is useful to many people because it provides a map of the landscape of research on philanthropy. Our review pointed them to studies that are relevant to their specific research questions, studies that they would otherwise not have found. Often this impact is invisible, because researchers do not cite our review, but instead cite the papers we reviewed. So the impact we have with the paper is visible not so much in its number of citations, but in its number of readers. There is no way to track whether people actually read it, but the paper has been downloaded thousands of times.
Did we achieve the objectives of the paper?
The number of downloads is a quantifiable, generic indicator of impact. But a high number of downloads is not why we wrote this paper. If you see research as an intervention, its impact should be evaluated relative to intention: did it achieve the intended goals?
We wrote the paper to provide “a reference resource for classical intuitions” (p.925) for researchers who have an idea for a study, but do not know what previous research has found. We hoped that our review would reduce “the lack of awareness of research in distant times and disciplines” (p.945). We wanted to acquaint researchers in different fields with each other’s work. Two goals that we did not explicitly state in the paper because they seemed overly ambitious were to integrate these relatively isolated bodies of research and ultimately to establish a common knowledge base for research on philanthropy.
It is very difficult to determine to what extent we have been successful with this paper. Perhaps in a decade or two it may become clear how useful our contribution has been in establishing a common knowledge base for research on philanthropy. It would also be difficult to quantify whether our review actually integrated different fields. One could count the number of citations to research in other disciplines before and after our review was published, and in studies that cite our review and those that do not. According to our own standards, however, those numbers would still not prove impact.
Yet we do believe we have made a difference. Many academics who were new to the topic of philanthropy have told us that our review was helpful in finding their way in the literature. We also know that our readers are more aware of research in other disciplines than the citations demonstrate. Word limits for journal articles lead researchers to omit relevant references, and they focus on work published in the same journal and discipline.
Did we change the practice of fundraising?
What about the societal impact? As we wrote in our paper (p. 926), we hoped that our review would not only be useful for an academic audience but also for practitioners. What impact did we have on the practice of fundraising? We could count the number of talks and seminars we were invited to give and actually gave and the number of attendees at these events. But this is not measuring impact, and not why we wrote the paper. We hoped that fundraisers would take advantage of the insights gained in the studies we reviewed to increase fundraising effectiveness (p. 926). We learned from conversations with fundraisers and other philanthropy professionals who read our paper or attended our talks that insights from our literature review indeed have influenced the way they look at donors and fundraising. The insights from academia may have helped them to better understand why their donors choose to donate to their organization, and how they can use this information to build better and stronger relationships with those donors. We feel happy to have contributed to this, but also believe the effectiveness of fundraising, especially in relation to donor satisfaction, can always be improved further. We look forward to keep providing academic insights to support fundraisers and other philanthropy professionals in this challenge!
Working with scholars from other disciplines can be a challenge. The people you meet speak the same language, but the words they use sometimes mean different things. It takes time to learn the vocabulary, even though you know the words. Like the song: I’m an alien. I’m a legal alien: I’m an Englishman in New York.
Curiously, as a quantitative empirical sociologist attending academic research conferences in economics or psychology, I often feel like an amateur anthropologist. I observe customs with which I am unfamiliar, and try to blend in, participating in rituals and ceremonial celebrations of heroes unknown.
A common purpose binds us: the curiosity of a phenomenon unexplained, an intriguing puzzle, unsolved. Or the objective to get an article published in a journal that – before recent discoveries – was largely uncharted territory. Yes, there were dragons. But the joy of having slayed Reviewer 2!
Public debates on philanthropy link charitable giving to wealth. In the media we hear a lot about the giving behavior of billionaires – about the giving pledge, the charitable foundations of the wealthy, how the causes they support align with their business interests, and how they relate to government programs. Yes – the billions of tech giants go a long way. Imagine a world without support from foundations created by the wealthy. But we hear a lot less about the everyday philanthropy of people like you and me. The media rarely report on everyday acts of generosity. The force of philanthropy is not only in its focus and mass, but also in its breadth and popularity.
It is one of the common remarks I hear when family, friends and colleagues return from holidays in ‘developing countries’ like Moldova, Myanmar or Morocco: “the people there have nothing, but they are so kind and generous!” The kindness and generosity that we witness as tourists are manifestations of prosociality, the very same spirit that is the ultimate foundation of everyday philanthropy. And also within our own nations, we find that most people give to charity. Why are people in Europe so strongly engaged in philanthropy?
The answer is trust
In Europe we are much more likely to think that most people can be trusted than in other parts of the world. It is this faith in humanity that is crucial for philanthropy. We can see this in a comparison of countries within Europe. The figure combines data from the World Giving Index reports of CAF from 2010-2017 on the proportion of the population giving to charity with data from the Global Trust Research Consortium on generalized social trust. The figure shows that citizens of more trusting countries in Europe are much more likely to give to charities (you can get the data here, and the code is here). The correlation is .52, which is strong.
Egalité et fraternité
One of the reasons why citizens in more trusting countries are more likely to give to charity is that trust is lower in more unequal countries. Combining the data on trust with data from the OECD on income inequality (Gini coefficients) reveals a substantial negative correlation of -.37. The larger the differences in income and wealth in a country become, the lower the level of trust that people have in each other. As the wealth of the rich increases, the poor grow increasingly envious, and the rich feel an increasing urge to protect their wealth. In such a context, conspiracy theories thrive and institutions that should be impartial and fair to all are trusted less. The criticism that wealthy donors face also stems from this foundation: those concerned with equality and fairness fear the elite power of philanthropy. Et voilà: here is the case for why it is in the best interest of foundations to reduce inequality.