By René Bekkers & Pamala Wiepking
In decisions on academic careers, the societal impact of researchers' work is gaining importance, complementing long-standing incentives for academic impact. Common indicators of academic impact are the h-index (the largest number h such that the researcher has h publications cited at least h times each) and the impact factor (IF) of the journals in which the researcher has published.
What the Journal Impact Factor is not
It is widely believed that it is more difficult to get published in journals with a higher IF because they are more attractive and can afford to desk reject a larger proportion of the submissions. However, journals with higher IFs do not necessarily publish research of higher quality. Because scientists in some countries can quickly advance their careers and even receive monetary bonuses by publishing in high IF journals, they also attract a lot of low-quality research. Once in a while, a piece of garbage slips through because the peer review process is not perfect.
Also, publishing an article in a high IF journal does not necessarily imply that the article will be more widely read or receive more citations. The overwhelming majority of articles in high IF journals receive few citations; a small number of highly cited articles determine the IF.
Article impact: citations
The highest-impact paper we have published is a literature review of empirical studies on philanthropy. It was originally written as a background paper for a request for proposals by the John Templeton Foundation (JTF), and later published as three separate, standalone articles: one in Nonprofit & Voluntary Sector Quarterly (NVSQ) and two in Voluntary Sector Review (VSR). As a self-archived working paper, the JTF paper already attracted some attention, but the NVSQ article quickly received large numbers of citations, and continues to do so. The journals in which we published these papers were not the highest IF journals we have published in throughout our careers. In fact, papers we published in higher IF journals have attracted far fewer citations.
From what we have heard from readers, the paper is useful to many people because it provides a map of the landscape of research on philanthropy. Our review pointed them to studies relevant to their specific research questions, studies they would otherwise not have found. Often this impact is invisible, because researchers do not cite our review but instead cite the papers we reviewed. So the impact we have with the paper shows not so much in its number of citations as in its number of readers. There is no way to track whether people actually read it, but the paper has been downloaded thousands of times.
Did we achieve the objectives of the paper?
The number of downloads is a quantifiable, generic indicator of impact. But a high number of downloads is not why we wrote this paper. If you see research as an intervention, its impact should be evaluated relative to intention: did it achieve the intended goals?
We wrote the paper to provide “a reference resource for classical intuitions” (p.925) for researchers who have an idea for a study, but do not know what previous research has found. We hoped that our review would reduce “the lack of awareness of research in distant times and disciplines” (p.945). We wanted to acquaint researchers in different fields with each other’s work. Two goals that we did not explicitly state in the paper because they seemed overly ambitious were to integrate these relatively isolated bodies of research and ultimately to establish a common knowledge base for research on philanthropy.
It is very difficult to determine to what extent we have been successful with this paper. Perhaps in a decade or two it will become clear how useful our contribution has been in establishing a common knowledge base for research on philanthropy. It would also be difficult to quantify whether our review actually integrated different fields. One could count citations to research in other disciplines before and after our review was published, and compare studies that cite our review with those that do not. By our own standards, however, those numbers would still not prove impact.
Yet we do believe we have made a difference. Many academics who were new to the topic of philanthropy have told us that our review helped them find their way in the literature. We also know that our readers are more aware of research in other disciplines than the citations demonstrate: word limits for journal articles lead researchers to omit relevant references and to focus on work published in the same journal and discipline.
Did we change the practice of fundraising?
What about the societal impact? As we wrote in our paper (p. 926), we hoped that our review would be useful not only for an academic audience but also for practitioners. What impact did we have on the practice of fundraising? We could count the number of talks and seminars we were invited to give and actually gave, and the number of attendees at these events. But this does not measure impact, and it is not why we wrote the paper. We hoped that fundraisers would take advantage of the insights from the studies we reviewed to increase fundraising effectiveness (p. 926). We learned from conversations with fundraisers and other philanthropy professionals who read our paper or attended our talks that insights from our literature review have indeed influenced the way they look at donors and fundraising. The insights from academia may have helped them better understand why their donors choose to donate to their organization, and how they can use this information to build stronger relationships with those donors. We are happy to have contributed to this, but we also believe the effectiveness of fundraising, especially in relation to donor satisfaction, can always be improved further. We look forward to continuing to provide academic insights that support fundraisers and other philanthropy professionals in this challenge!