Debate over the Impact and Meaning of Academic Citations
Debates continue over the validity of journal citation reports and impact factors as they are currently used – to assess the value of scholarship, determine the best candidates for faculty positions, and prioritise researchers for grants and other funding. In the midst of this debate, I was struck by a couple of recent postings on the Scholarly Kitchen blog that focus on what is actually meant by the words ‘citation’ and ‘impact’ in this context (http://bit.ly/1JsEqw7 and http://bit.ly/1GruJ1s).

There are significant differences in the nature and purpose of scholarly citations, and what constitutes impact is at times difficult to define. Certainly, one piece of scholarly research and writing influences another when the first provides ‘the intellectual foundation, the inspiration, or vital information for the argument’ of the second, as the author suggests, but scholarly impact can be much more subtle, even negative, and still constitute a viable intellectual influence. Even the sort of citation the author of the postings questions as valid impact – when ‘the primary source is faked or misdated’ or ‘the quantitative work is off’ – has made an impact, if in no other way than by forcing the scholar citing the work to point out its errors. Often the mistakes of others help to clarify the thinking of the author who catches them, and checking the facts or confirming one’s own results in the process can be a useful exercise as well.

I have no problem, then, with considering a citation a signifier of influence. The essential problem, to my mind, lies in rewarding the author of a cited article for his or her deception or poor scholarship, as the current system does, and that is only one example of the complications associated with citations. Without a means to assess the precise nature of citations before counting them as career credit, it is virtually impossible to distribute that credit appropriately. As the author of the posts asks, ‘if we don’t really know what individual citations mean, why do we think we can draw important meaning from their aggregation?’ As interesting as the information gained from such quantitative methods of tracking citations and impact may be, it is incomplete without detailed analysis of what it actually means. Such methods are also prey to biases and unethical practices, much as qualitative assessment methods are, and they are not comprehensive, with citations to obscure or primary sources often neglected.

The ability to track citations is a wonderful resource, but only if what we do with it is fair and reasonable. It is simply poor scholarship to draw conclusions without fully understanding the data obtained, and it is sad to think that quality scholarship of great potential value to many readers might be deemed insignificant as a result.