A few months ago, Patrick Dunleavy published a post on the London School of Economics Impact Blog describing “a huge gulf between many STEM scientists… and scholars in other disciplines in how often they cite other people’s research.” After providing some statistics in support of this claim, some of which placed citation rates in the humanities an entire order of magnitude below those in the natural and life sciences, he offered some advice:
Those social science and humanities academics who go through life anxious to prove themselves a ‘source’ (of citations) but not a ‘hub’ (of cites to others) are not practicing the reputation-building skills that they fondly imagine… Their practice is self-harming not just to themselves, but to all who read their works or follow after them as students at any level. Others who perhaps accept such attitudes without practicing them themselves – for example, as departmental colleagues or journal reviewers – are also failing in their scholarly obligations, albeit in a minor way.
Initially I felt chastened by Dunleavy’s article. “It’s a shame,” I thought, “that we in the humanities operate in such backwards ways that our citation rates are an entire order of magnitude lower than the citation rates of researchers in the sciences. We’d really better start citing each other more — being ‘hubs’ instead of ‘sources’!” But after giving some careful thought to what that means, I’ve concluded that this issue deserves more investigation than it received in Dunleavy’s post.
But I want to begin by describing my initial thought process upon reading Dunleavy’s post. I think it’s important to take seriously the kind of informal reasoning that might lead to skepticism in cases like this. Humanists aren’t often trained to make complex mathematical arguments, but that training isn’t always necessary to see when those arguments have problems. We don’t all need more mathematical training. We just need to get more practice subjecting mathematical arguments to sniff tests. These tests often involve paying attention to the order of magnitude of a value. If the number you get from an argument is in the low five digits, and the number you expect has at least six digits, then something’s probably wrong.
Off by Inches or by Feet?
What first bothered me about Dunleavy’s argument was the mismatch between incoming and outgoing citation rates. Although the first chart he displayed shows an entire order of magnitude difference between incoming citation rates in the humanities and the sciences, I had never noticed such a dramatic difference in the number of outgoing citations per article. Being in a self-critical mood, my first instinct was to consider my own work. I thought about how many secondary sources I cited in my most recent publication — just about twenty out of a total of fifty, including primary sources.1 That didn’t seem great — about average for the field at best, and probably even below average. But as I continued to think about it, that also seemed about average for most of the computer science papers I had read. In fact, a lot of those papers seemed to cite between fifteen and twenty other papers. But that was a vague hunch — it called for investigation. So I thought of a computer science paper that I know has been cited many times: the original paper describing Latent Dirichlet Allocation by David Blei, Andrew Ng, and Michael Jordan.2
It only cites seven papers!
Then I started to feel a little better about my own citation practices. At least I was doing better than rockstars like David Blei and Andrew Ng. (And that was back before they were rockstars.) What about people who cited their paper? I did a couple of random checks. One cited twenty-five papers; one cited eighteen. One cited 332 — that threw me for a loop until I realized it was a book-length document. But then it seemed about in line with the secondary bibliographies of most humanities monographs that I’ve seen — higher than average, perhaps, but certainly not by an order of magnitude.
Even based on such a small and unsystematically collected sample, I think it’s reasonable to conclude that humanists and scientists probably aren’t citing wildly different numbers of fellow scholars and researchers. Clearly this is an assertion that demands a much more thorough investigation. But we can expect that if humanists were generating an entire order of magnitude fewer citations, it would be obvious at first glance. And it’s not obvious. Humanists seem to be generating a roughly similar number of outgoing citations per article on average — optimistically, about the same number, and pessimistically, perhaps two thirds or three fifths as many. But not a tenth as many.
So what are the citation practices that we need to change? Dunleavy’s argument was that if we work harder at being ‘hubs,’ we’ll also have more success — potentially an order of magnitude more — as ‘sources.’ In other words, we should include far more secondary citations in our bibliographies. This question about outgoing citations is a pretty good test of that claim, and the claim didn’t do very well.3
That suggests that outgoing citations by humanists are being lost by these statistics, or incoming citations by scientists are being magnified somehow. Or perhaps the data is outright biased. At worst we need to be citing different articles — not ten times as many. Later in his article, Dunleavy suggested that we need to be doing more thorough literature reviews. Could that be causing the problem somehow? Maybe it’s a matter of citing more recent work, or more obscure work. Or perhaps we’re citing people outside our own field too frequently — perhaps we should be keeping citations inside our own circles.4 Or maybe something else is happening.
It’s difficult to tell exactly why there’s such a dramatic difference, but Dunleavy’s article suggested one intriguing explanation — only to reject it. After talking about low incoming citation rates, Dunleavy went on to talk about the h5 scores of journals in various fields. Then he gave us this graph:
Even now, I look at that and have to quash little twinges of insecurity. My field is all the way at the bottom! Panic! But cooler heads prevail. Dunleavy followed the chart with a sliver of analysis before moving towards conclusions, asking “What can or should be done?” Only after asking that question did he address the possibility that publication volume might have something to do with the discrepancy: “The greater volume of STEM science publications is cited as if it explains things — it doesn’t, most individual STEM science papers are not often cited.”
As I read that sentence, my first thought was “that sentence belongs way earlier in the post.” It’s part of the analysis, not a conclusion. And what’s the logic behind it? Dunleavy didn’t explain. That means we have to do some math.
Let’s begin by considering the statistics that Dunleavy discussed: citation rate and h5 index. The h5 index is easier to specify, so I’ll start with that. Google offers this definition: “h5-index is the h-index for articles published in the last 5 complete years. It is the largest number h such that h articles published in 2009-2013 have at least h citations each.” Articles older than five years effectively expire for the purpose of this statistic, and their citations are no longer counted.
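Google’s definition translates directly into code. Here’s a minimal sketch of the computation — my own, not anything from Dunleavy’s post:

```python
def h_index(citation_counts):
    """Return the largest h such that h of the given items
    have at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the top `rank` items all have at least `rank` citations
        else:
            break
    return h

# The h5 index is the same computation restricted to articles
# published in the last five complete years.
print(h_index([10, 8, 5, 4, 3]))  # 4: four articles have at least 4 citations each
```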
Citation rate is a bit harder to work out. Dunleavy didn’t explicitly state what kind of citations the citation rate statistic counts, but if it were counting outgoing citations, then the chart he begins his post with would be hard to take seriously. I am quite confident that there are more than four outgoing citations per publication in all of the fields listed on that chart. So it must count incoming citations. Dunleavy also didn’t specify which citations it counts. For consistency, I’ll take five years as the cutoff for this statistic as well: articles older than five years expire.
Now let’s test Dunleavy’s claims on a simple, artificial example. Say you have two fields, A and B. Field A produces one thousand articles per year; field B produces ten thousand. Dunleavy’s first claim was that given similar citation practices, the increase in volume will not significantly affect the citation statistics he’s talking about.5 And his second was that most of the articles in either field will not be cited.
“Most” is a little vague, so let’s say that in either field, all outgoing citations in a particular year will be evenly distributed among the top ten percent of articles from the previous year. Dunleavy also didn’t remark on the number of journals in each field, so let’s suppose that field A has fifty journals and field B has two hundred. And to keep things simple, let’s say the top articles are distributed evenly across all journals. We’ll also assume that all articles cite just ten articles from the previous year. These are unrealistic assumptions, but they aren’t totally outlandish, and they should at least help us learn some things about what Dunleavy has claimed.
Let’s start with field A. For any given year, there will be one thousand articles published in the field. They will generate ten citations each, for a total of ten thousand citations. Those citations will be evenly distributed across the top ten percent of articles from the previous year — one hundred articles receiving one hundred citations each. Those articles will be evenly distributed across all fifty journals — two each. So for that year, there will be just two articles per journal with at least two citations; they will both count towards the journal’s h5 score. Over five years, there will be four such pairs of articles (because the articles from the most recent year won’t be cited until next year). Written out numerically, there are one hundred citations per cited article, and there are eight cited articles per journal over the window. So that’s a total of eight articles published per journal that received at least eight citations in the last five years in field A, giving an h5 score of eight for every journal in the field.
The calculation for the citation rate is slightly different. Every year, a set of ten thousand citations are generated and distributed evenly among last year’s journals, and four sets will be produced that count for a given five-year span. That’s a total of forty thousand citations, divided evenly among fifty journals. Those journals together produce five thousand articles over five years, and so the citation rate is eight.
On to field B. For any given year, there will be ten thousand articles in this field, each citing ten papers, for a total of one hundred thousand citations. Those citations will be divided among the previous year’s top ten percent of papers — one thousand papers this time, each receiving one hundred citations. This time, those thousand articles are divided among two hundred journals. That’s five articles per journal, and four sets of five per five-year period. Numerically, there are one hundred citations per cited article, and twenty cited articles per journal. That gives us twenty articles published per journal with at least twenty citations each, for an h5 score of twenty for every journal in the field.
And now for the citation rate: every year, one hundred thousand citations are generated, with four sets produced over five years. That’s a total of four hundred thousand citations, divided evenly among two hundred journals. Those journals will produce fifty thousand articles in total, and so the citation rate is again eight.
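The arithmetic for both fields can be replayed in a few lines of code. This is just the worked example above with the field parameters as inputs — nothing more sophisticated than that:

```python
def toy_field_stats(articles_per_year, journals, cites_per_article=10,
                    top_fraction=0.10, years_counted=4):
    """Replay the worked example: every article cites `cites_per_article`
    papers from the previous year, spread evenly over the top
    `top_fraction` of that year's articles, which are in turn spread
    evenly across journals. Four yearly cohorts count toward the
    five-year window (the newest year hasn't been cited yet)."""
    total_citations_per_year = articles_per_year * cites_per_article
    cited_articles_per_year = int(articles_per_year * top_fraction)
    citations_per_cited_article = total_citations_per_year // cited_articles_per_year
    cited_per_journal = cited_articles_per_year * years_counted // journals
    # Every cited article has the same citation count, so the h5 score
    # is just the smaller of the two quantities.
    h5 = min(cited_per_journal, citations_per_cited_article)
    citation_rate = (total_citations_per_year * years_counted) / (articles_per_year * 5)
    return h5, citation_rate

print(toy_field_stats(1_000, 50))    # field A: (8, 8.0)
print(toy_field_stats(10_000, 200))  # field B: (20, 8.0)
```

Same citation rate, wildly different h5 scores — exactly the pattern in the paragraphs above.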
So publication volume has indeed affected the h5 statistic, though perhaps in a slightly different way than Dunleavy was talking about. The change in the number of articles published per year had no effect. But the change in the number of articles published per journal had a dramatic effect. Had the number of journals in field B also gone up by a full order of magnitude, to five hundred, there would have been no difference in either statistic; had the number of journals in field B only doubled to one hundred, the difference in their h5 statistics would have been even more noticeable. This might seem a bit like cheating: I didn’t scale all the values equally. But that’s arguably more realistic. A larger field will support — and may even require — larger journals that publish more frequently.
Now consider the fact that whereas Nature publishes weekly issues that each contain between ten and twenty articles and “letters” (with full bibliographies), even a very large, respected humanities journal like PMLA might publish only four or five issues a year, each containing between ten and twenty articles and other papers with full bibliographies. That’s roughly five hundred articles per year from Nature, compared to roughly fifty per year from PMLA. An order of magnitude difference.
Once you’ve worked through the mathematics, it’s not surprising. Journals that publish more articles will naturally capture more citations, all else being equal. And it’s a pattern that you can see in real data. Consider SCIMAGO’s list of top journals. The correlation between the “Total Docs” statistic and the “H index” statistic is immediately noticeable. Try sorting the output by “H index” — the first journal with fewer than five hundred publications over three years is ranked fifty-ninth. Sixty-three of the first hundred have more than a thousand citable documents over three years, and many have more than three thousand. Most humanities journals have fewer than two hundred. In total, the SCIMAGO database contains more than a thousand journals with a thousand citable documents over three years. None of them are dedicated to the humanities.6
At one point, Dunleavy wrote “the gulf charted here isn’t the product of just a few science super-journals.” What about a thousand science super-journals?
Simulating Citation Networks
But let’s assume that’s all just a coincidence. It might not hold up to further scrutiny. And recall that the assumptions I made for the simple calculations above are highly artificial. What would happen if we used a more realistic set of assumptions? I decided to try creating a citation simulator to see. Rather than trying to work out some kind of probabilistic closed-form h5 equation, I wrote a script that simulates thirty years of publication, displaying h5 values and citation rates for each year. I found that its behavior was unpredictable, and sensitive in complex ways to various inputs. But the results also seemed reasonable — they looked like the kinds of statistics one sees browsing through Google Scholar.
It still makes simplifying assumptions that are not realistic, but it does a much better job imitating the particular kind of rich-get-richer power law behavior of citation networks. There are no arbitrary values determining which articles will be cited and which will not be, but articles that already have citations will be more likely to receive additional citations. Here’s an enumeration of the assumptions the simulator makes, and the values it allows you to tune:
- A set number of journals publish articles in a given field. The number is tunable.
- Each journal publishes a set number of issues per year. The number is tunable.
- Each journal publishes a set number of articles per issue. The number is tunable.
- Each article cites a set number of other articles. The number is tunable.
- Citations for each article are chosen randomly, but with a bias towards articles that have already received citations. The probability that a given article will receive a new citation is proportional to the number of citations it has already received, plus one. The code provides a tunable skew parameter that strengthens or weakens the bias towards oft-cited articles.7 Articles become available for citation in the issue cycle after they are published.
- Between each issue cycle, some articles are forgotten or randomly superseded by others, and expire for the purpose of citation. The probability that an article will expire in a given cycle is the same for all articles, and is tunable.
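As a rough illustration of the citation-drawing rule in the fourth assumption, the core step might look something like this. This is my own sketch, not the simulator’s actual code, and it omits the skew parameter described in note 7:

```python
import random

def draw_citations(citation_counts, n_cites, rng=random):
    """Choose n_cites distinct articles to cite. Each article's chance
    of being drawn is proportional to its current citation count plus
    one, so already-cited articles tend to attract further citations
    (a simple rich-get-richer rule)."""
    n_cites = min(n_cites, len(citation_counts))
    weights = [c + 1 for c in citation_counts]
    chosen = set()
    while len(chosen) < n_cites:
        pick = rng.choices(range(len(citation_counts)), weights=weights, k=1)[0]
        chosen.add(pick)
    return chosen
```

The “plus one” matters: it gives never-cited articles a nonzero chance of entering the network, which is what keeps the simulation from collapsing onto its earliest articles.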
To the degree that these parameters correspond to actual scholarly practices, a number of them are likely to vary widely between disciplines. For example, the speed with which articles expire in the humanities will probably be lower, and so older articles will be cited more often. And the number of issues published per year will often be lower. As it happens, those are two values that the h5 index is very sensitive to. It’s often less sensitive to the number of citations per article. For example, given the simulator’s initial default settings, if you multiply the number of issues per year by ten, the top journal’s h5 index increases almost threefold, for an average increase of about four percent per additional issue. But given those same initial defaults, if you multiply the number of outgoing citations per article by ten, the top journal’s h5 index changes by just thirty-five percent — an average increase of about four tenths of a percent per additional citation.8 A field that wanted to double its h5 numbers under these circumstances could publish twenty or twenty-five more issues of each journal per year — or cite two hundred more sources per article on average.
The sensitivity of the index to the number of outgoing citations depends partially on the bias parameter; when the bias towards famous articles is lower, increasing the number of outgoing citations has a greater effect. But the bias has to be quite low — distributing citations almost evenly among articles that haven’t been forgotten or superseded — before changes in outgoing citation rate are as significant as changes in the number of issues per journal. This pattern also makes sense in light of the calculations above. The citations were far too concentrated on the top articles; had they been spread out among other articles, the resulting h5 scores would have been higher. The bias and decay values also influence the relationship between outgoing citations and the field-wide citation rate; for some values, the field-wide citation rate can be as low as five percent of the outgoing citation rate, because so many of the outgoing citations are going to older articles that have expired for the purpose of the calculation.
There are a number of phenomena the simulator does not try to model at all. For example, it assumes that there is no particular bias in favor of one journal over another. Arguably even a mild bias could skew the results dramatically. In its current form, the simulator tends to produce fields that balance citations relatively evenly across all journals. A more realistic simulation might distribute the majority of citations over the top thirty or forty percent of journals; this would probably drive those journals’ h5 indices even higher.
But my goal is not to produce a perfectly realistic simulator. My goal is to show that a simulator that approaches even a moderate level of realism produces complex, unpredictable, nonlinear relationships between many different variables. Suppose we assume for the moment that the numbers that Google Scholar produces for humanities journals are as reliable as the numbers it produces for the sciences.9 And suppose we assume that we really should want the h5 indices of our journals to go up. We can’t expect to get a straightforward linear response by citing more articles and crossing our fingers. Given some very reasonable assumptions about publication conventions in the humanities, there’s a good chance that citing more articles will have only a small effect. The effect will certainly not be large enough to address the score gap between the humanities and the sciences. Other decisions about citation will matter more: which articles we cite, how recently they were published, which journals they were published in, and the number of articles those journals publish.
The assumptions that lead to those conclusions are not based on any evidence. This simulator can’t tell us anything about the actual state of the humanities. Perhaps a field-wide increase in outgoing citation rates would dramatically boost incoming citation rates and h5 scores. We can’t be certain without more careful investigation.
However, that means being doubly skeptical of hasty conclusions that reinforce popular stereotypes about the humanities and the sciences. I was troubled at times while reading Dunleavy’s post — especially when he implied that fields with lower citation rates are more likely to harbor scholars who are “ignoring other views, perspectives and contra-indications from the evidence.” The humanists I know make special effort to do just the opposite, because they know that the kind of research we do is often more vulnerable to ideological bias than research in the sciences. And we can’t shift the burden of objectivity onto our methods; we can’t pretend to be passive spectators, as some scientists might. To do good intellectual work, we have to confront our bias directly, paying careful attention to conflicting evidence from multiple perspectives. That’s challenging, certainly, but I’m not at all convinced that we draw our conclusions in more biased ways than scientists do.
It also troubled me when Dunleavy cited an op-ed by Mark Bauerlein suggesting that literary scholars should give up doing research altogether. Bauerlein is building his reputation as an ivory tower provocateur, and some have used even stronger language to describe his recent output — words like “trolling” and “clickbait” come to mind. The fact that he and Dunleavy might agree about this doesn’t exactly give me confidence that Dunleavy’s perspective is unbiased.
Despite those issues, I remain sympathetic to his call for citation reform. I would not have written this post if his post hadn’t called for a thoughtful response. His suggestion of adopting systematic review deserves serious consideration; it might even address some of the factors that could be leading to the apparent dearth of incoming citations in the humanities, because it concerns not only the number of outgoing citations, but also their distribution over time and across the field. Although the literary scholars I know conduct secondary research in thorough and systematic ways, they each do it a little differently. It would be helpful to articulate a clearer set of field-wide standards for secondary research and citation practices.
But if we choose to do that, we shouldn’t worry about increasing the h5 statistics of our journals. We shouldn’t worry about the impact our work has within some arbitrary time frame. We should worry about creating better literary scholarship.
- “Common Knowledge: Epistemology and the Beginnings of Copyright Law,” forthcoming in PMLA. ↩
- Google Scholar reports that it has been cited exactly 11,111 times as of this writing. ↩
- Unless, that is, it turns out that by citing secondary sources fifty percent more, we could increase our citation rate by four or five times. That seems unlikely. ↩
- I cited several historians of philosophy in my paper — those citations were “lost” for my field. When this occurred to me I thought “Oops! Well, c’est la vie.” Never mind that this directly contradicts Dunleavy’s advice to avoid “discipline-siloed” citation practices. ↩
- To be perfectly explicit, I am interpreting Dunleavy’s claim as entailing the contrapositive of the following premise: if higher publication volume significantly increases h5 statistics, then higher publication volume explains at least part of the gap between the humanities and the sciences. ↩
- The size of these journals is surely related to the publication incentive structures at work in the sciences. And at least one Nobel-winning scientist, Randy Schekman, has argued that scientific incentive structures are broken: “Mine is a professional world that achieves great things for humanity. But it is disfigured by inappropriate incentives.” ↩
- The citation sampler selects papers using a bin-based sampling process that’s fast and works intuitively, but that has zero theoretical justification. The papers are placed in an array of bins; papers with more citations appear towards the beginning of the array, and get more bins. Then a random number between zero and one is chosen and multiplied by the number of bins. That number is used as an index into the array of bins. The bias parameter is applied in one of two ways. If it’s greater than one, then the random number is raised to the power of the bias parameter before being multiplied by the number of bins. So if the bias parameter is two, then the square of the number is used. This pushes the values downwards — recall that the square of one half is one quarter — towards the most often cited papers. If the bias parameter is less than one, then the number of bins allocated to each paper is raised to the power of the bias parameter. So if the bias parameter is one half, then a paper that already has sixteen citations gets only four bins. This strategy produces a smooth transition between the two kinds of bias, and produces fewer bins than citations when possible, but never more. ↩
- The script is set with the following defaults: five hundred journals, five issues per year, ten articles per issue, and ten citations per article. The bias parameter is set to one (a standard rich-get-richer bias), and the decay parameter is set to seventy-five percent. ↩
- There are strong reasons to doubt this. Google Scholar has some fairly specific requirements for inclusion. Do as many humanities journals as science journals worry about meeting those requirements? Almost certainly not. And Google’s coverage for journals in my field looks incredibly dodgy. I don’t blame Google for that — but it certainly should have some bearing on the kind of reform we aim for. Let’s work on getting our journals properly indexed before we start overhauling our entire field. ↩
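The bin-based sampler described in note 7 might look something like the following. This is a reconstruction from the note’s prose, not the actual script, and the exact bin counts (particularly the “plus one” for never-cited papers) are my guess:

```python
import random

def sample_paper(citation_counts, bias=1.0, rng=random):
    """Bin-based sampler reconstructed from note 7. Papers are laid out
    most-cited first, each getting a run of bins; a random index into
    the bin array picks the paper to cite."""
    order = sorted(range(len(citation_counts)),
                   key=lambda i: citation_counts[i], reverse=True)
    bins = []
    for i in order:
        if bias < 1:
            # Shrink heavily cited papers' bin runs: 16 citations with
            # bias 0.5 yields only 4 bins (but always at least one).
            n = max(1, int(citation_counts[i] ** bias))
        else:
            n = citation_counts[i] + 1  # rich-get-richer default
        bins.extend([i] * n)
    r = rng.random()
    if bias > 1:
        r = r ** bias  # push the index toward the most-cited papers
    return bins[int(r * len(bins))]
```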