Statistical snake oil, again
Or: where is Darrell Huff when you need him? The Chronicle of Higher Education, drawing on the services of Academic Analytics LLC, presents lists of departments and institutions ranked by “productivity”. Here are the 2007 and 2006 rankings for philosophy:
The measure is based on statistics concerning publications, grants, awards and honors, and so forth. These are normalized and weighted to yield the composite scores you see above. It should be clear that even if the scores mean something, the rankings derived from them do not. They're too volatile: only three departments manage to remain in the top ten from 2006 to 2007.
It’s true that in sports the standings can vary just as much from one year to the next. They, however, are based on the unimpeachable won-lost record. A perusal of the puffery for the “Faculty Scholarly Productivity Index™ (FSP Index)” shows that arbitrariness enters into the Index not only in the weighting of its various components but also in the methods used to calculate those components: one book, for example, is given the weight of five articles. Institutions can buy the raw data, but I wonder how many administrators, pressed for funds, will do so, and then spend still more money to have the data analyzed again. Yet that is what you would need to do to know how robust a measure the FSP really is.
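The point about arbitrariness can be made concrete. Here is a minimal sketch, with invented department names and invented counts (not Academic Analytics’ actual data or formula), showing how two equally defensible weighting schemes for a composite index produce different rankings:

```python
# Hypothetical data: (books, articles, grants) per department.
# The numbers are invented purely for illustration.
depts = {
    "A": (4, 10, 2),
    "B": (1, 28, 3),
    "C": (2, 18, 5),
}

def ranking(weights):
    """Rank departments by a weighted sum of their raw counts."""
    score = lambda d: sum(w * x for w, x in zip(weights, depts[d]))
    return sorted(depts, key=score, reverse=True)

# Scheme 1: one book = five articles, one grant = two articles.
print(ranking((5, 1, 2)))  # → ['B', 'C', 'A']

# Scheme 2: one book = three articles, one grant = four articles.
# Just as defensible, yet the top two departments swap places.
print(ranking((3, 1, 4)))  # → ['C', 'B', 'A']
```

Nothing in the data changed between the two calls; only the weights did. A rank order that flips under a small, equally plausible change of weights is telling you about the weights, not about the departments.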
Unfortunately, the numbers will be used to make distinctions they cannot rightly be said to justify. Academic Analytics claims that “more universities than ever are using FSP on their campus”, and I believe them. What I don’t believe is that the FSP is as objective as they claim. Carnegie Mellon, which ought to know better, highlights its no. 5 ranking in 2007; but a year earlier, as you can see, it wasn’t even in the top ten. Michigan State’s index plunged from 2 to below 1.31 in one year. Were they exhausted after their stellar season?
It’s too bad that the Chronicle is lending its prestige to this dubious enterprise. An antidote to the FSP can be found at the International Mathematical Union, which has produced a report on citation statistics. Here are its conclusions:
- Relying on statistics is not more accurate when the statistics are improperly used. Indeed, statistics can mislead when they are misapplied or misunderstood. Much of modern bibliometrics seems to rely on experience and intuition about the interpretation and validity of citation statistics.
- While numbers appear to be "objective", their objectivity can be illusory. The meaning of a citation can be even more subjective than peer review. Because this subjectivity is less obvious for citations, those who use citation data are less likely to understand their limitations.
- The sole reliance on citation data provides at best an incomplete and often shallow understanding of research—an understanding that is valid only when reinforced by other judgments. Numbers are not inherently superior to sound judgments.
The last item, I think, may be irrelevant to the people most likely to be customers of Academic Analytics. I have in mind administrators, and those who hire them: people who think that a university should be run like a business. The point of appealing to the FSP and measures like it is to avoid judgment, substituting for it the authority of numbers.
(For more statistical snake oil, see “How to mislead with statistics”, 7 Dec 2004.)