James Phillips and John Yoo have just published a thoughtful analysis critiquing Brian Leiter’s approach to ranking faculty relevance. They suggest that what we should be looking at is all-stars, not superstars. If you measure a school based on its all-star line-up rather than its superstars, the results are dramatically different. Here’s how they put it:
Faculty can be thought of in two ways—all-stars and super-stars. All-stars are one of the best in their area, and a well-rounded faculty, like a well-rounded baseball team, has as many all-stars in as many positions as possible. Just like baseball all-stars, professors need to be evaluated against their peers in their area (or position), and not against professors in other areas (to compare the home run totals of a second baseman with a first baseman would not be fair, as the latter are expected to hit more home runs while the former are expected to have a higher batting average and steal more bases). Super-stars are the elite, beyond just all-star status, a Roy Halladay for the Philadelphia Phillies or Tom Brady for the New England Patriots. Like a baseball team, they may be bunched in just one or two positions—often the hottest or most attractive, such as constitutional law or law and economics. There is probably a higher degree of correlation between winning and the number of all-stars than the number of super-stars, though both are nice to have….

This study argues that the all-star ranking is a more solid method of ranking faculties than the super-star method, average citation counts (either Leiter’s or this paper’s version), or U.S. News’s academic ranking based on peer perception because it measures faculties more broadly, has less bias regarding attributes such as faculty age or size (the Leiter method), takes into account peer-reviewed scholarship, and is objective rather than subjective (U.S. News).
Analyzing the top sixteen law schools, Phillips and Yoo have devised a new and interesting approach that differs from the Leiter methodology in two important respects. First, they use a simple citations-per-professor-per-year average, calculated by adding up all of the faculty’s citations and dividing by the faculty’s total years of experience. This approach, they argue, “diminishes bias in favor of longevity and prolificacy, bias against immediacy, the disregarding of citation rate half-lives, and ignoring interdisciplinary impacts.”
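To make that metric concrete, here is a minimal sketch of the calculation as described, using an invented three-person faculty and taking “years of experience” at face value (Phillips and Yoo’s exact counting rules may differ):

```python
# Minimal sketch of the citations-per-professor-per-year average as
# described above: total faculty citations divided by total faculty
# years of experience. The roster below is hypothetical.

faculty = [
    # (name, total citations, years of experience)
    ("Prof. A", 1200, 20),
    ("Prof. B", 300, 5),
    ("Prof. C", 90, 2),
]

total_citations = sum(cites for _, cites, _ in faculty)
total_years = sum(years for _, _, years in faculty)

score = total_citations / total_years
print(f"Citations per professor per year: {score:.1f}")  # 1590 / 27 = 58.9
```

An alternative reading of the same description would average each professor’s individual rate rather than pooling totals; the two can diverge when experience levels on a faculty vary widely.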
Second, they include citation counts from non-law journals using the Web of Science, which includes the Science Citation Index Expanded, the Social Sciences Citation Index, and the Arts & Humanities Citation Index. They argue that “as the legal academy has been evolving for some time regarding the educational pedigree of professors (more JD/PhDs) and the focus of its scholarship (more interdisciplinary work), citation studies need to be modernized to reflect this trend.”
So what are the results under the new methodology? Based on the Phillips and Yoo study, here are the best law schools for international law and comparative law:
Here are the international law and comparative law all-star faculty members from the top sixteen law schools:
UPDATE: Brian Leiter responds to Phillips and Yoo here. Here’s the crux of his response:
The two most interesting things they do are consult citations in the “Web of Science” database (to pick up citations for interdisciplinary scholars–this database includes social science and humanities journals) and calculate a citations-per-year score for individual faculty. A couple of caveats: (1) they look at only the top 16 schools according to the U.S. News reputation data, so not all law schools, and not even a few dozen law schools; and (2) they make some contentious–bordering in some cases on absurd–choices about what “area” to count a faculty member for. (This is a dilemma, of course, for those who work in multiple areas, but my solution in the past was to try to gauge whether three-quarters of the citations to the faculty member’s work were in the primary area in question, and then to also include a list of highly cited scholars who did not work exclusively in that area.) Many of those decisions affect the ranking of schools by “area.” The limitation to the top 16 schools by reputation in U.S. News also would affect almost all these lists. See also the comments here.
I liked their discussion of “all stars” versus “super stars,” but it was a clear error to treat the top fifty faculty by citations per year as “super stars”–some are, most aren’t. Citation measures are skewed, first off, toward certain areas, like constitutional law. More importantly, “super stars” should be easily appointable at any top law school, and maybe a third of the folks on the top fifty list are. Some aren’t appointable at any peer school. And the citations-per-year measure has the bizarre consequence that, e.g., a Business School professor at Duke comes in at #7 (Wesley Cohen, whom I suspect most law professors have never heard of), and very junior faculty who have co-authored with actual “super stars” show up in the top 50.
A couple of readers asked whether I thought, per the title of the Phillips & Yoo piece, that their citation study method was “better.” I guess I think it’s neither better nor worse, just different, but having different metrics is good, as long as they’re basically sensible, and this one certainly is. On the plus side, it’s interesting to see how adding the Web of Science database affects things, and also how citations per year affects results. On the negative side, a lot of “impact” that will be picked up in the Web of Science database may be of dubious relevance to law and legal scholarship. And the citations-per-year measure has the odd result of vaulting very junior faculty with just a year or two in teaching into elevated positions just because they may have co-authored a piece with a senior scholar that then got a few dozen citations. No metric is perfect (what would that even mean?), but this one certainly adds interesting information to the mix.
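The junior-faculty distortion Leiter points to is easy to see with a quick back-of-the-envelope comparison (the numbers below are invented purely for illustration):

```python
# Hypothetical illustration of the junior-faculty distortion: a
# per-year rate can rank a two-year scholar with one well-cited
# co-authored piece above a long-tenured, heavily cited scholar.

senior_rate = 600 / 30  # 600 citations over 30 years -> 20.0 per year
junior_rate = 60 / 2    # 60 citations over 2 years   -> 30.0 per year

print(junior_rate > senior_rate)  # True: the junior scholar ranks higher
```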