07 Sep New Study Analyzes the Best Law Schools and Top Faculty for Comparative and International Law
James Phillips and John Yoo have just published a thoughtful analysis critiquing Brian Leiter’s approach to ranking faculty relevance. They suggest that we should be looking at all-stars, not superstars. If you measure a school based on its all-star line-up rather than its superstars, the results are dramatically different. Here’s how they put it:
Faculty can be thought of in two ways—all-stars and super-stars. All-stars are among the best in their area, and a well-rounded faculty, like a well-rounded baseball team, has as many all-stars in as many positions as possible. Just like baseball all-stars, professors need to be evaluated against their peers in their area (or position), and not against professors in other areas (to compare the home run totals of a second baseman with those of a first baseman would not be fair, as the latter are expected to hit more home runs while the former are expected to have a higher batting average and steal more bases). Super-stars are the elite, beyond just all-star status, a Roy Halladay for the Philadelphia Phillies or a Tom Brady for the New England Patriots. Like a baseball team, they may be bunched in just one or two positions—often the hottest or most attractive, such as constitutional law or law and economics. There is probably a higher degree of correlation between winning and the number of all-stars than the number of super-stars, though both are nice to have…. This study argues that the all-star ranking is a more solid method of ranking faculties than the super-star method, average citation counts (either Leiter’s or this paper’s version), or the U.S. News academic ranking based on peer perception, because it measures faculties more broadly, has less bias regarding attributes such as faculty age or size (the Leiter method), takes into account peer-reviewed scholarship, and is objective rather than subjective (U.S. News).
Analyzing the top sixteen law schools, Phillips and Yoo have devised a new and interesting approach that differs from the Leiter methodology in two important respects. First, they use a simple citations-per-professor-per-year average, calculated by adding up all of the citations for the faculty and dividing by the number of years of experience for the faculty. This approach, they argue, “diminishes bias in favor of longevity and prolificacy, bias against immediacy, the disregarding of citation rate half-lives, and ignoring interdisciplinary impacts.”
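To make the arithmetic concrete, here is a minimal sketch of one plausible reading of that description: sum the faculty’s total citations and divide by the faculty’s combined years of teaching. This is not the authors’ code, and all names and figures below are hypothetical.

```python
# Sketch of a citations-per-professor-per-year average as described above:
# total faculty citations divided by the faculty's combined years of
# teaching. All data are illustrative, not figures from the study.

faculty = [
    # (name, total_citations, years_teaching) -- hypothetical values only
    ("Professor A", 1200, 30),
    ("Professor B", 300, 5),
    ("Professor C", 450, 10),
]

total_citations = sum(cites for _, cites, _ in faculty)
total_years = sum(years for _, _, years in faculty)

# School-level score: average citations per faculty-year
score = total_citations / total_years  # 1950 / 45 ~= 43.3
print(f"Citations per professor per year: {score:.1f}")
```

Note the design consequence: because the denominator is years of teaching rather than headcount, a productive scholar five years into a career can lift the average as much as a long-serving one, which is exactly the longevity bias the authors say they are trying to remove.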
Second, they include citation counts from non-law journals using the Web of Science, which includes the Science Citation Index Expanded, the Social Sciences Citation Index, and the Arts & Humanities Citation Index. They argue that “as the legal academy has been evolving for some time regarding the educational pedigree of professors (more JD/PhDs) and the focus of its scholarship (more interdisciplinary work), citation studies need to be modernized to reflect this trend.”
So what are the results under the new methodology? Based on the Phillips and Yoo study, here are the best law schools for international law and comparative law:
Here are the international law and comparative law all-star faculty members from the top sixteen law schools:
UPDATE: Brian Leiter responds to Phillips and Yoo here. Here’s the crux of his response:
The two most interesting things they do are consult citations in the “Web of Science” database (to pick up citations for interdisciplinary scholars–this database includes social science and humanities journals) and calculate a citations-per-year score for individual faculty. A couple of caveats: (1) they look at only the top 16 schools according to the U.S. News reputation data, so not all law schools, and not even a few dozen law schools; and (2) they make some contentious–bordering in some cases on absurd–choices about what “area” to count a faculty member for. (This is a dilemma, of course, for those who work in multiple areas, but my solution in the past was to try to gauge whether three-quarters of the citations to the faculty member’s work were in the primary area in question, and then to also include a list of highly cited scholars who did not work exclusively in that area.) Many of those decisions affect the ranking of schools by “area.” The limitation to the top 16 schools by reputation in U.S. News also would affect almost all these lists. See also the comments here.
I liked their discussion of “all stars” versus “super stars,” but it was a clear error to treat the top fifty faculty by citations per year as “super stars”–some are, most aren’t. Citation measures are skewed, first off, toward certain areas, like constitutional law. More importantly, “super stars” should be easily appointable at any top law school, and maybe a third of the folks on the top fifty list are. Some aren’t appointable at any peer school. And the citations-per-year measure has the bizarre consequence that, e.g., a Business School professor at Duke comes in at #7 (Wesley Cohen, whom I suspect most law professors have never heard of), and very junior faculty who have co-authored with actual “super stars” show up in the top 50.
(…)
A couple of readers asked whether I thought, per the title of the Phillips & Yoo piece, that their citation study method was “better.” I guess I think it’s neither better nor worse, just different, but having different metrics is good, as long as they’re basically sensible, and this one certainly is. On the plus side, it’s interesting to see how adding the Web of Science database affects things, and also how citations per year affects results. On the negative side, a lot of “impact” that will be picked up in the Web of Science database may be of dubious relevance to the impact on law and legal scholarship. And the citations-per-year measure has the odd result of pushing very junior faculty with just a year or two in teaching into elevated positions just because they may have co-authored a piece with a senior scholar which then got a few dozen citations. No metric is perfect (what would that even mean?), but this one certainly adds interesting information to the mix.
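Leiter’s point about junior faculty is easy to see with toy numbers. A sketch, using purely hypothetical figures:

```python
# Hypothetical illustration of the artifact Leiter describes: a per-year
# rate can rank a brand-new scholar above a long-established one.

def citations_per_year(total_citations: int, years_teaching: int) -> float:
    """Total citations divided by years in teaching."""
    return total_citations / years_teaching

senior = citations_per_year(900, 30)  # 30.0 per year over a long career
junior = citations_per_year(80, 2)    # 40.0 per year, e.g. one co-authored hit

print(junior > senior)  # True: the junior scholar outranks the senior one
```

Two years of teaching and a single well-cited co-authored piece are enough, on these assumed numbers, to outrank three decades of sustained scholarship.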
The authors’ methodology seems generally sound, except for one obvious flaw: in international law, unlike in Hollywood, all publicity is not good publicity. I seriously doubt that John Yoo’s high citation count reflects a widespread admiration for his work on international law. Indeed, we all know what it really reflects.
Kevin,
I don’t think they make any claims to be measuring admiration. They are measuring relevance based on a particular methodology and detailed analysis of citations.
Of course those on the right and left will not find the work of various scholars on this list admirable. I think that is a given.
Best,
Roger
Roger,
I was not trying to make a political point; I was critiquing the methodology. Roy Halladay and Tom Brady are not All Stars because they get discussed a great deal in the media; they are All Stars because their skills help their teams win. John Yoo’s high citation count does not improve the reputation of Berkeley; it harms it, because the overwhelming majority of those citations are negative.
Again, this is not intended to be about politics. I would offer a related criticism of David Scheffer’s inclusion in the list of All Stars. I have nothing but admiration for Scheffer’s tireless efforts to promote international criminal justice, and those of us who support the ICC are in his debt. But I don’t think he is widely viewed as a significant international-law scholar, and I imagine most of the citations to his work concern his efforts as a diplomat, not his more academic scholarship. So I don’t think his presence at Northwestern greatly improves its reputation for international law.
Kevin makes a good point that an important inquiry would be WHY someone is cited, although that would take much more effort. For example, someone might obtain a lot of “but see” cites, or someone might be identified as having set forth a radical revisionist claim that is ahistorical and in error (as some who may share some of John Yoo’s general philosophic preferences have written, without the “radical revisionist” label). I bet that the name Hitler is used more often than Truman, Eisenhower, Jimmy Carter, and Ronald Reagan. Additionally, who decided what the “top 16” schools were, using what criteria, and why only 16? Might a well-recognized scholar in international law teach at a law school that is not yet ranked in the top 16? And why the bias against “longevity,” whatever that means? Is that partly age discrimination? Does it leave out John Jackson and Michael Reisman, who, for example, appear in the top ten international law scholars in the U.S. in the last two Leiter studies? And what about Lou Henkin, who has more citations in Westlaw/JLR (e.g., in U.S. and a few foreign law journals) than John Yoo even after his death in October 2010? …
Jordan, I’m not sure of the answers to your questions, but here are a few thoughts: 1. Why someone is cited matters, but whether a particular scholar enhances or detracts from a particular law school will be subject to debate. Does a prominent member of the revisionist school of thought add to or subtract from the reputation of the law school? Some would say yes, others would say no. Or, to take an example from another field, does a prominent member of the Critical Legal Studies world add to or subtract from a law school’s reputation? Again, some would say yes and others would say no. In many cases (not always) the answer will depend on your ideological starting point. 2. I agree that the study should be expanded beyond the U.S. News top sixteen law schools, but these studies are incredibly data-intensive, and I would surmise they limited it to these schools because a more comprehensive study would be overwhelming. But just as Greg Sisk supplemented Brian Leiter by looking at lower-ranked schools, so too should someone supplement this approach by looking at lower-ranked schools. 3. As for longevity, check out the Phillips & Yoo article discussing…
From the perspective of someone working outside of the US, this project seems absurd. I’ve always found Leiter’s non-academic work ridiculous, and this just perpetuates the problem. Yes, ranking law schools and academics is fun, but as Kevin’s comments have demonstrated, it’s meaningless. It’s like listing the 50 best Christmas songs: we can try to apply objective criteria (volume of sales, radio plays, etc.), but we’ll still all have reasons for picking one over another.
I’m aware that ranking is an ever-present part of academic life now, but this makes it no less pointless and absurd.
Thank God we now know where “Matt” stands on all this. Really helpful to get his musings, that’s for sure.
Whatever the other merits of this methodology — and I think it has some merit — the focus on just the top 16 schools is a severe limitation. While the authors acknowledge this, they nonetheless seek to identify top individuals in each field, even though this will overlook many folks. In my own area of specialty, environmental law, it results in the exclusion of people like Carol Rose (now at Arizona) and J.B. Ruhl (Vanderbilt). This is a particular problem because one value of the methodology is that it is less “biased” against younger faculty who are having a major impact in their fields, as many such younger faculty will (still) be at lower-ranked schools, and it is their impactful scholarship that may eventually result in their being at another school.
I think Jonathan’s point is an excellent one. I like that the authors rank by citations per year, as opposed to number of citations simpliciter, in order to avoid biasing the results toward established scholars. But Jonathan is absolutely right that many respected and oft-cited young international law scholars will not be at a top-16 school — think Robert Sloane (BU), Tony Colangelo (SMU), or Harlan Cohen (Georgia).
My main point re: Jackson, Reisman, and Henkin (considering him as if he had been alive when the study was made) is that the methodology excludes people because of their years of teaching, and that the results, which allegedly identify the “top professors” in an area (when “top younger professors” is the label that should be attached), seem quite strange. For example, they exclude Jackson and Reisman even though both appear in the last two Leiter studies of top professors in international law.
Professor Heller,
Do you have any proof for this claim re: Yoo: “the overwhelming majority of [his] citations are negative”? I have seen that argument made elsewhere, but I have yet to see anybody back it up. Can you back it up? I am afraid it is an information cascade: one person says something, then it keeps getting repeated, then it is accepted as the truth.
This is not my field, but something in which I do have a growing professional interest. Are there any similar schemes or suggested criteria for ranking/evaluating teaching ability?
Further to Ian’s question, I am wondering if one or more schemes or criteria have emerged that correlate excellence in teaching with research strengths among faculty and/or students at particular schools?
Jordan,
Good point. It looks like the Phillips and Yoo methodology measures who is making an immediate impact, whereas the Leiter approach takes a longer view. It is odd to say that folks like Reisman and Jackson are not among the top international law scholars currently teaching.
Roger
Ian and Leslie,
There are no objective criteria or rankings for the measurement of teaching, which is indeed disappointing. On the other hand, within law schools there usually are absolute and relative measurements of teaching quality through student evaluations, and this information is relevant for tenure and promotion, as well as for lateral hiring. Students face a bit of an information gap in choosing professors, but most law schools are small enough that word-of-mouth reputation helps them decide which professors to take and which to avoid.
Roger Alford
p.s. Harold Koh was on Leiter’s last list! And others as well.
Roger,
Thank you for the reply. Do you mind if I email you directly at nd.edu to ask a couple of questions that I should not pursue in the public domain?
Ian