Final Thoughts on Journal Submissions

by James Tierney

At the end of my last post I alluded to preemption as an obstacle to receiving an offer. Many times authors will be able to determine for themselves whether their article is novel or whether someone else had the same idea in 1986. Other times information about possibly preempting articles is more asymmetrical. My impression is that there are a couple of hot topics every year that grab the attention of many authors, usually based on what issues are being litigated or are in the news. For more than a year, topics like “fixing the mortgage crisis,” “immigration preemption,” “health care,” “international tax,” and “targeted killing” have all received a lot of attention from scholars. This is not to suggest that there is nothing left to be said about these topics, for new facts on the ground and new arguments can reframe the terms of the debate in ways that move scholarship forward. I look forward to seeing how work on targeted killing will continue to develop in the next year or so, for example.

But trendy topics like these carry strong first-mover advantages given the long time-horizon of the publication schedule. My sense is that an article completed today is not likely to be published sooner than January 2012, and articles published today were likely completed not much later than September 2010. There may be several articles about the same topic in the pipeline, none known to any of the other articles’ authors. To take an example purely at random—apologies to readers who are working on this topic—the IL community is likely to see a large number of pieces on direct participation in hostilities come out sometime next spring. Authors don’t know what other authors are writing about, leading to an information gap.

This gap is asymmetrical, for on the other side of ExpressO, editors may see a glut of 10, 20, or 60 (gasp!) articles on a given topic. Even if editors cannot conclude formally that an article is preempted, they might think it carries a higher risk of preemption given the uncertainty of what other articles are in the pipeline. So too might they experience burnout on these topics, akin to the intertemporal preference-shifting effect I described in my first post. And even though alternative submissions arriving throughout the year might be higher quality, an article reviewed in April may preempt a better-executed article reviewed eight months later. Yet as these articles start to get published and the time horizon continues past a single volume’s publication cycle, authors start to see the glut of articles and may determine that it’s no longer profitable to write on that once-trendy topic, given fewer opportunities to carve out a novel argument for themselves. This may explain why, as a commenter noted after my first post, we haven’t seen many articles about torture in the past few years; early movers in 2003 to 2006 have made it difficult to come up with something new to talk about.

I’ll close my guest stint with a few brief thoughts, although I can try to respond to further questions or comments below.
• Five years ago my journal did not accept electronic submissions, but now all but a handful are electronic. Electronic submissions mean I can read your article on a smartphone in the gym, or on the train, without carrying around a stack of paper. It means I can distribute your article to editors who have summer jobs in far-flung cities. It means I am not killing trees. I, for one, welcome our new Internet overlords.
• To briefly wade into the debate about peer review: if professors really thought student-edited journals were not institutionally competent to review scholarship, they would sort into their own peer-reviewed journals. PRSM may be a start, but ultimately I don’t see control over the submissions process (the fun part) shifting to professors, until the latter start handling the citechecking and production process (the not-so-fun part). In this way, “calls for abandoning law reviews are counterproductive unless faculty are committed to occupying the field.”
• Readers skeptical about some of the claims I’ve made–such as editors having enough expertise to evaluate scholarship in a specialty area–might check out a post by Lisa Larrimore Ouellette, a YLJ articles editor, that I just now came across and that makes some of the same points.
• I suggested last time that articles generally place into the right category for their quality. This isn’t a hard and fast rule, and isn’t meant to suggest something like a “merit pyramid,” as a commenter alluded to in a recent thread on PrawfsBlawg. I’m sure I’m not the only person who has read an article and thought its quality didn’t necessarily match up with how it placed (in either direction). Anomalous preferences, internal board politics, and informational asymmetries can throw wrenches into the placement process, giving us pause before thinking rankings are anything but a highly imperfect proxy for “quality.”
• That said, it may still be a proxy. Getting to a final board read at a “top” journal may mean that an article is among the best 30 to 100 articles they are considering. Even if a publication offer doesn’t follow, if journals disclose that a submission got to a final board read, authors might conclude from that signal that they’re doing something right.
• Journals will have different policies about whether they treat “articles” and “essays” in the same process. Many collapse the distinction, unless of course they specifically say otherwise in their submissions instructions.
• The consensus seems to be that many journals have closed up shop for this volume. Authors might consider off-season submissions for journals that do year-round submissions.

Thanks again to Roger and everyone at Opinio Juris for hosting me during a week that turned out to have some important news for those interested in international law. For everyone who has placed articles this year, congratulations; for those currently working on articles with an eye toward submitting later this year, best of luck!

Common Pitfalls and Diamonds in the Rough

by James Tierney

In earlier posts I’ve talked about some of the gatekeeping points in the submissions process, although I’ve largely described them from a procedural angle. Today’s post deals with the substantive angle.

Take the cursory review stage, when editors might sort incoming submissions into “reject” and “consider further” piles. Each journal and each editor will have their own policies (and thus statistics) about how many articles are seriously considered past this first stage; for illustrative purposes, imagine that one-half of submissions are culled out at this stage. Editors might cull articles for various reasons. Perhaps the article doesn’t satisfy formal submissions criteria—no articles from students, no “comments” or “notes,” no book reviews, etc. Depending on editors’ preferences, there will also be informal submissions criteria—no partisan hack jobs, no articles about cases written by that case’s counsel, no math, etc. Perhaps the editor gives the article a quick read, and it turns out it’s not well argued, not well researched, or simply not “interesting.”

Except for the spring article submissions dump when boards usually turn over, editors will usually have good information about their co-editors’ preferences. They should be able to predict where an article would fall on a unidimensional preference spectrum of the kind I described last time. They should be able to predict whether a coalition of editors, sufficient to vote for making an offer, would form. When it is substantially certain that other editors would not vote to publish, cursory review marks the end of the road for the article, as editors are rational, time is scarce, and no one wants to spend time discussing pieces that will not be published. When there is uncertainty about other editors’ preferences, or when those preferences are sufficiently predictable to think that a coalition would vote to publish, the article moves ahead.

I alluded above to some of the pitfalls that increase the chances that an article will not move ahead for further review, and in my last post I suggested that most submissions are undertheorized early drafts. What I mean is that editors will adjust the attention and care they give to reviewing an article to their impression of the attention and care the author has given to the article. Obvious early drafts get rejected. For example, including very few footnotes suggests the author wants our editors and staff to complete the library research; this is a job for the author’s RA, not an unpaid journal staffer. It also undercuts an author’s credibility, suggesting that there is little or no support for her arguments. On the other hand, it’s of course possible to have citation overkill, so authors should try to find balance here.

The papers that catch my eye are those that are well argued and well written, but very few people are able to pull this off. I don’t think it’s coincidence that the people who do these things well have usually also presented their article in workshops and job talks, or may have received feedback from a range of colleagues. I do not mean to suggest that a footnote with a long list of acknowledgements is a proxy for article quality; I would usually pay it no heed, since an acknowledgment might mean no more than someone pointing out a misplaced comma. (If I had ever paid attention to acknowledgments I might be worried about perverse effects similar to the “name recognition” effect that some invoke as a reason in favor of moving to blind review.) Either way, the point is much broader than this. Authors’ attitude toward soliciting feedback from colleagues and at workshops should be “the more the merrier”—the quasi-Condorcetian idea that as more people offer feedback, the aggregated feedback is more likely to have identified aspects of the article that need to be revised.
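The quasi-Condorcetian intuition can be made concrete with a toy calculation. If each reader, independently, has some chance of spotting a given flaw, the probability that at least one of n readers catches it rises quickly with n. (The per-reader probability below is made up purely for illustration; it is not a figure from this post.)

```python
# Toy model of the "more the merrier" feedback intuition:
# if each independent reader spots a given flaw with probability p,
# the chance that at least one of n readers catches it is 1 - (1 - p)^n.

def prob_flaw_caught(p: float, n: int) -> float:
    """Probability that at least one of n independent readers spots a flaw."""
    return 1 - (1 - p) ** n

p = 0.3  # assumed per-reader detection probability (hypothetical)
for n in [1, 3, 5, 10]:
    print(f"{n:2d} readers -> flaw caught with probability {prob_flaw_caught(p, n):.2f}")
```

This is of course the simplest independence assumption; readers whose views overlap with the author’s own are correlated, and add less than the formula suggests.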

The value of that feedback comes when it is actually incorporated into the article itself, which is ultimately the basis on which editors decide whether to extend a publication offer. Moreover, the process of soliciting feedback works when readers offer serious substantive criticism of arguments, with an eye toward the shared goal of improving an article; it doesn’t work when authors solicit comments from colleagues whose views overlap with the author’s own. Improving an article may mean stepping out of one’s comfort zone, asking for comments from a broader range of voices, and then actually incorporating that feedback into the substance of the article. This can actually be helpful for authors in relatively “specialized” fields, for whom soliciting feedback from colleagues in other areas may help show how an article can be framed to make sense for a wider audience (including student editors and non-specialist faculty). This may also mean situating one’s argument within a broader context—statutory change or doctrinal evolution, scholarly debates, connections with other disciplines, or connections with other areas of law. Authors can complain all they want about how much or little editors know about their specialized area of law, but in doing so they may forget that a mainline journal’s readership may be similarly unfamiliar with a given specialty area. All authors would like to think that their article might revolutionize its area of estate law, antitrust law, or immigration law, but unless it is accessible to non-specialists, editors and readers may not be able to pick out its potential.

Two more points about soliciting advice and workshopping articles. First, after a journal makes an offer and the author accepts, the author may want to incorporate further input from post-submissions presentations. This is rarely a “problem” from editors’ perspective, unless a tight production schedule precludes substantial further revisions. But it should be a “problem” from authors’ perspective, since waiting to solicit comments works at cross purposes with the signaling functions of the submissions process. Articles editors may attend workshops, or they may be familiar with how they work; in any event, a meeting in which editors discuss an article is likely to track the style of a particularly tough-minded workshop. Articles become more impressive as they become more polished, but in order to convince editors to extend an offer, the author has to apply the polish before submitting the article. In other words, it’s better to have the merits and demerits of an article fleshed out well before the article is in front of an editorial board; better still to have editors unable to identify possible demerits because they’ve already been addressed. Authors wondering why they are not placing in journals ranked as highly as they would like might consider whether their articles are in publishable shape before submission. Publishable shape is not the same as “could be accepted by a board”; what I mean is that authors serious about making a positive impression on journal editors should aim to have every aspect of their article as fleshed out as it would appear in a final published edition.

Second, consider the revise-and-resubmit. Some journals, probably a minority of them, may have policies against considering articles that they or an earlier board rejected. At other journals, a rejection is an implied invitation to revise and resubmit—not an invitation to resubmit without revising. In this age of electronic submissions it’s easy to tell how extensively an author has revised an article, using something like Microsoft Word’s “compare documents” feature. Nothing would annoy me more as an editor, and make me want to summarily reject the article, than a resubmitted article with little or no revision. An author unhappy with a rejection should take the time to solicit candid advice from colleagues, students, RAs, etc.—not law review editors—about how an article should be strengthened. She should not simply resubmit with the hope that a less-discriminating later board will overlook whatever flaws the previous board found.

Those are a few ideas about how to make articles better—how to save them from the reject pile. But just making an article better will usually not be enough. The review at intermediate and full-board review stages will be far more exacting. It bears repeating the obvious point that to get an offer an article must be among the very best considered all year. Remember that of the thousands of submissions top mainline journals receive every year, they may make somewhere on the order of twelve offers (or up to fifty offers, at least for those journals that lose articles more frequently to expedited review elsewhere). Limited page counts mean journals cannot publish as many articles as they might like, but even that pool of worthy articles is pretty small, extending to probably no more than the best fifty or so that come across an editor’s desk. I suspect that while there are some informational inefficiencies in the placement process—expedite signals may be too strong or too weak, for example—articles are usually placed in the right “tier” for their quality.

My own experience was learning quickly (but not too quickly) upon first reading an article whether it “fit” the kind of article our journal would publish. This intuition, separate from the substantive review factors I described above, is hard to describe. It could simply collapse into the kind of prediction about coalition-building and co-editors’ preferences I described in my last post. But even before editors learn their co-editors’ preferences, they will have been staffers and will have been exposed to the kind of articles the journal publishes. There is also a subjective, know-it-when-I-see-it quality to identifying well-written and well-argued articles; usually, it means not only that the author is executing the technical aspects skillfully, but also that she is addressing my objections, questions, or concerns as I’m reading the article. Editors may try to fill in these gaps during the post-offer revision process, but it’s better when there are no gaps at all. Authors looking to break into a certain group of journals should look at what those journals are publishing, and should make their articles as polished as those final products before submitting them for review.

An article’s subjective quality is not the only factor in the review decision. Journals have more or less exacting standards for preemption, but they likely take such standards very seriously. In my experience, too many authors play fast and loose with prior work; preemption is a very common reason why boards will not extend publication offers. Obvious preemption—for example, publishing as an article a book chapter that has already been published elsewhere—is sure to elicit a rejection. An author who declines to cite to previous work (her own or others’) on which her article is based may get beyond the cursory review stage, but a forthcoming preemption check will probably show the board why it should not allocate scarce volume space to an argument that is not “new.” Even if article #2 is not based on article #1, editors may not take seriously an author who does not make a serious attempt to show why article #2’s argument is novel and not derivative; the omission may make them presume that it was based on article #1.

I ought to mention a few other minor pitfalls to close out this post. First, too many authors overstate their arguments but never deliver. This is reason to be skeptical about claims from authors (as mentioned above) who think their article will revolutionize its area of estate law, to pick a random example. Second, editors may prickle when authors have not scrubbed their files of metadata like “track changes,” which many people use to track revisions and provide comments on drafts. The oversight might be embarrassing in itself. But when failing to scrub metadata reveals text from earlier drafts, or feedback from colleagues and RAs, the effect may be worse—especially if the author hasn’t addressed the feedback but the editors think the feedback is materially important. Finally, there is signaling value in sending an exclusive submission to a single journal, which may get the editors’ attention. There is no similar value in sending an “exclusive” submission to the top ten journals—a strategy that is by definition not “exclusive,” and that carries only the signal that the author thinks very highly of her article.

Agenda-Setting Editors and Specialized Articles

by James Tierney

Often used to model legislative politics, positive political theory (PPT) has core insights that can be applied to the journal submissions process as well. There are important differences between the legislative and editorial processes. Like legislative action, however, editors’ deliberations and voting on submissions are a process of aggregating preferences within the constraints of voting rules and other institutional features. In this post I sketch out the outlines of such a theory. (For an application of PPT potentially of interest to the international law community, check out Josh Benson’s treatment of the “Guantanamo Game” a few years back.)

Each journal has its own set of submissions review procedures and voting rules, but decisionmaking is likely to be similar enough at each journal, and it’s likely to take the form of a sequential game. Editors play a micro game for each article, and a macro game for the entire slate of articles. We can describe the macro game simply, for our purposes, as maximizing the sum of the best outcomes across micro games for the twelve (or however many) articles in a slate. Each micro game starts effectively when an article comes into the pipeline, and ends either when the editors reject the article, or when the author accepts or declines their publication offer. The collegiality on journal boards may mean there are norms against openly strategic voting, and voting systems might be designed to minimize editors’ incentive to vote strategically. Nonetheless, we can assume that people will be strategic if they’re trying to maximize their preferences; they will figure out how to play the system to get the outcomes they want. And besides, it’s easier to model real-world processes by making unrealistic simplifying assumptions.

Voting rules will shape the details of the submissions process, and will also shape the leeway editors will have in bringing articles to the next stage of review, deliberation, and voting. Time is a valuable resource for editors and no one wants to waste time reviewing articles for which everyone knows the full board will not extend a publication offer. The decisions at earlier stages of review are made in the shadow of the decisions that must be made at later stages of review—and in particular, the preferences of the agenda-setters at different stages of review. Imagine there are three rounds of review: cursory review by a single editor, intermediate review by a five-editor panel, and full board review by nine editors. Imagine further that the voting rules require a single editor to bump an article up to intermediate review, three editors to bump up to full review, and six editors to make an offer. (The point should be similar if the journal uses a different voting rule, like weighted voting.) In the cursory and intermediate review stages the editors’ attention will be directed toward whether an article will make it to full-board review, and whether it will receive an offer. In particular, they will be attuned to the preferences of agenda-setters at each stage of review. They will also be attuned to the reputational consequences of their decisions with other members of the intermediate and final review groups: flagging an article that does not gain support in intermediate review may hurt an editor’s credibility going forward.

Agenda setters are the “swing” editors whose preferences will be pivotal in evaluating a given submission. Editors work closely together and develop senses of each other’s preferences and idiosyncrasies. Consider the question of placing international law articles in mainline journals. Editors will have different preferences along a unidimensional spectrum—essentially whether they are more or less interested in publishing an article about international law. Some editors may be very interested in international law, while others may be skeptics. If editors know each other’s preferences, the entrepreneurial editor pushing to publish a certain international-law article will look to the preferences of the sixth of the nine voting editors. The three editors whose votes she does not need aren’t likely to come into the analysis—except to the extent journals are social institutions, editors care about internal reputations, etc. Thus the only international-law articles that are likely to get serious attention past intermediate review are those that are consistent with the preferences of that pivotal sixth vote. (This same idea can be applied to other questions—whether to publish an article for which the ideological slant is more or less conservative, whether to publish a bankruptcy article, whether to publish an empirical article, etc.)
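The pivotal-vote idea can be sketched in a few lines. Suppose each of nine editors has a threshold of interest on the unidimensional spectrum, and an article earns an editor’s vote only if its appeal clears that editor’s threshold; under a six-of-nine rule, the sixth-lowest threshold sets the bar. (All numbers here are hypothetical illustrations, not data from the post.)

```python
# Sketch of the pivotal ("swing") editor idea on a unidimensional spectrum.
# An article earns an editor's vote if its appeal score clears that editor's
# interest threshold; with a six-of-nine voting rule, the article gets an
# offer iff it clears the sixth-lowest threshold. Numbers are made up.

def offer_made(article_appeal: float, thresholds: list[float], votes_needed: int = 6) -> bool:
    """Return True if enough editors' thresholds are cleared to extend an offer."""
    votes = sum(1 for t in thresholds if article_appeal >= t)
    return votes >= votes_needed

# Nine editors, from IL enthusiasts (low threshold) to IL skeptics (high threshold).
thresholds = [0.2, 0.3, 0.35, 0.4, 0.5, 0.6, 0.8, 0.85, 0.9]
pivotal = sorted(thresholds)[5]  # the sixth vote's threshold sets the bar

print(offer_made(0.65, thresholds))  # clears six thresholds -> True
print(offer_made(0.55, thresholds))  # clears only five -> False
print(pivotal)                       # 0.6: the pivotal editor's standard
```

On this toy model, raising the sixth-lowest threshold—as when a slot has already been filled with an article of the same type—raises the bar a later article of that type must clear, which is the preference-shifting effect discussed in this post.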

All journals are constrained in the number of articles they can publish, and thus in the number of offers they can make. In order to fill a volume with twelve articles, the board may make from twelve to many dozens of offers (depending on how many articles get expedited away to other journals). These constraints interact with the unidimensional preferences of editors on various issues, which are likely to change across the macro game as article slots fill up with articles of one type or another. So once a board makes an offer on, or places, an international law article in one of its slots, the preferences of the full board may shift up the spectrum; the sixth voter’s preferences in particular will become more stringent; and another international law article will face a higher burden of persuasion in order to elicit an offer. This preference-shifting phenomenon might help explain why authors go for the first-mover advantage, a question I puzzled over in my last post. First-mover strategies work well when journals fill up quickly. But for journals that do rolling submissions, the vote on the submission will still be oriented toward the merits of this piece in light of alternative submissions expected to come in through the year.

In short, a specialty article will often face an uphill battle in a mainline journal; in order to be taken seriously, it will have to be one of the “best” articles of that specialty that the editors anticipate seeing that year (given the preference-shifting effects of making an offer). This may mean many things: the article is interesting, provocative, and well argued; it is technically well executed and substantially finished; or it has received comments at workshops and from other professors, and thus the author has addressed many possible counterarguments. Along these dimensions, the idea of the “best” article represents one that will bring the journal the most bang for its editorial-labor buck. This idea also captures the possibility that the article’s argument is wide enough to secure votes along other preference dimensions. An article about international criminal law, or international banking regulation, may have substantive overlap with editors’ preferences in other areas. For example, this will collapse into unidimensional preferences when the editors interested in international law, and the editors interested in criminal law, all vote together as a coalition for this article—a coalition that may be enough to secure a sixth vote.

What does this mean for international-law scholars seeking to publish in mainline journals? First, specialized articles will tend to be placed in specialized journals, while generalist articles will be placed in generalist journals. The more an article is able to draw connections to doctrines, debates, cases, examples, implications, etc., outside the narrow field in which an international-law article is operating, the more likely that the article will satisfy the pivotal agenda-setting sixth editor’s preferences. Moreover, having many high-quality specialized journals in a given area (like IL) may make it more difficult for authors to land articles in mainline journals. An editor’s conclusion that an article’s subject matter is too narrowly specialized would be shaped by the wide set of alternative journals in which the author could publish. My intuition is that if there were more criminal law specialty journals, for example, we’d probably see mainline journals substituting away from publishing as many pieces on criminal law. The take-home for authors here, though, is that appealing to a wider audience can be helpful.

A second suggestion is that authors should take advantage of the credible signal that an expedite request provides. The best bet is to play up the ladder, since the most helpful signals for mainline journals will be expedites from top secondary journals. Since (as I explained above) an IL article would have to be among the “best” IL pieces a board would see all year, the signal of an offer from a top secondary journal is a more credible signal that yours is one of the “best” pieces, than an offer from merely any secondary journal. In this sense, playing up the ladder may mean taking advantage of the secondary-journal expedite process before moving over to mainline journals. At the same time, securing offers from mainline journals helps cut back on the previous paragraph’s concerns about an article’s scope being too narrow.

A final suggestion would be to take time to get an article into the best possible shape. The vast majority of article submissions are undertheorized early drafts. The best articles are those that look as if they might be publishable immediately; for these the author has invested considerably more time and effort in polishing its arguments—including by addressing material counterarguments. Articles of this sort are probably fewer than 10% of submissions; because of their relative rarity, very well executed articles are those most likely to make it to intermediate or full-board review. Reading a well-executed IL article may favorably shift the preferences of pivotal editors who might not otherwise be interested in publishing the article just for its subject matter.

I can’t offer much more than that by way of the specific challenges IL authors face at mainline journals. It would be folly to try to identify what, in particular, articles editors at mainline journals are looking for in IL scholarship. I doubt more than one or two editors will have thought about IL enough to have developed such preferences. Relatively “expert” editors—who are agenda setters in their own right—may be tasked with reviewing IL submissions in the earlier rounds of review and thus would be positioned to choose the articles that both coincide with their subject-matter preferences and that are likely to secure the pivotal sixth vote in full-board review. My own subject-matter interests, for example, would have made me more amenable to reviewing articles about the laws of war, and less amenable to articles about international banking regulation. But the preferences of these editors and of boards in general will be hidden (unless articulated in calls-for-submission, for example), will differ widely from editor to editor and from journal to journal, and thus will not be available to authors who would want to approach the submissions process strategically.

One View of the Sausage Factory

by James Tierney

Thanks to Roger and everyone at Opinio Juris for giving me this opportunity to pen some thoughts about law journal submissions. I hope to provide an inside look at how the sausage is made—and in so doing, shed light on some trends evident from our side that might be less apparent from your side. Because journals treat submissions practices like trade secrets, I’m constrained in how candid I can be. I should also emphasize at the outset that these opinions and observations are my own, not those of the University of Chicago Law Review. They shouldn’t be taken to reflect or represent the policies, practices, or experiences of that journal (except where specifically noted). Besides, in order to make my comments more generalizable, I’ll be abstracting away from my particular experiences as appropriate, drawing upon conversations I’ve had with editors at other journals.

I’ll start with two ugly truths about article submissions. The first is that there simply isn’t enough time to review every article with the depth and attention it deserves. The most popular journals may receive upwards of several thousand unsolicited article submissions each year. The details are different from journal to journal, but professors seem generally to adhere to a bimodal submissions schedule (spring and fall submissions periods). The time crunch arising from this schedule only compounds the problem, as I explain at the end of this post. Of course, I expect that at most journals, editors make (and take seriously) a commitment to give every submission at least one read. Even so, this review will not always be as thorough as would be commensurate with the substantial labor investments authors make in writing articles.

If this means that there needs to be a cursory review stage early in an article’s submissions life cycle, the second ugly truth is that articles can be quickly sorted at a cursory review stage into two piles—“reject” or “consider further.” The upshot is that more articles are likely to survive initial review than one might expect. Journals have different policies about these things, and individual editors have their own idiosyncratic preferences. These will manifest in rather divergent first-cut culling rates across journals, so your mileage may vary. In future posts I hope to explain some pitfalls that lead editors to put an article in the first pile, and some factors that may make editors more likely to put an article into the second pile. For now, it’s enough to admit that law review editors turn to imperfect proxies in evaluating submissions. After all, as Brian Tamanaha observed a few years back, we are “students, after all, with classes, exams, and jobs.” He also added that we have “limited knowledge about law.” This is overstated, for journal boards will aim to have different competencies represented in their articles group—someone who can evaluate bankruptcy pieces, someone who can evaluate international law pieces, etc. (I will return to this point in a later post about agenda-setters.) I disagree with Tamanaha’s suggestion that author identity—the “elite letterhead”—is the most obvious proxy of article quality. The identity of an author’s institution likely correlates, if loosely, with the quality of her prior work. But the quality of the author’s current work is what editors care about. And my claim is that even 3L editors with “limited knowledge about law” are often attuned to the quality of articles on their merits.

So let’s take elite letterhead off the table and replace it with a more likely suspect: the request for expedited review. This allows editors at the later-in-time journal to capture signaling benefits that accrue from the deliberative labor of editors at the earlier journal—all without incurring the review costs themselves. And those costs are substantial. Most importantly, there are opportunity costs involved in selecting one article over another. Page space is scarce. Articles board members will rationally care about the articles they publish in their volume, either for intrinsic reasons (interest in the subject, interest in sending a message, interest in academia), or for the reputational benefits that accrue from being affiliated with any marginally higher-“quality” article. Additionally, there are costs involved with reviewing articles, deliberating upon them, and coordinating acceptance offers. Finally, there are costs associated with the uncertainty about whether any given article has actually filled an article slot or whether it’ll get “stolen” away—an uncertainty that lasts as long as the article remains expedited to other journals.

These signals are costly. As Adam Samaha might say—channeling Eric Posner—“costly signals are credible signals.” Especially during the biannual dump periods, when editors at popular journals may be facing submissions rates of several dozen articles per day (or more), it’s tempting to rely on these credible signals to make the job easier. This system works well as a proxy supplementing more thorough review of submissions, but would be inadequate on its own. Under a system using that review rule exclusively, many excellent articles would inevitably slip through the cracks by not being picked up at all, or by being picked up by a journal that makes an exploding offer not amenable to further expedited review.

The signaling function of expedited review only works when earlier journals actually make offers. Over the last year I noticed a dramatic rise in authors’ use of the “soft expedite,” a message explaining that another board has informed the author that it expects to bring the article to full-board review and to have a decision by some specific future date. These emails are rarely useful, although an important exception is for those few journals that notoriously make exploding offers of less than, say, 24 hours. (I don’t intend to offer detailed thoughts on the exploding offer, which strikes me as a useful if inconsiderate means of protectionism. I’d be interested in hearing reactions from other recent editors and from authors.) In that post from several years ago, Orin Kerr correctly suggests that a board may face expedited review of an exploding offer without enough time to consider it seriously on the merits. But Kerr is wrong if, as his post’s conclusion suggests, he believes that when boards must review in the shadow of an exploding offer they are likely to be swayed by shiny objects like a professor’s name and school, in lieu of considering the article’s merits.

More likely, they will simply decline to act on the expedite request at all (which is why I called the exploding offer a form of protectionism). Unless the article is already teed up for immediate review, boards will find it too difficult to coordinate schedules, distribute and read the article, and reach an answer in time to contact the author before the offer expires. “Soft expedite” emails are thus helpful for the narrow group of journals that are known to give very short deadlines. For journals that have normal-length offer windows, soft expedite emails are useless. The signal they carry is not costly and is thus not credible: it means only that a board is investing time in reading and meeting on an article, not accruing the much higher opportunity costs of making an offer. And as a pragmatic matter, from the perspective of an overworked editor it can be frustrating to get hundreds of emails offering (more or less) variations on this theme: my work may receive an offer in the future, and you may have a normal amount of time to act upon it. So authors might think twice before sending their second, or third, soft-expedite-update email of the day.

One last thought for today’s post. There seems to be a good deal of grumbling on the blogs about the pernicious effects of the articles submissions calendar—specifically how it can lead editors to use imperfect proxies for evaluating an article’s merit. This system is, to a large extent, a function of the choices authors themselves have made. This winter I received more than a handful of emails from RAs and school administrators asking when our board turned over, some explicitly saying that the authors thought they would have better luck in front of a newer (less experienced?) board. The “first-mover” advantage that people seek by gaming the system this way makes sense only if journals fill up their entire volumes in the space of a month in the spring. Many journals, including Chicago’s, consider articles year-round on a rolling basis. Authors don’t take advantage of such review policies even when journals publicize them, seemingly preferring to stick to the dump-period model—again, maybe on the assumption that less experienced boards are less discriminating. Yet the submissions volume during the articles dump period makes it costlier for editors to review any given article, and more likely that an article we might otherwise like to publish will slip through the cracks. I suggest authors might be able to get around concerns about proxy decisionmaking by not trying to game the system at all. In other words, authors should move away from the “mad rush in February” model, and toward a model in which they submit no sooner and no later than when the article is “complete” enough to be published.

In future installments I hope to sketch out some thoughts about agenda-setting on articles boards (and what this means for international law scholars looking to publish in mainline journals); about recent trends in international law scholarship; and about some of the more enduring debates about the merit of student-edited journals.

James Tierney on the Law Review Submission Process

by Roger Alford

Opinio Juris is pleased to announce that James Tierney will be guest blogging with us for the next few days. James is the outgoing Executive Articles and Book Reviews Editor for the University of Chicago Law Review. He will blog about his perspectives on the law review submission process, with particular attention given to international law scholarship in mainline journals.

James and I first met through Opinio Juris, having struck up an email conversation following a post I wrote in February 2008 on the moral stages of why nations obey international law. After a series of conversations about our mutual interest in international law and psychology, we decided to co-author a book chapter (abstract available here). The finished product is a chapter entitled Moral Reasoning in International Law, to be published in Trey Childress’s forthcoming book on The Role of Ethics in International Law (Cambridge University Press, 2011).

James will complete his JD at the University of Chicago this spring, and will then clerk for Judge Mary Schroeder of the U.S. Court of Appeals for the Ninth Circuit. He has an MA in international relations from the University of Chicago and an AB from Brown.

Welcome James!