Common Pitfalls and Diamonds in the Rough

In earlier posts I’ve talked about some of the gatekeeping points in the submissions process, although I’ve largely described them from a procedural angle. Today’s post deals with the substantive angle.

Take the cursory review stage, when editors might sort incoming submissions into “reject” and “consider further” piles. Each journal and each editor will have their own policies (and thus statistics) about how many articles are seriously considered past this first stage; for illustrative purposes, imagine that one-half of submissions are culled at this stage. Editors might cull articles for various reasons. Perhaps the article doesn’t satisfy formal submissions criteria—no articles from students, no “comments” or “notes,” no book reviews, etc. Depending on editors’ preferences, there will also be informal submissions criteria—no partisan hack jobs, no articles about cases written by that case’s counsel, no math, etc. Perhaps the editor gives the article a quick read, and it turns out it’s not well argued, not well researched, or simply not “interesting.”

Except for the spring article submissions dump, when boards usually turn over, editors will generally have good information about their co-editors’ preferences. They should be able to predict where an article would fall on a unidimensional preference spectrum of the kind I described last time. They should be able to predict whether a coalition of editors large enough to vote for an offer would form. When it is substantially certain that other editors would not vote to publish, cursory review marks the end of the road for the article: editors are rational, time is scarce, and no one wants to spend time discussing pieces that will not be published. When there is uncertainty about other editors’ preferences, or when those preferences are predictable enough to suggest that a coalition would vote to publish, the article moves ahead.
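To make that coalition prediction concrete, here is a minimal sketch of the intuition. It is purely illustrative, not anything a journal actually runs: it assumes each editor has an ideal point on a one-dimensional spectrum and votes to publish an article that falls within some tolerance of that point, and all of the numbers below are made up.

```python
# Hypothetical model: editors as ideal points on a unidimensional
# preference spectrum. An editor votes to publish if the article's
# position lies within her tolerance; the article moves ahead only
# if enough votes exist to form an offer-making coalition.

def coalition_forms(article_position, editor_ideal_points, tolerance, votes_needed):
    """Predict whether enough editors would vote to publish."""
    votes = sum(
        1 for ideal in editor_ideal_points
        if abs(article_position - ideal) <= tolerance
    )
    return votes >= votes_needed

# Seven editors spread across the spectrum; a majority of four must agree.
editors = [-0.8, -0.5, -0.1, 0.0, 0.2, 0.6, 0.9]
print(coalition_forms(0.1, editors, tolerance=0.5, votes_needed=4))   # True
print(coalition_forms(0.85, editors, tolerance=0.5, votes_needed=4))  # False
```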

I alluded above to some of the pitfalls that increase the chances that an article will not move ahead for further review, and in my last post I suggested that most submissions are undertheorized early drafts. What I mean is that editors adjust the attention and care they give to reviewing an article to match their impression of the attention and care the author has given to writing it. Obvious early drafts get rejected. For example, including very few footnotes suggests the author wants editors and staff to complete the library research; that is a job for the author’s RA, not an unpaid journal staffer. It also undercuts the author’s credibility, suggesting that there is little or no support for her arguments. On the other hand, citation overkill is of course possible too, so authors should try to strike a balance here.

The papers that caught my eye were those that were well argued and well written, but very few people are able to pull both off. I don’t think it’s a coincidence that the people who do these things well have usually also presented their articles in workshops and job talks, or have received feedback from a range of colleagues. I do not mean to suggest that a footnote with a long list of acknowledgments is a proxy for article quality; I would usually pay it no heed, since an acknowledgment might mean no more than someone pointing out a misplaced comma. (If I had ever paid attention to acknowledgments, I might worry about perverse effects similar to the “name recognition” effect that some invoke as a reason for moving to blind review.) Either way, the point is much broader than this. Authors’ attitude toward soliciting feedback from colleagues and at workshops should be “the more the merrier”—the quasi-Condorcetian idea that as more people offer feedback, the aggregated feedback is more likely to have identified the aspects of the article that need revision.
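A back-of-the-envelope illustration of that quasi-Condorcetian point, under the admittedly heroic assumption that each reader independently notices a given flaw with the same probability: if that probability is p, the chance that at least one of n readers catches the flaw is 1 − (1 − p)^n, which climbs quickly as n grows.

```python
# Illustrative numbers only: if each reader independently spots a given
# flaw with probability p, the chance that at least one of n readers
# spots it is 1 - (1 - p) ** n.

def chance_flaw_caught(p, n):
    return 1 - (1 - p) ** n

for n in (1, 3, 5, 10):
    print(n, round(chance_flaw_caught(0.3, n), 3))
# 1 0.3
# 3 0.657
# 5 0.832
# 10 0.972
```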

The value of that feedback comes when it is actually incorporated into the article itself, which is ultimately the basis on which editors decide whether to extend a publication offer. Moreover, the process of soliciting feedback works when readers offer serious substantive criticism of arguments, with an eye toward the shared goal of improving the article; it doesn’t work when authors solicit comments only from colleagues whose views overlap with their own. Improving an article may mean stepping out of one’s comfort zone, asking for comments from a broader range of voices, and then incorporating that feedback into the substance of the article. This can be especially helpful for authors in relatively “specialized” fields, for whom feedback from colleagues in other areas may show how an article can be framed to make sense to a wider audience (including student editors and non-specialist faculty). It may also mean situating one’s argument within a broader context—statutory change or doctrinal evolution, scholarly debates, connections with other disciplines, or connections with other areas of law. Authors can complain all they want about how much or how little editors know about their specialized area of law, but in doing so they may forget that a mainline journal’s readership may be similarly unfamiliar with that specialty. All authors would like to think that their article might revolutionize its corner of estate law, antitrust law, or immigration law, but unless it is accessible to non-specialists, editors and readers may not be able to recognize its potential.

Two more points about soliciting advice and workshopping articles. First, after a journal makes an offer and the author accepts, the author may want to incorporate further input from post-submission presentations. This is rarely a “problem” from the editors’ perspective, unless a tight production schedule precludes substantial further revisions. But it should be a “problem” from the author’s perspective, since waiting to solicit comments works at cross purposes with the signaling functions of the submissions process. Articles editors may attend workshops, or at least be familiar with how they run; in any event, a meeting in which editors discuss an article is likely to track the style of a particularly tough-minded workshop. Articles become more impressive as they become more polished, but to convince editors to extend an offer, the author has to apply the polish before submitting the article. In other words, it’s better to have the merits and demerits of an article fleshed out well before it is in front of an editorial board; better still if editors cannot identify possible demerits because they have already been addressed. Authors wondering why they are not placing in journals ranked as highly as they would like might consider whether their articles are in publishable shape before submission. Publishable shape is not the same as “could be accepted by a board”; what I mean is that authors serious about making a positive impression on journal editors should aim to have every aspect of their article as fleshed out as it would appear in the final published version.

Second, consider the revise-and-resubmit. Some journals, probably a minority, may have policies against considering articles that they or an earlier board rejected. At other journals, a rejection is an implied invitation to revise and resubmit—not an invitation to resubmit without revising. In this age of electronic submissions, it’s easy to tell how extensively an author has revised an article, using something like Microsoft Word’s “compare documents” feature. Nothing would annoy me more as an editor, and make me want to summarily reject an article, than a resubmission with little or no revision. An author unhappy with a rejection should take the time to solicit candid advice from colleagues, students, RAs, etc.—not law review editors—about how the article could be strengthened. She should not simply resubmit in the hope that a less discriminating later board will overlook whatever flaws the previous board found.
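As an aside, one need not even open Word for a rough revision check; a few lines of Python’s difflib library give a crude measure of how much two drafts overlap. This is just a sketch, and the file names below are hypothetical: a ratio near 1.0 suggests a near-verbatim resubmission.

```python
# Rough revision check: compare two drafts of an article and report how
# similar they are. A ratio near 1.0 suggests a near-verbatim
# resubmission. The file names are hypothetical.
import difflib

def similarity(old_path, new_path):
    with open(old_path, encoding="utf-8") as f:
        old_lines = f.readlines()
    with open(new_path, encoding="utf-8") as f:
        new_lines = f.readlines()
    return difflib.SequenceMatcher(None, old_lines, new_lines).ratio()

ratio = similarity("submission_v1.txt", "submission_v2.txt")
print(f"Overlap: {ratio:.0%}")
```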

Those are a few ideas about how to make articles better—how to save them from the reject pile. But just making an article better will usually not be enough. The review at the intermediate and full-board stages will be far more exacting. It bears repeating the obvious point that to get an offer, an article must be among the very best considered all year. Remember that of the thousands of submissions top mainline journals receive every year, they may make somewhere on the order of twelve offers (or up to fifty, at least for journals that lose articles more frequently to expedited review elsewhere). Limited page counts mean journals cannot publish as many articles as they might like, but that category is also pretty small, extending to probably no more than the best fifty or so articles that come across an editor’s desk. I suspect that while there are some informational inefficiencies in the placement process—expedite signals may be too strong or too weak, for example—articles are usually placed in the right “tier” for their quality.

My own experience was learning quickly (but not too quickly) upon first reading an article whether it “fit” the kind of article our journal would publish. This intuition, separate from the substantive review factors I described above, is hard to articulate. It could simply collapse into the kind of prediction about coalition-building and co-editors’ preferences I described in my last post. But even before editors learn their co-editors’ preferences, they will have been staffers and will have been exposed to the kind of articles the journal publishes. There is also a subjective, know-it-when-I-see-it quality to identifying well-written and well-argued articles; usually, it means not only that the author is executing the technical aspects skillfully, but also that she is addressing my objections, questions, or concerns as I’m reading the article. Editors may try to fill in these gaps during the post-offer revision process, but it’s better when there are no gaps at all. Authors looking to break into a certain group of journals should look at what those journals are publishing, and should make their articles as polished as those final products before submitting them for review.

An article’s subjective quality is not the only factor in the review decision. Journals have more or less exacting standards for preemption, but they take such standards very seriously. In my experience, too many authors play fast and loose with prior work; preemption is a very common reason why boards decline to extend publication offers. Obvious preemption—for example, publishing as an article a book chapter that has already appeared elsewhere—is sure to elicit a rejection. An author who declines to cite the previous work (her own or others’) on which her article is based may get past the cursory review stage, but a subsequent preemption check will probably show the board why it should not allocate scarce volume space to an argument that is not “new.” Even if article #2 is not based on article #1, editors may not take seriously an author who makes no serious attempt to show why article #2’s argument is novel rather than derivative; the omission may lead them to presume that it was based on article #1.

I ought to mention a few other minor pitfalls to close out this post. First, too many authors overstate their arguments and then never deliver. This is reason to be skeptical of authors (as mentioned above) who think their article will revolutionize its area of estate law, to pick a random example. Second, editors may bristle when authors have not scrubbed their files of metadata like “track changes,” which many people use to track revisions and provide comments on drafts. The oversight might be embarrassing in itself. But when failing to scrub metadata reveals text from earlier drafts, or feedback from colleagues and RAs, the effect may be worse—especially if the author hasn’t addressed feedback that the editors think is materially important. Finally, there is signaling value in sending an exclusive submission to a single journal, which may get the editors’ attention. There is no similar value in sending an “exclusive” submission to the top ten journals—a strategy that is by definition not “exclusive,” and that carries only the signal that the author thinks very highly of her article.
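For authors who want to check before submitting, a .docx file is just a zip archive of XML parts, so it is possible to look for leftover tracked changes and comments without opening Word. The sketch below is a quick check, not a full scrub, and the file name is hypothetical; it relies on the fact that tracked insertions and deletions appear as w:ins and w:del elements in word/document.xml, while comments live in word/comments.xml.

```python
# Quick pre-submission check (a sketch, not a full scrub). A .docx is a
# zip of XML parts: tracked changes show up as <w:ins>/<w:del> elements
# in word/document.xml, and comments live in word/comments.xml.
import zipfile

def has_revision_metadata(docx_path):
    with zipfile.ZipFile(docx_path) as z:
        body = z.read("word/document.xml").decode("utf-8")
        has_tracked = "<w:ins" in body or "<w:del" in body
        has_comments = "word/comments.xml" in z.namelist()
    return has_tracked or has_comments

if has_revision_metadata("draft.docx"):  # hypothetical file name
    print("Warning: tracked changes or comments survive in this file.")
```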

Comments

Placements.stuck.at.top30

This was one of the most helpful posts on legal scholarship I’ve ever read on any blog – thank you very much! I feel challenged to seek out more opportunities to present drafts at workshops and the like. The comments about preemption by prior work and about exclusive submissions were also helpful.

Still hoping for some comments about strategic timing for submissions. Editors may not be aware how many rumors about this aspect of submission circulate among faculties. In the last couple of years, the rumors have been either 1) that there is no more Spring window, because your chances of placing well are equal throughout the year, including mid-summer; or 2) that there is nothing besides the Spring window anymore, because submitting at any other time of year ensures a lower placement.