Saturday, April 30, 2016

Three common sources of error in peer review, and how to minimize them

Researchers have an odd love-hate relationship with peer review.  Most think it sucks, but also that it is necessary.  Peer review is of course a good thing when it provides the value expected of it: weeding out junk papers and improving the rest.  Unfortunately, the former often doesn't work particularly well, and when the latter works, it usually happens only after a lot of wasted time, hoop-jumping and wading through bullshit. Perhaps we put up with this simply because the toil and pain of it all have been sustained for so long that they have come to define the culture of academia: a culture that believes no contribution can be taken seriously unless it has suffered and endured the pain, and thus earned the coveted badge of 'peer-reviewed publication'.


Here, I argue that the painful route to the endorsement payoff of peer review, and its common failure to provide the value expected of it, are routinely exacerbated by three sources of error in the peer-review process, all of which can be minimized with some changes in practice.  Some interesting data for context are provided by a recent analysis of peer-review results from the journal Functional Ecology.  Like many journals now, Functional Ecology invites submitting authors to include a list of suggested reviewers for their manuscripts, and editors commonly invite some of their reviewers from this list.  Fox et al. (2016) found that author-preferred reviewers rated papers much more positively than did editor-selected reviewers, and that papers reviewed by author-preferred reviewers were much more likely to be invited for revision.
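As an aside, for readers who want a concrete sense of how a difference like this can be assessed, below is a minimal sketch in Python (using scipy) of a contingency-table test of whether the decision to invite revision is independent of reviewer source. The counts are hypothetical, invented purely for illustration; they are not the Functional Ecology data.

from scipy.stats import chi2_contingency

# Rows: reviewer source; columns: [invited to revise, rejected].
# All counts below are made up for illustration only.
observed = [
    [120,  80],   # papers assessed by author-preferred reviewers
    [ 70, 130],   # papers assessed by editor-selected reviewers
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.4g}")

A small p-value here would indicate that revision invitations are not independent of who chose the reviewers, which is the pattern Fox et al. report.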

Few will be surprised by these findings, and there is of course good reason to be concerned that the expected value of peer review has missed the mark here.  This failure is undoubtedly not unique to Functional Ecology.  It is, I suspect, a systemic feature of the traditional single-blind peer-review model, in which reviewers know who the authors are, but not vice versa. The critical question is where the signal of failure lies: in author-preferred reviewers rating papers more positively, or in editor-selected reviewers rating them more negatively?

Either could be a product of peer-review error, and at least three explanations could be involved:



(1) In some cases, there will be ‘author-imposed positive bias’, i.e. author-preferred reviewers are more likely to recommend acceptance because authors have an incentive to suggest reviewers whom they have reason to expect will review their paper positively.




(2) Other cases, however, will suffer from ‘editor-imposed negative bias’, i.e. editor-selected reviewers are more likely to recommend rejection because editors have an incentive to impose high rejection rates in order to elevate and maintain the impact factors of their journals, and thus compete with other journals for status.  Hence, in order to look as though they are meeting the rejection-rate quota imposed by their publisher or editor-in-chief, associate and subject editors are sometimes inclined to favour reviewers whom they suspect are competitors, or even bitter rivals, of the author, since such reviewers are more likely to recommend rejection and less likely to offer suggestions for improving the manuscript.  To achieve this, some editors even select reviewers from the author's list of non-preferred (excluded) reviewers.  I conducted a small experiment to test for this a few years ago with a submission to a high-end ecology journal, in which I named as a non-preferred reviewer a good friend (who knew I was conducting the experiment). Sure enough, shortly after my submission, my friend contacted me to report that he had been invited to review my paper (an invitation he declined).

(3) Finally, in some cases there will be ‘unintended reviewer mismatch’, i.e. editor-selected reviewers are more likely to recommend rejection because they are likely to be less well equipped to understand the contribution of the manuscript, or to appreciate how or why it is interesting or important.  Sometimes this results from editor ignorance; after all, despite editors' best intentions, authors will generally know better who is most qualified to review their papers, and best equipped to recommend effective revisions that bolster the quality and impact of the paper.  In other cases (where authors choose not to name non-preferred reviewers), editors may inadvertently invite reviewers who are competitors or likely to provide a ‘retaliatory’ review, without being aware of this conflict of interest (an error that is not risked with author-preferred reviewers).  In still other cases, editors simply have little opportunity for quality control because they are forced to settle for whoever is willing to volunteer an unpaid review (and no one except the editor knows the reviewer's identity, credibility or track record of reviewing quality).  At many traditional journals, because reviewer incentive is low, editors commonly end up sending a dozen or more requests before willing reviewers can be arranged, and so those reviewers are not the most ‘preferred’, and hence not the best possible, judges of the manuscript's quality.

Minimizing errors

All three of these peer-review errors can be minimized by open, author-directed peer review that combines identification of reviewer names in accepted papers with published declarations of ‘no conflict of interest’ (from both authors and reviewers), and with incentives for reviewers to work together with authors to improve their papers (Aarssen and Lortie 2012):

Open review.  Some researchers prefer to be anonymous reviewers because this enables them to voice criticism and recommend rejection of a paper without fear of later retaliation by the author. These concerns may be reasonable.  But those who have them should abstain from peer review, because such concerns are vastly outweighed by the cost that single-blind review imposes on the progress of science: when reviewers can hide behind anonymity, there is no deterrent against biased, poor-quality reviews with draconian recommendations for rejection.  Many people volunteer their time to review precisely because they can remain anonymous, not because they are nice people wanting to help advance science. Anonymous reviewing provides power over colleagues: power to approve manuscripts that support the reviewer's own research and to reject those that conflict with it.

With author-directed open peer review, by contrast, authors can seek and arrange review of their papers from the best reviewers and most reputable researchers in their fields, and can also avoid reviewers they suspect might be ‘competitors’ or likely to provide a ‘retaliatory’ review.  (Editors, in contrast, are usually neither sufficiently informed nor as inclined to avoid such biased reviewers.)  Having the endorsement of a top-quality, unbiased reviewer/researcher in hand when submitting to a journal (and acknowledged in the published paper) represents strong evidence in support of the paper's merit.  The quality and impact of an article can therefore be judged by who the acknowledged reviewers are (combined with the article's citation metrics), rather than by the usual inferior metric: the impact factor of the publishing journal.

No-conflict-of-interest declarations.  A conflict of interest occurs in peer review when the quality of a review is potentially compromised because circumstances exist that could limit the ability of the reviewer to be objective and unbiased.  With no-conflict-of-interest (NCOI) declarations, signed by both authors (Fig. 1) and reviewers (Fig. 2) and published together with accepted papers, readers can be confident that the paper was peer reviewed and endorsed legitimately. Authors, in this case, will be disinclined to request reviews from close colleagues, in order to avoid the perception of cronyism; many editors and readers of published papers know, or can easily discover, the identities of an author's previous collaborators and close associates.  In addition, with reviewers' names identified in this way, their reputations will be ‘on the line’.  Most, therefore, are likely to be honest, fair and rigorous in their reviews. Reviewers who wish to be regarded by journals, authors, and readers as having integrity will not want their names used as public endorsements of inferior papers, or of papers whose publication would benefit the reviewer's own research reputation. With this model, then, reviewers have the opportunity to develop reputations for high-quality, unbiased reviewing service.

Figure 1

Figure 2

Collaboration of authors and reviewers.  Most human efforts are better when people collaborate in a spirit of honesty and good will. This doesn't always come easily, and it is virtually non-existent under the traditional single-blind peer-review model.  When authors and reviewers collaborate to improve the quality of a paper, the result can sometimes include a reviewer response commentary published alongside the author's paper, if accepted.  This can offer readers insight beyond what the reviewed paper provides on its own, and it also gives the reviewer credit for the contribution, thereby serving, importantly, as an incentive to participate productively in the dissemination of the discovery that the author's paper represents.


References


Aarssen LW, Lortie CJ (2012) Science Open Reviewed: An online community connecting authors with reviewers for journals. Ideas in Ecology and Evolution 5: 78-83.

Fox CW, Burns SC, Muncy AD (2016) Author-suggested reviewers: Gender differences and influences on the peer review process at an ecology journal. Functional Ecology. DOI: 10.1111/1365-2435.12665


