Give me some credit! How to improve peer-review
Have you ever read a scientific paper and thought: “Holy cow, what a pile of garbage”? OK, I do this after every draft I write, but luckily, I like to think that I usually get my act together before these little buggers see the light of day (convince yourself otherwise here). More worrisome, however, is that I have similar reactions to some papers that are done, dusted, and published, often in relatively prestigious journals; sometimes journals they have no business in whatsoever but that were clearly chosen for impact factor rather than appropriateness. Sound familiar? If so, continue reading…
A couple of months ago, I was having a lunch-break rant about the matter above to my esteemed friend and colleague, who took it from there and went on an even bigger rant about the same issue. Clearly, I had struck a chord, so the following afternoon I found myself wondering why we see papers published that upset the good scientist in us to such an extent that we forget to eat our chicken wraps and end up with cold risotto. The answer wasn’t too complex: if papers with serious methodological errors, flawed conclusions, faulty statistics, or meaningless results are published in journals they don’t belong in, then the review of these articles has clearly been too sloppy or inappropriately favorable (you scratch my back and I'll scratch yours -- works in primates, works in humans). God, how I'd like to know who reviewed some of these papers, for Darwin's sake.
Wait, that’s the solution, right? If a paper of inferior quality is published, I think the scientific community has the right to know who is responsible for it. At the same time, nobody is hurt if reviewers’ names are published along with a high-quality paper. In fact, it would give due credit to a thorough effort (rather than the bullshit about "we thank two anonymous reviewers for their helpful comments" that nobody wants to read) and wipe out a whole range of problems, from biases in the review process to the strain on reviewers' time budgets. So I spent an evening getting my thoughts straight (wishful thinking 1) and wrote a neat (wishful thinking 2) little perspective on what I think the peer review process should be like (wishful thinking 3), which was subsequently honed by said colleague with the chicken wrap. I then discovered that some other folks had had the exact same thoughts a little while ago, which is both a bit disappointing and a bit flattering (frankly, comparing my work to the Wicherts paper is a bit like comparing Brasil to Germany this morning, with me being Brasil... never mind). Consequence: the carefully written words remained in a folder on my computer.
Over the last week, however, the debate about peer-review was stirred up once more, with two paper retractions from Nature, a paper on biases in the peer-review process, and an insightful blog post I picked up on the web. I thought I’d chime in, and rather than pursuing the publication of something that isn’t essentially novel, I decided to publish the little spiel here on my blog. Hopefully, it will stimulate some discussion, debate, or input. Specifically, I would like to know your opinion on why methods such as the one proposed below (or by others) have not found recognition among publishers. Would you submit to, or review for, a journal that opens the evaluation process to the scientific community? Is it a flawed approach? Are there any insurmountable obstacles I fail to see? Or is the scientific/publishing community simply too set in its ways (much like some industries we like to rant about) and happy to bang its head against the same brick wall over and over again? Please, go ahead and enlighten me.
Give me some credit! How to improve peer-review
Simon J. Brandl1,2 & Brett M. Taylor1,2
Peer-review is the backbone of scientific publication. Yet it relies on the willingness of scientists to spend valuable time on reviewing (with no tangible reward) as well as on their scientific rigor, integrity, and expertise (with little to no consequence if the latter is lacking), and it is therefore a consistent subject of debate.
We posit that post-acceptance publication of the reviewers’ identities and comments along with every original article would provide a solution to both problems. Reviewers’ comments often offer invaluable input, significantly enhancing the quality of many manuscripts. This should be credited on the paper. Conversely, articles of poor quality can slip through the peer-review process due to a lack of professional scrutiny or scientific integrity on the part of reviewers, which may remain undetected by the handling editor. This should likewise be visible.
While this breaches the long-standing paradigm of reviewer anonymity post-acceptance, it does not discourage rigorous reviews resulting in rejection, as reviewers’ identities are only released upon acceptance. In turn, it allows researchers to build their reputation within the scientific community through thorough but fair reviews (thus heightening the incentive to review), while decreasing the potential for nepotism and carelessness in the review process (thus safeguarding high-quality scientific output). When reviews conflict (one suggesting acceptance, the other rejection), the rejecting reviewer may indicate whether or not they choose to be named on the article if the editor decides to accept the manuscript for publication.
At a time when the ever-increasing number of scientific articles imposes a threefold strain on scientists’ time budgets (more articles to review, an exponentially growing library of articles to consider, and the necessity to increase personal output to remain competitive), we suggest that crediting reviewers and their input on published articles would drastically increase the merit of reviewing manuscripts, raise the qualitative bar for publication, and facilitate assessments of published articles.
1ARC Centre of Excellence for Coral Reef Studies, James Cook University, Townsville, Queensland 4811, Australia
2School of Marine and Tropical Biology, James Cook University, Townsville, Queensland 4811, Australia
P.S. This blog post was peer-reviewed by Brett and my humble self. Due to inherently nepotistic predispositions, sloppiness, and biases towards ourselves, potential qualitative shortcomings and crucial flaws are preprogrammed and intended in order to make a point.