Journal Club Review – Reviewing Peer Review

January’s journal club was led by Grant Hisao (Postdoc in Biochemistry). This journal club review was written by Jenny Bratburd (Graduate Student in Microbiology).

Last week’s journal club explored a key policy question for science: how can we improve peer review? Peer review is a core practice in academia, in which experts critique a paper before it can be published in a scholarly journal. Ideally, the feedback improves the paper or keeps unsound science out of trusted journals. Today, editors typically solicit about three reviewers to weigh in on a paper, anonymously and without compensation. With an increasing volume of publications (by some estimates, global scientific output doubles every 9 years), complaints about the inefficiency and quality of peer review abound. Many scientists and publishers are looking for a better way to peer review.

One model for peer review incorporates open reports, in which the reviews are published alongside the paper in the hope of increasing transparency. Nature Communications has made this optional in the last few years, and if you’re the type of person who can get sucked into hours of reading anonymous comments on the internet, prepare to face the vortex. In all seriousness, these reviews provide an interesting perspective on the papers themselves and valuable models for trainees learning how reviews should be written. They likely encourage reviewers to write more thoughtful, or at least less vitriolic, comments. Unfortunately, while the increase in transparency is laudable, open reports seem unlikely to speed up the review process: they give reviewers no additional incentive and may even make it marginally harder to get reviewers to agree to look at a paper in the first place.

Another option is open identities, in which reviewers’ identities are published along with the manuscript. The idea here is that reviewers could finally get credit for their work (in this case, academic credit only, although journals are experimenting with paying reviewers). Open identities and open reports can be combined to show exactly who said what. This is the model used by F1000, an open research publishing site, which encourages commenting so that authors can improve their work. Though this finally allows reviewers to get credit, the incentives may still be weak for most reviewers: will reviews, anonymous or not, carry any weight in a professor’s tenure package? One staff scientist raised the question of how someone paid on grant money is supposed to carve out time for reviewing.

Another emerging peer review model is crowd review, currently being tested by Synlett. Editors assemble a group of around 80 members who review papers collectively. Each paper is available to the crowd for only 2-3 days, and reviewers can endorse or disagree with each other’s comments. By having many people scour the paper at once, crowd review may deliver a more thorough review on a faster timeline than traditional review. One reviewer suggests they spend only 1-2 hours per paper in this system (estimates for traditional review put the median at 5 hours and the average at 8.5 hours per paper), and having a large pool of reviewers ready to finish a paper within 72 hours may ease some timeliness issues: waiting for a reviewer to agree to take a paper, waiting for them to find time to sit down and read it, waiting for them to write up their comments, and so on. A large pool of reviewers may also discourage unconstructive criticism and offset sparse, low-quality reviews, since more eyes catch more details.

On the other hand, crowd review could introduce new flaws into peer review (note that at Synlett, authors can still opt for a combination of crowd review and traditional single reviewers). The “wisdom of the crowds” works best when each individual acts independently; collectively, crowds might encourage groupthink, or a scientific version of the apocryphal Kitty Genovese story, in which no one reviewer puts in the effort to fully suss out the flaws. But if you want to know whether crowds can be usefully managed to curate knowledge, take a closer look at those citations linked above, from everyone’s favorite open-access, crowd-sourced reference, Wikipedia. Crowd review is the greatest departure from traditional review of the models discussed here, but in my opinion it offers the most promise for a swift, thorough review process that delivers the greatest amount of helpful feedback. And of course it warms my grad student heart to imagine eighty people taking time out of their day to read my manuscript.

Our journal club discussion raised many concerns about publishing and peer review, including the speed of the process, the quality of reviews, transparency, and ways to reduce bias and sabotage. Though peer review may seem like an esoteric academic process, these publishing decisions ultimately shape the quality of and trust in science, and our ability to expand knowledge in an era of unprecedented scientific output.