Posting reviews online does not affect review quality, but it does select for a certain type of reviewer
Daphne Zaras forwarded me this article in BMJ (British Medical Journal): "Effect on peer review of telling reviewers that their signed reviews might be posted on the web: randomised controlled trial" (BMJ 2010; 341:c5729 doi: 10.1136/bmj.c5729).
Excerpted from the abstract:
Objectives To see whether telling peer reviewers that their signed reviews of original research papers might be posted on the BMJ’s website would affect the quality of their reviews.
Intervention Consecutive eligible papers were randomised either to have the reviewer’s signed report made available on the BMJ’s website alongside the published paper (intervention group) or to have the report made available only to the author—the BMJ’s normal procedure (control group). The intervention was the act of revealing to reviewers—after they had agreed to review but before they undertook their review—that their signed report might appear on the website.
Results 558 manuscripts were randomised, and 471 manuscripts remained after exclusions. Of the 1039 reviewers approached to take part in the study, 568 (55%) declined. Two editors’ evaluations of the quality of the peer review were obtained for all 471 manuscripts, with the corresponding author’s evaluation obtained for 453. There was no significant difference in review quality between the intervention and control groups (mean difference for editors 0.04, 95% CI −0.09 to 0.17; for authors 0.06, 95% CI −0.09 to 0.20). Any possible difference in favour of the control group was well below the level regarded as editorially significant. Reviewers in the intervention group took significantly longer to review (mean difference 25 minutes, 95% CI 3.0 to 47.0 minutes).
Conclusion Telling peer reviewers that their signed reviews might be available in the public domain on the BMJ’s website had no important effect on review quality. Although the possibility of posting reviews online was associated with a high refusal rate among potential peer reviewers and an increase in the amount of time taken to write a review, we believe that the ethical arguments in favour of open peer review more than outweigh these disadvantages.
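For readers who want to see the logic behind the "no significant difference" statement, here is a minimal sketch of how a 95% confidence interval for a difference in means is typically computed (a Welch t-interval). The scores, group sizes, and 1–5 scale below are simulated stand-ins that I invented for illustration, not the study's data, and the BMJ authors' actual analysis may differ.

```python
# Minimal sketch: Welch 95% CI for the difference in mean review-quality
# scores between intervention and control groups. All numbers here are
# simulated stand-ins, NOT the BMJ study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical quality scores on an assumed 1-5 scale
intervention = rng.normal(loc=3.20, scale=0.6, size=235)
control = rng.normal(loc=3.16, scale=0.6, size=236)

diff = intervention.mean() - control.mean()
v1 = intervention.var(ddof=1) / len(intervention)
v2 = control.var(ddof=1) / len(control)
se = np.sqrt(v1 + v2)

# Welch-Satterthwaite degrees of freedom
df = (v1 + v2) ** 2 / (v1 ** 2 / (len(intervention) - 1)
                       + v2 ** 2 / (len(control) - 1))
t_crit = stats.t.ppf(0.975, df)
lo, hi = diff - t_crit * se, diff + t_crit * se

print(f"mean difference {diff:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
# If the interval straddles zero (as it did for review quality in the
# study), the difference is not statistically significant at the 5% level.
# The time-to-review interval (3.0 to 47.0 minutes) excludes zero, which
# is why that difference was reported as significant.
```

Run as-is, this prints a small difference whose interval includes zero, mirroring the paper's qualitative finding for review quality.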
Fifty-five percent declining is much higher than I've experienced at the Electronic Journal of Severe Storms Meteorology (EJSSM), also an open peer-review journal, where I am Assistant Editor. I suspect that some reviewers didn't want their names associated with the papers they reviewed. Interestingly, we have seen this on occasion at EJSSM: reviewers decline when we tell them that we post signed reviews online.
That review quality did not differ whether or not the reviews were open is an interesting conclusion, though perhaps not a surprising one. Reviewers who felt uncomfortable or insecure about having their reviews posted online probably have a different personality type than those who don't care what the world or the authors think of them, and the former were presumably among the many who declined. The high decline rate therefore self-selects for people who are willing to engage in this activity, which might explain why the reviews were not significantly different between the two groups.