Sunday, August 28, 2011

Blog moving...

Because I've been blogging more often at Eloquent Science than here (often on issues relevant to the peer-review process and publication), I encourage you to visit the Eloquent Science blog at http://eloquentscience.com/blog.

Saturday, November 20, 2010

Posting reviews online does not affect review quality, but it does self-select a certain type of reviewer


Daphne Zaras forwarded me this article in BMJ (British Medical Journal): "Effect on peer review of telling reviewers that their signed reviews might be posted on the web: randomised controlled trial" (BMJ 2010; 341:c5729 doi: 10.1136/bmj.c5729).

Excerpted from the abstract:

Objectives To see whether telling peer reviewers that their signed reviews of original research papers might be posted on the BMJ’s website would affect the quality of their reviews.

Intervention Consecutive eligible papers were randomised either to have the reviewer’s signed report made available on the BMJ’s website alongside the published paper (intervention group) or to have the report made available only to the author—the BMJ’s normal procedure (control group). The intervention was the act of revealing to reviewers—after they had agreed to review but before they undertook their review—that their signed report might appear on the website.

Results 558 manuscripts were randomised, and 471 manuscripts remained after exclusions. Of the 1039 reviewers approached to take part in the study, 568 (55%) declined. Two editors’ evaluations of the quality of the peer review were obtained for all 471 manuscripts, with the corresponding author’s evaluation obtained for 453. There was no significant difference in review quality between the intervention and control groups (mean difference for editors 0.04, 95% CI −0.09 to 0.17; for authors 0.06, 95% CI −0.09 to 0.20). Any possible difference in favour of the control group was well below the level regarded as editorially significant. Reviewers in the intervention group took significantly longer to review (mean difference 25 minutes, 95% CI 3.0 to 47.0 minutes).

Conclusion Telling peer reviewers that their signed reviews might be available in the public domain on the BMJ’s website had no important effect on review quality. Although the possibility of posting reviews online was associated with a high refusal rate among potential peer reviewers and an increase in the amount of time taken to write a review, we believe that the ethical arguments in favour of open peer review more than outweigh these disadvantages.


A decline rate of 55% is much higher than I've experienced at the Electronic Journal of Severe Storms Meteorology (EJSSM), also an open peer-review journal, where I serve as Assistant Editor. I suspect that some reviewers did not want their names associated with the paper. Interestingly, we have occasionally seen the same thing at EJSSM: reviewers decline when we tell them that we post signed reviews online.

That review quality did not differ whether or not the reviews were open is an interesting conclusion, though perhaps not a surprising one. Among the reviewers willing to participate in this experiment, those who felt uncomfortable or insecure about posting their reviews online probably have a different personality type than those who don't care what the world or the authors think of them. The high decline rate therefore self-selects reviewers who are willing to engage in this activity, which might explain why the reviews did not differ significantly between the two groups.

Saturday, August 21, 2010

How MWR Editors make decisions


If you want to read more about the decision-making process at MWR and how it played out for a sample of 500 manuscripts, check out this recently published article in the August 2010 issue of Scientometrics.

Schultz, D. M., 2010: Are three heads better than two? How the number of reviewers and editor behavior affect the rejection rate. Scientometrics, 84, 277-292. doi: 10.1007/s11192-009-0084-0.

Abstract

Editors of peer-reviewed journals obtain recommendations from peer reviewers as guidance in deciding upon the suitability of a submitted manuscript for publication. To investigate whether the number of reviewers used by an editor affects the rate at which manuscripts are rejected, 500 manuscripts submitted to Monthly Weather Review during 15.5 months in 2007–2008 were examined. Two and three reviewers were used for 306 and 155 manuscripts, respectively (92.2% of all manuscripts). Rejection rates for initial decisions and final decisions were not significantly different whether two or three reviewers were used. Manuscripts with more reviewers did not spend more rounds in review or have different rejection rates at each round. The results varied by editor, however, with some editors rejecting more two-reviewer manuscripts and others rejecting more three-reviewer manuscripts. Editors described using their scientific expertise in the decision-making process, either in determining the number of reviews to be sought or in making decisions once the reviews were received, approaches that differ from that of relying purely upon reviewer agreement as reported previously in the literature. A simple model is constructed for three decision-making strategies for editors: rejection when all reviewers recommend rejection, rejection when any reviewer recommends rejection, and rejection when a majority of reviewers recommend rejection. By plotting the probability of reviewer rejection against the probability of editor rejection, the decision-making process can be graphically illustrated, demonstrating that, for this dataset, editors are likely to reject a manuscript when any reviewer recommends rejection.
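The three decision-making strategies described in the abstract are easy to make concrete. Here is a minimal sketch (my own illustration, not code from the paper), assuming each of n reviewers independently recommends rejection with probability p:

```python
# Sketch of the three editor decision strategies in Schultz (2010):
# reject when ALL reviewers recommend rejection, when ANY reviewer does,
# or when a MAJORITY do. Assumes n independent reviewers, each
# recommending rejection with probability p.
from math import comb


def p_reject_all(p, n):
    """Editor rejects only when all n reviewers recommend rejection."""
    return p ** n


def p_reject_any(p, n):
    """Editor rejects when at least one reviewer recommends rejection."""
    return 1 - (1 - p) ** n


def p_reject_majority(p, n):
    """Editor rejects when more than half of the reviewers recommend rejection."""
    k_min = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(k_min, n + 1))


if __name__ == "__main__":
    for p in (0.2, 0.4, 0.6):
        for n in (2, 3):
            print(f"p={p}, n={n}: all={p_reject_all(p, n):.3f}, "
                  f"any={p_reject_any(p, n):.3f}, "
                  f"majority={p_reject_majority(p, n):.3f}")
```

Plotting these three curves against p reproduces the kind of graphical illustration the abstract describes: for any p, the "any reviewer" strategy rejects far more manuscripts than the "all reviewers" strategy, which is consistent with the paper's finding that editors tend to reject when any reviewer recommends rejection.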

Sunday, August 1, 2010

Social media as a source of meteorological observations


Appearing in the August 2010 issue of Monthly Weather Review:

Otto Hyvärinen and Elena Saltikoff of the Finnish Meteorological Institute have an article about their novel uses of Flickr to collect and verify hail reports.

http://journals.ametsoc.org/doi/abs/10.1175/2010MWR3270.1

Abstract

An increasing number of people leave their mark on the Internet by publishing personal notes (e.g., text, photos, videos) on web-based services such as Facebook and Flickr. This creates a vast source of information that could be utilized in meteorology, for example, as a complement to traditional weather observations. Photo-sharing services offer an increasing amount of useful data, as modern mobile devices can automatically include coordinates and time stamps on photos, and users can easily tag them for content. In this study, different weather-related photos and their metadata were accessed from the photo-serving service Flickr, and their reliability was assessed. Case studies of hail detection were then performed, the position of hail detected in the atmosphere by radar being compared with positions of Flickr photos depicting hail on the ground. As a result of this preliminary study, we think that further exploration of the use of Flickr photographs is warranted, and the consideration of other social media as data sources can be recommended.

Review Articles in Monthly Weather Review

Monthly Weather Review has a history of publishing great review articles. As Chief Editor, I wanted to see more reviews published. Here are some that have appeared recently.

Houze, Robert A., 2010: Clouds in Tropical Cyclones. Mon. Wea. Rev., 138, 293-344.

van Leeuwen, Peter Jan, 2009: Particle Filtering in Geophysical Systems. Mon. Wea. Rev., 137, 4089-4114.

Bocquet, Marc, Carlos A. Pires, and Lin Wu, 2010: Beyond Gaussian statistical modeling in geophysical data assimilation. Mon. Wea. Rev., 138, to appear in the August 2010 issue.

Thursday, April 22, 2010

Rejection rates in journals

After realizing that Monthly Weather Review rejected over 30% of the manuscripts we received, I was curious about the rejection rates at other journals publishing in the atmospheric sciences. So, I emailed all the editors I could find; the results come from those who responded.

The results of this survey have now appeared in print in the Bulletin of the AMS:
Rejection Rates for Journals Publishing in the Atmospheric Sciences.

Prof. Roger Pielke Sr. posted on his blog a discussion of some of the issues I raised within the article here.

Although I agreed with nearly everything he said, I wanted to clarify one point that I felt did not reflect my opinion. Prof. Pielke allowed me to respond to his post here.

Enjoy reading the exchange!

Sunday, March 14, 2010

Responsibilities of Reviewers

The AMS does not have guidelines for the ethical obligations of reviewers, authors, and editors. Until it does, I highly recommend the American Geophysical Union's guidelines.

Here are the guidelines for reviewers.

  • Inasmuch as the reviewing of manuscripts is an essential step in the publication process, every scientist has an obligation to do a fair share of reviewing.
  • A chosen reviewer who feels inadequately qualified or lacks the time to judge the research reported in a manuscript should return it promptly to the editor.
  • A reviewer of a manuscript should judge objectively the quality of the manuscript and respect the intellectual independence of the authors. In no case is personal criticism appropriate.
  • A reviewer should be sensitive even to the appearance of a conflict of interest when the manuscript under review is closely related to the reviewer's work in progress or published. If in doubt, the reviewer should return the manuscript promptly without review, advising the editor of the conflict of interest or bias.
  • A reviewer should not evaluate a manuscript authored or co-authored by a person with whom the reviewer has a personal or professional connection if the relationship would bias judgment of the manuscript.
  • A reviewer should treat a manuscript sent for review as a confidential document. It should neither be shown to nor discussed with others except, in special cases, to persons from whom specific advice may be sought; in that event, the identities of those consulted should be disclosed to the editor.
  • Reviewers should explain and support their judgments adequately so that editors and authors may understand the basis of their comments. Any statement that an observation, derivation, or argument had been previously reported should be accompanied by the relevant citation.
  • A reviewer should be alert to failure of authors to cite relevant work by other scientists. A reviewer should call to the editor's attention any substantial similarity between the manuscript under consideration and any published paper or any manuscript submitted concurrently to another journal.
  • Reviewers should not use or disclose unpublished information, arguments, or interpretations contained in a manuscript under consideration, except with the consent of the author.

Tuesday, February 16, 2010

Why the wrong papers get published


I recently came across this article by Peter J. Rousseeuw.

Rousseeuw, P. J., 1991: Why the wrong papers get published. Chance, 4 (1), 41-43.

The premise is that reviewers are fallible, and the consequences for the peer-review process quickly become disastrous.

Rousseeuw derives an equation for the probability of a good paper getting published, given certain inputs. Assuming that a reviewer can recognize a good paper 70% of the time and a bad paper 70% of the time, the probability of good papers getting published is 37%.
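A figure in this neighborhood can be reproduced with a back-of-the-envelope Bayes'-rule calculation. The sketch below is my own illustration, not Rousseeuw's exact derivation: it assumes a single reviewer gates publication and that 20% of submissions are good, both hypothetical inputs.

```python
# Illustrative Bayes'-rule sketch (my assumptions, not necessarily
# Rousseeuw's model). A reviewer accepts a good paper with probability
# `sensitivity` and rejects a bad paper with probability `specificity`.
# Given an assumed base rate of good submissions, compute the
# probability that an accepted (published) paper is actually good.

def p_good_given_accept(base_rate_good, sensitivity, specificity):
    """P(paper is good | reviewer accepts), by Bayes' rule."""
    p_accept_and_good = sensitivity * base_rate_good
    p_accept_and_bad = (1 - specificity) * (1 - base_rate_good)
    return p_accept_and_good / (p_accept_and_good + p_accept_and_bad)


# With 70% reviewer accuracy and a hypothetical 20% base rate of good
# submissions, only about 37% of accepted papers are good.
print(round(p_good_given_accept(0.20, 0.70, 0.70), 3))  # → 0.368
```

The point survives any reasonable choice of base rate: with fallible reviewers, a substantial share of what gets published is not the good work, and a substantial share of the good work gets rejected.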

Rousseeuw also recognizes that the difficulty of finding the best reviewers (or their unavailability to perform the review) can drive this percentage even lower. I have been fortunate that most of my first-choice reviewers accept my invitations to perform reviews, but I know other editors who have to work hard to find even adequate reviewers for manuscripts.

To sum up, Rousseeuw argues that, from an author's perspective, the apparent solution would be to make more submissions. Fortunately, he concludes, "In the long run it pays to strive for quality, because it is easy to acquire a bad reputation." I couldn't agree more.