Research Article: The Relationship of Previous Training and Experience of Journal Peer Reviewers to Subsequent Review Quality

Date Published: January 30, 2007

Publisher: Public Library of Science

Author(s): Michael L Callaham, John Tercier, Richard Hornung

Abstract:

Background: Peer review is considered crucial to the selection and publication of quality science, but very little is known about the previous experiences and training that might identify high-quality peer reviewers. The reviewer selection processes of most journals, and thus the qualifications of their reviewers, are ill defined. More objective selection of peer reviewers might improve the journal peer review process and thus the quality of published science.

Methods and Findings: 306 experienced reviewers (71% of all those associated with a specialty journal) completed a survey of past training and experiences postulated to improve peer review skills. Reviewers performed 2,856 reviews of 1,484 separate manuscripts during a four-year study period, all prospectively rated on a standardized quality scale by editors. Multivariable analysis revealed that most variables, including academic rank, formal training in critical appraisal or statistics, or status as principal investigator of a grant, failed to predict performance of higher-quality reviews. The only significant predictors of quality were working in a university-operated hospital versus other teaching environment and relative youth (under ten years of experience after finishing training). Being on an editorial board and doing formal grant (study section) review were each predictors for only one of our two comparisons. However, the predictive power of all variables was weak.

Conclusions: Our study confirms that there are no easily identifiable types of formal training or experience that predict reviewer performance. Skill in scientific peer review may be as ill defined and hard to impart as is “common sense.” Without a better understanding of those skills, it seems unlikely journals and editors will be successful in systematically improving their selection of reviewers.
This inability to predict performance makes it imperative that all but the smallest journals implement routine review-rating systems to monitor the quality of their reviews (and thus the quality of the science they publish).

Partial Text: Most authors and editors would agree that the expertise of those who perform peer reviews for scientific journals strongly influences the quality of what is disseminated to the scientific community to become the foundation of future research [1]. Nonetheless, despite 20 years of research presented at five International Conferences on Peer Review, there has been surprisingly little study of what training and qualities are necessary to function as a proficient scientific reviewer [2]. Even less is known about how peer reviewers should be selected, and yet all journals routinely appoint new reviewers whose true quality is often revealed only after a number of reviews.

Annals of Emergency Medicine is the leading journal in the specialty of emergency medicine and ranks in the top 11% in citation frequency among the 5,876 science and medical journals indexed by the ISI [7]. All reviewers at this journal are blinded to the authors and institutions of the papers they review.

At the time of the survey there were 460 reviewers in the journal’s pool of permanent reviewers. (Reviewers invited only once as a “guest” to review a particular manuscript were not included.) Of this number, 30 reviewers had performed no reviews during the study period and were excluded, leaving 430 who were sent the survey.

Our results show that, unfortunately, almost none of the experiences and training that might logically be thought to make for a high-quality reviewer (such as training in critical appraisal, academic rank, having been a funded principal investigator, serving on an IRB, etc.) actually predict subsequent performance of higher-quality reviews (Tables 2–5). The multivariable analysis (which controlled for confounders) showed that, in the comparison of acceptable versus unacceptable reviews, having participated in grant review and working in a university environment each predicted a better review; there was a nonsignificant trend in favor of a degree in statistics. None of the other factors were predictive, except for serving on an IRB, which paradoxically was associated with lower-quality reviews. Using the outcome of excellent versus satisfactory reviews, only serving on an editorial board and a university environment were associated with better-quality reviews. Again, IRB service was paradoxically associated with worse scores.
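To make the kind of association being tested concrete, the following is a minimal sketch of how a single binary predictor (e.g., university-hospital setting) could be related to a binary review-quality outcome (acceptable versus unacceptable) via an odds ratio with a 95% confidence interval. The counts are invented for illustration and are not the study's data; the study itself used a multivariable model to control for confounders, which a univariable odds ratio like this does not do.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI from a 2x2 table.
    a = exposed & good review, b = exposed & poor review,
    c = unexposed & good review, d = unexposed & poor review.
    Uses the standard error of log(OR): sqrt(1/a + 1/b + 1/c + 1/d)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Invented counts: 120 university-based reviewers (80 acceptable reviews),
# 100 non-university reviewers (50 acceptable reviews).
or_, lo, hi = odds_ratio_ci(80, 40, 50, 50)
print(f"OR = {or_:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")  # OR = 2.00, 95% CI [1.16, 3.45]
```

Because the confidence interval excludes 1, this hypothetical predictor would be judged significant; the study's finding that even its significant predictors had weak predictive power corresponds to odds ratios close to 1 or intervals that barely exclude it.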


