Research Article: Science deserves to be judged by its contents, not by its wrapping: Revisiting Seglen’s work on journal impact and research evaluation

Date Published: March 28, 2017

Publisher: Public Library of Science

Author(s): Lin Zhang, Ronald Rousseau, Gunnar Sivertsen, Lutz Bornmann.

http://doi.org/10.1371/journal.pone.0174205

Abstract

The scientific foundation for the criticism of the use of the Journal Impact Factor (JIF) in evaluations of individual researchers and their publications was laid between 1989 and 1997 in a series of articles by Per O. Seglen. His basic work has since influenced initiatives such as the San Francisco Declaration on Research Assessment (DORA), the Leiden Manifesto for research metrics, and The Metric Tide review on the role of metrics in research assessment and management. Seglen studied the publications of only 16 senior biomedical scientists. We investigate whether Seglen’s main findings still hold when the same methods are applied to a much larger group of Norwegian biomedical scientists with more than 18,000 publications. Our results support and add new insights to Seglen’s basic work.

Partial Text

The average citation impact of a journal is only a weak predictor of the citation impact of individual publications in that journal because, among other reasons, article citedness tends to be highly skewed among publications [1, 2]. Nevertheless, the Journal Impact Factor (JIF) is widely used for the evaluation of individual researchers and their articles. This practice has recently prompted a series of well-organized reactions from scientific communities. First came the San Francisco Declaration on Research Assessment [3], which was initiated by the American Society for Cell Biology and now has more than 13,000 signatories across the world. Then, published in Nature in April 2015 by experts in bibliometrics and research evaluation, came the Leiden Manifesto for research metrics, an annotated list of ten principles to guide research evaluation [4]. A few months later appeared The Metric Tide report [5], which provided the Higher Education Funding Council for England with an independent review of the role of metrics in research assessment and management.
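The skewness argument can be illustrated with a small simulation. The sketch below is not from the paper; the journal count, article count, and lognormal parameters are illustrative assumptions. It shows that when within-journal citation counts are highly skewed, a journal's mean citation rate (a stand-in for the JIF) correlates only weakly with the citedness of its individual articles:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical setup: 50 journals, 200 articles each.
# Within-journal citation counts are drawn from a lognormal
# distribution, i.e. highly skewed, as Seglen observed for
# real citation data.
n_journals, n_articles = 50, 200
journal_mu = rng.normal(1.0, 0.5, n_journals)  # journal-level location
sigma = 1.2                                    # within-journal skew

article_cites = np.array([
    rng.lognormal(mean=mu, sigma=sigma, size=n_articles)
    for mu in journal_mu
])  # shape: (n_journals, n_articles)

# "Impact factor" proxy: each journal's mean citation rate.
journal_mean = article_cites.mean(axis=1)

# Correlate each article's citedness with its journal's mean.
x = np.repeat(journal_mean, n_articles)
y = article_cites.ravel()
r = np.corrcoef(x, y)[0, 1]
print(f"Pearson r between journal mean and article citedness: {r:.2f}")
```

With these assumed parameters, most of the citation variance sits within journals rather than between them, so the journal mean explains only a small share of article-level variation, in line with the weak-predictor claim above.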

Seglen studied 16 principal investigators in biomedical research at one Norwegian institution, authors of a total of 907 publications. To assemble a group of authors working in fields similar to Seglen’s, we include only Norwegian scientists active in the biomedical sciences: 899 researchers working at, in principle, all Norwegian institutions. The total number of unique publications is 18,280, with citations counted until the end of 2015.

As our database consists of authors, we have a complete data set (with respect to the WoS) for each author. When studying articles we de-duplicated records as articles can be co-authored by more than one Norwegian biomedical researcher.

We confirm Seglen’s observation that there is no consistent positive relationship between the citedness of an individual article and the impact factor of the journal in which it is published. Our study of a far larger population of researchers than Seglen was able to study thirty years ago yielded only a marginally higher correlation.

 
