Among the various viewpoints, I agree most with David Pendlebury, citation analyst at Thomson Reuters: "No one enjoys being measured — unless he or she comes out on top." "Importantly, publication-based metrics provide an objective counterweight ... to bias of many kinds." Still, "there are dangers [in putting] too much faith in them. A quantitative profile should always be used to foster discussion, rather than to end it."
Among the many currently available metrics, it seems fair to say:
- Impact factor (IF), introduced in 1963, is the most important metric for measuring the impact of a journal. Nowadays it is common to see a journal's IF on its home page; for example, Nucleic Acids Research currently has an IF of 6.878. However, as emphasized by Anthony van Raan, director of the Centre for Science and Technology Studies at Leiden University in the Netherlands: "You should never use the journal impact factor to evaluate research performance for an article or for an individual — that is a mortal sin." Indeed, "In 2005, 89% of Nature’s impact factor was generated by 25% of the articles." (The standard two-year IF formula is sketched after this list.)
- h-index, introduced by Hirsch in 2005, is currently the most influential metric for quantifying the productivity and impact of an individual researcher. An h-index is "defined as the number of papers with citation number ≥h"; it "gives an estimate of the importance, significance, and broad impact of a scientist’s cumulative research contributions." I found it very informative to read the original, four-page PNAS paper to understand clearly why the h-index is defined that way and to be aware of some of its caveats. (A small code sketch of the computation also follows this list.)
- Number of citations is undoubtedly the most objective metric for measuring the impact of a single publication.
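For readers who want the definition behind the IF bullet above, the standard two-year impact factor can be written as follows (my notation, not taken from any of the sources quoted here): the IF of a journal in year $y$ is

$$\mathrm{IF}_y = \frac{C_y(y-1) + C_y(y-2)}{N_{y-1} + N_{y-2}},$$

where $C_y(y-k)$ is the number of citations received in year $y$ by items the journal published in year $y-k$, and $N_{y-k}$ is the number of citable items it published in that year. This makes van Raan's warning easy to see: the numerator can be dominated by a handful of highly cited articles, as in the Nature example above.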
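And to make the h-index definition concrete, here is a minimal Python sketch (mine, not Hirsch's) that computes the h-index from a list of per-paper citation counts:

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    # Rank papers by citation count, highest first; the h-index is the
    # last 1-based rank at which the citation count still meets the rank.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Example: a researcher whose papers are cited 12, 9, 7, 2, and 1 times
# has three papers with at least 3 citations each, so h = 3.
print(h_index([12, 9, 7, 2, 1]))  # prints 3
```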