Sunday, June 20, 2010

Subscribing to mailing lists in daily digest mode

I am currently on quite a few mailing lists that are relevant to my research topics in a broad sense, including Jmol, pdb-l and CCP4. Over the years, I have found it very handy to keep informed of a field by following its major mailing lists. However, I could easily get lost with so many posts each day on some active lists. So subscribing in daily digest mode, which many mailing-list management programs provide, has become my norm. That way, I receive only one (or a few) email(s) from a mailing list per day, and I can quickly browse the subject lines to decide whether to read further on a specific topic.

Among the three lists mentioned above, Jmol is quite active, with several core contributors, most notably Prof. Robert Hanson. The PDB mailing list has remarkably low volume, with fewer than one post per day. The CCP4 bulletin board is overall the best organized; I am impressed by the knowledge base accumulated in the posts there, which makes it a great resource in structural biology.

The AMBER mailing list and the Computational Chemistry List (CCL) are two other lists I subscribed to in the past. However, I could not switch them to daily digest mode; my mailbox was soon flooded by many unrelated posts, so I had to unsubscribe from both.

Journal impact factor and individual researcher h-index

The June 17, 2010 issue of Nature (Vol. 465, No. 7300) has extensive discussions on assessing the impact (influence/significance) of a journal or an individual researcher using quantitative metrics. Not surprisingly, there is no consensus. Over the past 20 years, the field of bibliometrics (scientometrics) has seen a ten-fold explosion in publications. In fact, the title of the editorial is "Assessing assessment"; the debate boils down to whether we should use a single quantitative metric, a combination of several such metrics, and which metrics to choose among the many possibilities.

Among the various viewpoints, I agree most with David Pendlebury, a citation analyst at Thomson Reuters: "No one enjoys being measured — unless he or she comes out on top." "Importantly, publication-based metrics provide an objective counterweight ... to bias of many kinds." Still, "there are dangers [in putting] too much faith in them. A quantitative profile should always be used to foster discussion, rather than to end it."

Among the many currently available metrics, it seems fair to say:
  • impact factor (IF), introduced in 1963, is the most widely used metric for measuring the impact of a journal. Nowadays, it is common to see a journal's IF on its home page; for example, Nucleic Acids Research currently has an IF of 6.878. However, as emphasized by Anthony van Raan, director of the Centre for Science and Technology Studies at Leiden University in the Netherlands: "You should never use the journal impact factor to evaluate research performance for an article or for an individual — that is a mortal sin." Indeed, "In 2005, 89% of Nature's impact factor was generated by 25% of the articles."
  • h-index, introduced in 2005 by Hirsch, is currently the most influential metric for quantifying the productivity and impact of an individual researcher. The h-index is "defined as the number of papers with citation number ≥h". It "gives an estimate of the importance, significance, and broad impact of a scientist’s cumulative research contributions." I found it very informative to read the original four-page PNAS paper to understand clearly why the h-index is defined that way and to be aware of some of its caveats.
  • the number of citations is arguably the most objective metric for measuring the impact of an individual publication.
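The h-index definition quoted above (the largest h such that h papers each have at least h citations) translates directly into a short computation over a researcher's citation counts. As a minimal sketch (the function name and example counts are my own, not from Hirsch's paper):

```python
def h_index(citations):
    """Return the largest h such that at least h papers have >= h citations each."""
    # Sort citation counts from highest to lowest, then walk down the list:
    # at 1-based rank r, the paper still "qualifies" if its count is >= r.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical researcher with five papers: three of them have
# at least 3 citations each, but only one has >= 4, so h = 3.
print(h_index([10, 8, 5, 1, 0]))  # -> 3
```

This also makes one of the caveats concrete: the h-index ignores how heavily the top papers are cited, so a researcher with counts [100, 100, 100] and one with [3, 3, 3] both have h = 3.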
Overall, it is easy to come up with a quantitative measure; the point is what that number really means. Along the same line, it is crucially important to know exactly how a metric is calculated. No metric can be perfect; however, a metric that is defined transparently and applied consistently is more objective and convincing than other means of assessment.