Recent ranking of the 'top' 2% of scientists by Stanford University
Scientists at Stanford University recently ranked the ‘top’ 2% of scientists in a variety of fields.
The ranking contained an up-to-date list of the most highly cited scientists in these disciplines.
That is, the list consists of the top 100,000 scientists ranked by an aggregate of numerical indicators: in this case, those scientists whose papers’ citation counts lie in the top 2% of their respective fields.
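The percentile-based selection described above can be sketched in a few lines. This is only an illustration of how a ‘top 2%’ citation cutoff works in principle, with simulated numbers; it is not the ranking’s actual methodology, which aggregates several indicators.

```python
# Illustration only: deriving a "top 2%" cutoff from a distribution of
# citation counts. All numbers are simulated, not real data.
import random

random.seed(0)
# Simulate citation counts for 10,000 scientists in one field
# (heavily skewed, as real citation distributions tend to be).
citations = [int(random.paretovariate(1.5) * 10) for _ in range(10_000)]

# The cutoff is the citation count at the 98th percentile.
ranked = sorted(citations)
cutoff = ranked[int(0.98 * len(ranked))]

top_2_percent = [c for c in citations if c >= cutoff]
print(f"cutoff: {cutoff} citations; {len(top_2_percent)} scientists above it")
```

Note that the cutoff is relative: it depends entirely on the field’s citation distribution, not on any absolute standard of quality.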
The presence of Indian scientists on this list has garnered substantial public attention, accompanied by institutional press releases, news features, and award citations.
Given this fanfare, it’s important that we understand what the top 2% ranking system actually measures and how well the measure correlates with real-world scientific achievement.
Limitations and potential drawbacks of relying on quantitative metrics
First, ranking systems and indices rely almost solely on quantitative data derived from citation profiles.
They don’t – can’t – evaluate the quality or impact of a given piece of scientific work.
For example, a scientist with 3,500 citations in biology could have accrued half of them from ‘review’ articles, i.e. articles that survey other published work instead of reporting original research. This person may nonetheless rank among the top 2% of scientists by citations, while a scientist with 600 citations, all from original research, may fall outside it.
For another example, a biotechnologist with 700 citations from 28 papers published in 1971–2015 and a scientist with a similar number of citations from 33 papers published in 2004–2020 may both be in the top 2%. The latter has an advantage: the rapid growth of academic publishing, and electronic access to it, has inflated citation counts over time.
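The two careers in this example can be compared on a per-year basis using the figures quoted above. The averaging scheme here is our own illustration of the point, not a method the ranking uses:

```python
# Illustration: identical citation totals can hide very different
# publishing spans. Figures follow the example in the text.

def citations_per_year(total_citations: int, first_year: int, last_year: int) -> float:
    """Average citations per year over an active publishing span."""
    span = last_year - first_year + 1
    return total_citations / span

veteran = citations_per_year(700, 1971, 2015)  # 45-year span
recent = citations_per_year(700, 2004, 2020)   # 17-year span

print(f"veteran: {veteran:.1f}/yr, recent: {recent:.1f}/yr")
```

Even this per-year view is crude: it ignores the fact that a citation earned in 2020 is ‘cheaper’ than one earned in 1980, precisely because of the inflation described above.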
Ethical concerns about the influence of quantitative metrics
Citation metrics also don’t allow us to compare scientists across fields or account for specific aspects of research in sub-fields.
For example, in microbiology, the organism one is studying determines the timeline of a study.
So a scientist working with, say, a bacterial species that’s difficult to grow would appear to be less productive than a scientist working with rapid turnaround technologies like computational modelling.
The overvaluation of the number of publications and citations, the position of authorship (single, first or last), etc. breeds unethical scientific practices.
Such a system incentivises scientists to inflate their citation count by citing themselves.
Correcting these indices by accounting for shared authorship also devalues some fundamental tenets of scientific work.
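One common correction of this kind is to exclude self-citations before counting. A minimal sketch, with invented author names and records (not the ranking’s actual procedure):

```python
# Illustration: adjusting a citation count by excluding self-citations.
# Each citing event records the citing paper's author list; a
# self-citation is one where the cited author appears in that list.
# All names and records below are invented.
author = "A. Scientist"
citing_author_lists = [
    ["B. Researcher", "C. Colleague"],
    ["A. Scientist", "D. Student"],  # self-citation
    ["E. Reviewer"],
    ["A. Scientist"],                # self-citation
]

total = len(citing_author_lists)
self_cites = sum(author in authors for authors in citing_author_lists)
adjusted = total - self_cites
print(f"{total} citations, {self_cites} self-citations, {adjusted} adjusted")
```

Even this simple filter involves judgment calls, e.g. whether a citation by a former co-author or student counts as ‘self’ citation, which is part of why such corrections can sit uneasily with how collaborative science actually works.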