When standard network measures fail to rank journals: A theoretical and empirical analysis

Abstract

Journal rankings are widely used and are often based on citation data in combination with a network approach. We argue that some of these network-based rankings can produce misleading results. From a theoretical point of view, we show that the standard network modelling approach of citation data at the journal level (i.e., the projection of paper citations onto journals) introduces fictitious relations among journals. To overcome this problem, we propose a citation path approach, and empirically show that rankings based on the network and the citation path approach are very different. Specifically, we use MEDLINE, the largest open-access bibliometric data set, listing 24 135 journals, 26 759 399 papers, and 323 356 788 citations. We focus on PageRank, an established and well-known network metric. Based on our theoretical and empirical analysis, we highlight the limitations of standard network metrics and propose a method to overcome them.
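The projection step criticised in the abstract can be illustrated with a minimal sketch. The paper IDs, journal assignments, and damping factor below are illustrative assumptions, not MEDLINE data or the authors' actual implementation; the point is that after aggregating paper citations into a journal graph, PageRank happily follows journal-level paths (here A→B→C) that no chain of paper citations ever traversed.

```python
from collections import defaultdict

# Toy paper-level citation data (illustrative, not from MEDLINE).
# journal_of maps each paper to the journal that published it.
journal_of = {"p1": "A", "p2": "B", "p3": "C", "p4": "A"}
# (citing paper, cited paper) pairs. Note: p1 cites p2, and p2 cites
# p3, but no citation chain actually flows p1 -> p2 -> p3's content.
paper_citations = [("p1", "p2"), ("p2", "p3"), ("p4", "p3")]

# Standard projection: aggregate paper citations into a weighted
# journal-level graph, discarding which specific paper cited which.
journal_graph = defaultdict(float)
for citing, cited in paper_citations:
    journal_graph[(journal_of[citing], journal_of[cited])] += 1.0

def pagerank(edges, damping=0.85, iters=100):
    """Power-iteration PageRank on a weighted directed graph."""
    nodes = {n for edge in edges for n in edge}
    out_weight = defaultdict(float)
    for (src, _dst), w in edges.items():
        out_weight[src] += w
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for (src, dst), w in edges.items():
            new[dst] += damping * rank[src] * w / out_weight[src]
        # Redistribute the rank of dangling nodes (no outgoing edges).
        dangling = sum(rank[n] for n in nodes if out_weight[n] == 0.0)
        for n in nodes:
            new[n] += damping * dangling / len(nodes)
        rank = new
    return rank

ranks = pagerank(journal_graph)
```

In the projected graph the fictitious two-step path A→B→C exists, so journal C accumulates rank through B even though no paper-level citation path connects them that way; a citation path approach would keep track of the actual chains instead.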

Related Posts

Reproducing scientists' mobility: a data-driven model

Scientists often move around the world to share ideas and work together, but how do these moves actually happen? This study looked at millions of career paths to map out how researchers travel between cities, countries and institutions. It found that most scientists prefer to move shorter distances, usually less than 1000 kilometers, and tend to choose places that are both close and well-regarded. The research also showed that the way we visualize these moves changes depending on the scale. At the city level, scientists move more freely, while at the country or institution level, clear pathways called “knowledge corridors” emerge. This helps us understand how knowledge spreads and how scientific careers develop over time, with important implications for both scientists and policymakers.

Success in Science - Special Issue

Scientific Networks and Success. Every researcher is affected by how scientific performance is measured. How should it be measured? Do we have the right data to do it? How can we make it fair and unbiased?

Quantifying and suppressing ranking bias in a large citation network

Citation counts for papers from different fields can't be compared directly because fields adopt different citation practices. Researchers have proposed various procedures to suppress these biases, but a new statistical framework shows that existing indicators, including the relative citation count, are still biased by paper field and age. A new normalization procedure motivated by the z-score produces much less biased rankings when applied to citation count and PageRank score. The problem of achieving an ideal unbiased ranking of publications remains open.
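The z-score idea mentioned above can be sketched in a few lines. The field names and citation counts below are made-up illustrative values, and this is only the generic z-score transform, not the specific procedure developed in that work: each count is rescaled by its own field's mean and standard deviation, so papers from fields with very different citation volumes land on a comparable scale.

```python
import statistics

# Toy citation counts grouped by field (illustrative values only).
citations = {
    "biology": [12, 30, 8, 55, 20],
    "math": [2, 5, 1, 9, 3],
}

def z_normalize(groups):
    """Return per-field z-scores: (count - field mean) / field stdev."""
    scores = {}
    for field, counts in groups.items():
        mu = statistics.mean(counts)
        sigma = statistics.stdev(counts)
        scores[field] = [(c - mu) / sigma for c in counts]
    return scores

normed = z_normalize(citations)
```

After normalization each field has mean zero, so a heavily cited maths paper can outrank a moderately cited biology paper even though its raw count is far lower.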
