A story in Nature discusses a recent paper (preprint) from CNetS members Jasleen Kaur, Filippo Radicchi and Fil Menczer on the universality of scholarly impact metrics. In the paper, we present a method to quantify the disciplinary bias of any scholarly impact metric, and we use it to evaluate a number of established metrics. We also introduce a simple universal metric that allows one to compare the impact of scholars across scientific disciplines. Mohsen JafariAsbagh integrated this metric into Scholarometer, a crowdsourcing system developed by our group to collect and share scholarly impact data. The Nature story highlights how one can use normalized impact metrics to rank all scholars, as illustrated in the widget shown here.
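To give a flavor of what discipline normalization does, here is a minimal sketch. It is not the paper's exact method; it simply rescales a scholar's h-index by a hypothetical average h-index for their discipline, which is one simple way to make values comparable across fields. All names and numbers are invented for illustration.

```python
def normalized_impact(h, discipline, field_means):
    """Rescale an h-index by the mean h-index of the scholar's discipline."""
    return h / field_means[discipline]

# Hypothetical discipline averages (illustrative only)
field_means = {"physics": 20.0, "history": 5.0}

# After rescaling, a historian with h=10 outranks a physicist with h=30
print(normalized_impact(30, "physics", field_means))  # 1.5
print(normalized_impact(10, "history", field_means))  # 2.0
```

The point of the rescaling is that raw h-indexes reflect field-specific citation practices, so comparing them directly penalizes low-citation disciplines.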
Scholarometer is becoming a more mature tool. The idea behind Scholarometer — crowdsourcing scholarly data — was presented at the Web Science 2010 Conference in Raleigh, North Carolina, along with some promising preliminary results. Recently added functionality includes a Chrome version, percentile calculations for all impact measures, export of bibliographic data in various standard formats, and heuristics to identify reliable tags and detect ambiguous names. Next up: an API to share annotation and impact data, and an interactive visualization of the interdisciplinary network.
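The percentile calculations mentioned above can be sketched in a few lines. This is an assumption about the general approach, not Scholarometer's actual implementation: a scholar's percentile for a measure is the fraction of scholars in a reference sample whose value falls below theirs. The sample values are made up.

```python
from bisect import bisect_left

def percentile(value, sample):
    """Percentile rank of `value` within a reference sample of measure values."""
    ranked = sorted(sample)
    return 100.0 * bisect_left(ranked, value) / len(ranked)

# Invented sample of h-indexes for illustration
sample_h_indexes = [2, 5, 7, 7, 12, 18, 25, 40]
print(percentile(12, sample_h_indexes))  # 50.0
```

Any of the impact measures (h-index, g-index, and so on) can be fed through the same percentile ranking, which is what makes percentiles convenient to report uniformly.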