
Measuring Academic Success: The Art and Science of Publication Metrics.

Joseph R Dettori1, Daniel C Norvell1, Jens R Chapman1.   

Abstract

Year:  2019        PMID: 30984505      PMCID: PMC6448198          DOI: 10.1177/2192568219831003

Source DB:  PubMed          Journal:  Global Spine J        ISSN: 2192-5682


Many acknowledge the need for metrics to assess the scientific research output used by academic institutions and funding agencies. Until relatively recently, the journal impact factor (JIF) has been the primary measurement used for both individuals and institutions. However, there is a growing acknowledgment of the JIF limitations and the need for a better way with which to compare research productivity. This article will describe the more commonly used metrics, their limitations, and the future direction for the field of bibliometrics.

Commonly Used Metrics (Table 1)

The Journal Impact Factor

The JIF is a journal- and author-level index created in the early 1960s to assist in the selection of journals for the Science Citation Index (SCI),[1] a tool to facilitate the dissemination and retrieval of scientific literature. Though the SCI's primary function was that of a search engine, its success was due to its ability to measure scientific productivity, a result owing in large part to the JIF rankings.[2] The SCI and its internet version, the Web of Science (WoS), identify both what each scientist has published (Source Author Index) and how often the papers by that scientist are cited (Citation Index).

The JIF measures use, not quality. A journal's impact factor is determined by a simple equation involving 2 elements: the number of citations in the current year that refer to articles published in the previous 2 years (numerator) and the number of substantive articles and reviews (citable items) published in the same 2 years (denominator). For example, the JIF for 2018 is calculated as:

JIF (2018) = citations in 2018 to items published in 2016 and 2017 / citable items published in 2016 and 2017

The JIF is compiled annually by Clarivate Analytics in the Journal Citation Reports (https://clarivate.com/products/journal-citation-reports/). Its limitations include self-citation, editorial pressure on authors to cite works in the same journal, different frequencies of citation across disciplines, and the limited number of journals in the WoS from which the JIF is derived (Table 1).
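As a minimal sketch of the arithmetic, the two-element formula above can be computed directly; the citation and article counts below are hypothetical:

```python
def journal_impact_factor(citations_current_year, citable_items_prev_two_years):
    """JIF for year Y: citations received in Y to items published in Y-1 and
    Y-2, divided by the citable items published in those same 2 years."""
    return citations_current_year / citable_items_prev_two_years

# Hypothetical journal: 400 citations in 2018 to its 2016-2017 articles,
# with 150 citable items published over those two years.
jif_2018 = journal_impact_factor(400, 150)
print(round(jif_2018, 3))  # 2.667
```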

h-Index (Hirsch)

The h-index is an author-level metric that attempts to measure both the productivity and the citation impact of a scientist's or scholar's publications. The index was suggested by Jorge Hirsch, a US physicist, as a tool for estimating "the importance, significance, and broad impact of a scientist's cumulative research contributions."[3] A scholar with an index of h has published h papers, each of which has been cited by others at least h times. For example, an h-index of 10 means that a scientist has published 10 articles that each have at least 10 citations. The h-index serves as an alternative to the more traditional JIF in evaluating the impact of a particular researcher's work and is useful for comparing individuals with regard to their overall scientific impact. For example, 2 individuals with a similar h-index are comparable in terms of their scientific impact, even if their total numbers of papers or citations differ. Conversely, of 2 individuals with different h values but a similar number of total papers or citations, the one with the higher h-index is likely the more proficient scientist. Its chief limitation is that it is a poor gauge for young scientists, whose citations have had little time to accumulate.
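The definition lends itself to direct computation from a list of per-paper citation counts; a minimal sketch:

```python
def h_index(citations):
    """Return the h-index: the largest h such that the author has h papers
    each cited at least h times."""
    h = 0
    # Rank papers from most to least cited; h is the last rank at which
    # the paper's citation count still meets or exceeds its rank.
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Papers cited [10, 8, 5, 4, 3] give h = 4: four papers have at least
# 4 citations each, but there are not five with at least 5.
print(h_index([10, 8, 5, 4, 3]))  # 4
```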

Eigenfactor Score

The Eigenfactor (EF) score is a journal- and author-level index. With the understanding that citations are not independent and isolated events, the EF score seeks to reflect the network of interrelations among scholarly articles by ranking journals similar to Google’s rank of websites.[4] Published citations join journals together in a network of citations similar to Figure 1. The EF score is calculated based on a complex algorithm that takes into account not only the quantity of citations but also their “quality” by assigning weights to the source of the citations.
Figure 1.

Figure from Eigenfactor.org and the University of Washington, Seattle, Washington.[5]

The EF scores are scaled so that those of all journals listed in the Journal Citation Reports sum to 100. If a journal has an EF score of 1.0, it has 1% of the total influence of all indexed publications. The EF score is thus interpreted as a measure of a journal's total importance to the scientific community.[5] Its major limitation relates to journal size: assuming similar article quality, journals publishing many articles often have higher scores than those publishing few.
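As a toy illustration only, the weighted-citation idea behind the EF score can be sketched as a power iteration over a small citation network, in the spirit of PageRank. The matrix below is invented, and the real algorithm additionally damps the iteration, discounts journal self-citations, and weights by article counts:

```python
import numpy as np

# Toy citation network: C[i, j] = citations from journal j to journal i,
# with the diagonal zeroed because self-citations are discounted.
C = np.array([
    [0, 3, 5],
    [2, 0, 1],
    [4, 6, 0],
], dtype=float)

# Column-normalize so each journal splits its outgoing citations into
# proportions, then repeatedly apply the matrix (power iteration): a
# citation from an influential journal ends up counting for more.
P = C / C.sum(axis=0)
v = np.full(3, 1 / 3)
for _ in range(100):
    v = P @ v

# Scale so the scores across all journals sum to 100, as EF scores do.
eigenfactor_like = 100 * v / v.sum()
```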

Article Influence Score

The Article Influence Score (AIS) is calculated by multiplying the EF score by 0.01 and dividing by the journal's share of all indexed articles; the article counts are normalized so that the shares across all journals sum to 1.[6] A score greater than 1.00 indicates that each article in the journal has above-average influence; a score less than 1.00 indicates below-average influence. Its limitations are similar to those of the EF score.
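Under the definition above, the AIS arithmetic can be sketched as follows; the EF score and article counts are hypothetical:

```python
def article_influence_score(ef_score, journal_articles, total_articles):
    """AIS = 0.01 * EF / (journal's share of all indexed articles).
    The scaling makes 1.00 the average per-article influence."""
    article_share = journal_articles / total_articles
    return 0.01 * ef_score / article_share

# A journal with EF 1.0 that publishes 1% of all indexed articles
# (100 of 10,000) has exactly average per-article influence.
print(article_influence_score(1.0, 100, 10_000))  # 1.0
```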

SCImago Journal Rank

The SCImago Journal Rank (SJR) is a journal-level index based on the Scopus database (Elsevier BV).[7] It is a measure of the scientific influence of journals that accounts for both the number of citations a journal receives and the importance of the journals from which those citations come. The SJR is calculated as the average number of weighted citations received in a selected year per document published in that journal during the previous 3 years, and it normalizes for differences in citation behavior between subject fields. Higher SJR values are meant to indicate greater journal "prestige." The SJR uses a calculation method similar to Google's ranking of websites. SCImago indexes articles published in more countries and in more languages than most other metric tools. Its limitations include the following: citations from lower-prestige journals receive little credit, ranks are based on the total number of articles rather than the total number of citable articles, and the calculations used to create a score are proprietary and cannot be independently verified.
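A simplified sketch of the SJR averaging step, assuming the per-citation prestige weights are already known; the real SJR derives those weights iteratively from the citation network, and the numbers below are invented:

```python
def sjr_like(weighted_citations, docs_prev_three_years):
    """Average prestige-weighted citations received in the selected year,
    per document the journal published in the previous 3 years. Each entry
    of weighted_citations is one citation scaled by the prestige of the
    citing journal (assumed given here)."""
    return sum(weighted_citations) / docs_prev_three_years

# Four citations this year, weighted by citing-journal prestige, against
# four documents published over the previous 3 years.
value = sjr_like([0.9, 1.2, 0.4, 0.7], docs_prev_three_years=4)
print(round(value, 2))  # 0.8
```

Note how a citation from a high-prestige journal (weight 1.2) moves the score more than one from a low-prestige journal (weight 0.4), which is the property that distinguishes SJR from a raw citation average.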

Future Direction

Many groups have recognized the need to improve the ways in which scientific output is evaluated. In particular, they are calling for a more balanced approach to assessing research output, with an emphasis on quality. One such call comes in the form of a declaration, the Declaration on Research Assessment (DORA). In December 2012, a group of scholarly journal editors and publishers met at the Annual Meeting of the American Society for Cell Biology in San Francisco and crafted a set of 18 recommendations to improve the ways in which the output of scientific research is evaluated by funding agencies, academic institutions, and other parties. They recognized that the JIF was the primary parameter used to compare the scientific output of scientists and institutions. Their recommendations in full can be found at https://sfdora.org/read/.[8] We reproduce a few here to provide a flavor of the future direction (Table 2).
Table 2.

A Sampling of the Recommendations From the Declaration on Research Assessment (DORA)

Research Output Stakeholder: Recommendation

General recommendation: Do not use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist's contributions, or in hiring, promotion, or funding decisions.

For institutions: Be explicit about the criteria used to reach hiring, tenure, and promotion decisions, clearly highlighting, especially for early-stage investigators, that the scientific content of a paper is much more important than publication metrics or the identity of the journal in which it was published.

For publishers: Greatly reduce emphasis on the journal impact factor as a promotional tool, ideally by ceasing to promote the impact factor or by presenting the metric in the context of a variety of journal-based metrics (eg, 5-year impact factor, Eigenfactor, SCImago, h-index, editorial and publication times, etc) that provide a richer view of journal performance.

For organizations that supply metrics: Be open and transparent by providing the data and methods used to calculate all metrics.

For researchers: Wherever appropriate, cite the primary literature in which observations are first reported rather than reviews, in order to give credit where credit is due.
Summary

There is a need for metrics to assess the scientific research output of investigators. Until recently, the Journal Impact Factor (JIF) has been the primary measure. Because of the JIF limitations, other metrics have been developed such as the h-index, Eigenfactor score, Article Influence Score, and the SCImago Journal Rank (SJR). All have their limitations in assessing scientific research output. The future of assessing research output will include a more balanced approach considering both quantitative and qualitative assessment. The emphasis will be on research quality, not just quantity.
Table 1.

Summary of Citation-Based Bibliometric Indices.

Journal Impact Factor (journal- and author-level index)
Access: Journal Citation Reports: https://clarivate.com/products/journal-citation-reports/
Description: The number of citations in the current year that refer to articles published in the previous 2 years (numerator), divided by the number of substantive articles and reviews (citable items) published in the same 2 years (denominator)
Advantages: Widely used; simple formula
Limitations: Self-citations; authors cite works in the same journal; some disciplines cite more than others; journals change their names, affecting the impact factor; speeding up the publication cycle increases the impact factor; limited number of journals in the Web of Science; most non-English medical journals are not covered

h-Index (author-level index)
Access: Web of Science: https://clarivate.com/products/web-of-science/; Scopus: https://www.scopus.com/home.uri; Google Scholar: https://scholar.google.com/
Description: A scholar with an index of h has published h papers each of which has been cited by others at least h times
Advantages: Assesses the broad impact of an individual's work; compares individuals with respect to their overall scientific impact
Limitations: Limited metric for young scientists; does not take into account order in the author list (1st, 2nd, … last author); ignores the most highly cited papers; suitable for comparisons within a single specialty only

Eigenfactor (journal- and author-level index)
Access: http://www.eigenfactor.org/index.php
Description: A score that takes into account not only the quantity of citations but also their "quality" by assigning weights to the source of the citations, similar to Google's ranking of websites
Advantages: Ranks journals similar to how Google ranks websites; adjusts for citation differences across disciplines; limits self-citation; relies on 5-year citation data; freely available
Limitations: Given similar article quality, journals publishing many articles have higher scores than those publishing few; complex algorithm

Article Influence Score (journal-level index)
Access: http://www.eigenfactor.org/index.php
Description: Derived from the Eigenfactor score; assesses the average influence of a journal's articles over the first 5 years after publication
Advantages: Similar to Eigenfactor; provides relative importance
Limitations: Similar to Eigenfactor

SCImago Journal Rank (journal-level index)
Access: https://www.scimagojr.com/journalrank.php
Description: A publicly available portal that includes journal and country scientific indicators developed from the information contained in the Scopus database (Elsevier BV)
Advantages: Uses Scopus as the data source; multidimensional; limits self-citation; ranks journals similar to how Google ranks websites; freely available
Limitations: Does not address bias created by review journals; citations from lower-prestige journals get little credit; ranks are based on the total number of articles in a journal, not the total number of citable articles; the complex calculations used to create a score are proprietary and cannot be independently verified
