Abstract
Despite growing demand for practicable methods of research evaluation, the use of bibliometric indicators remains controversial. This paper examines performance assessment practice in Europe: first, identifying the most commonly used bibliometric methods and, second, identifying the actors who have defined widespread practices. The framework of this investigation is Abbott's theory of professions, and I argue that indicator-based research assessment constitutes a potential jurisdiction for both individual experts and expert organizations. The investigation used a search methodology that yielded 138 evaluation studies from 21 EU countries, covering the period 2005 to 2019. Structured content analysis revealed the following findings: (1) Bibliometric research assessment is most frequently performed in the Nordic countries, the Netherlands, Italy, and the United Kingdom. (2) The Web of Science (WoS) is the dominant database used for public research assessment in Europe. (3) Expert organizations invest in the improvement of WoS citation data and set technical standards with regard to data quality. (4) Citation impact is most frequently assessed with reference to international scientific fields. (5) The WoS classification of science fields has retained its function as a de facto reference standard for research performance assessment. A detailed comparison of assessment practices between five dedicated organizations and other individual bibliometric experts suggests that corporate ownership of, and limited access to, the most widely used citation databases have had a restraining effect on the development and diffusion of professional bibliometric methods during this period.
Year: 2020 PMID: 32310984 PMCID: PMC7170233 DOI: 10.1371/journal.pone.0231735
Source DB: PubMed Journal: PLoS One ISSN: 1932-6203 Impact factor: 3.240
Fig 1. Theoretical framework of a mature profession according to Andrew Abbott.
Visualization of the theoretical framework of a professional jurisdiction [17, 18]. Source: [19].
Fig 2. Bibliometric research assessment as an emerging profession.
Visualization of the application of the theoretical framework in Fig 1 to the professional field of bibliometric research assessment. Source: [19].
Frequencies of studies across countries and evaluation objects.
| Country | Acronym | Research organization | Funding instrument | Studies total | % studies |
|---|---|---|---|---|---|
| Italy | IT | 22 | 0 | 22 | 16 |
| Netherlands | NL | 17 | 1 | 18 | 13 |
| United Kingdom | UK | 8 | 8 | 16 | 12 |
| Sweden | SE | 8 | 6 | 14 | 10 |
| Norway | NO | 11 | 3 | 14 | 10 |
| Germany | DE | 8 | 3 | 11 | 8 |
| Finland | FI | 7 | 3 | 10 | 7 |
| European Union (ERA) | EU | 2 | 7 | 9 | 7 |
| Spain | ES | 7 | 1 | 8 | 6 |
| Denmark | DK | 4 | 3 | 7 | 5 |
| Greece | GR | 5 | 0 | 5 | 4 |
| Austria | AT | 2 | 2 | 4 | 3 |
| Ireland | IE | 1 | 2 | 3 | 2 |
| Switzerland | CH | 3 | 0 | 3 | 2 |
| Hungary | HU | 1 | 1 | 2 | 1 |
| Romania | RO | 2 | 0 | 2 | 1 |
| Belgium | BE | 1 | 0 | 1 | 1 |
| Iceland | IS | 1 | 0 | 1 | 1 |
| Lithuania | LT | 0 | 1 | 1 | 1 |
| Luxembourg | LX | 1 | 0 | 1 | 1 |
| Serbia | RS | 1 | 0 | 1 | 1 |
| Slovakia | SK | 1 | 0 | 1 | 1 |
| Studies covering two or more countries | – | 5 | 2 | 7 | 5 |
Meta-evaluation study set, 2005–2019
*Some studies cover evaluation objects from more than one country; thus, the sum of research organizations and funding instruments across countries is larger (n = 154) than the total number of studies (n = 138). The last column refers to the percentage of studies.
Databases used for bibliometric research assessment.
| Databases | Studies total | % studies |
|---|---|---|
| Web of Science (WoS) | 120 | 87 |
| – WoS improved versions | 66 | 48 |
| Scopus | 29 | 21 |
| Google Scholar | 10 | 7 |
| Disciplinary databases (e.g. PubMed) | 8 | 6 |
| National research databases (e.g. Cristin) | 33 | 24 |
| Organization-specific databases | 4 | 3 |
| Patent database | 1 | 1 |
Meta-evaluation study set, 2005–2019
Enhanced data quality.
| Improvement | Dedicated orgs. | Other experts | Studies total | % dedicated orgs. | % other experts | % studies total |
|---|---|---|---|---|---|---|
| WoS improved versions | 55 | 11 | 66 | 68 | 19 | 48 |
| Institutional addresses cleaned | 75 | 29 | 104 | 93 | 51 | 75 |
| Author names disambiguated | 64 | 27 | 91 | 79 | 47 | 66 |
| Corrections for self-citations | 37 | 15 | 52 | 46 | 26 | 38 |
| Database coverage* | 52 | 10 | 62 | 64 | 18 | 45 |
| Validity of field definition** | 6 | 7 | 13 | 7 | 12 | 9 |
| Check of publication lists by authors | 8 | 12 | 20 | 10 | 21 | 14 |
Meta-evaluation study set, 2005–2019
* ‘Database coverage’ refers to analyses of internal or external coverage of scientific fields by citation databases.
** ‘Validity of field definition’ refers to the congruence between a bibliometric field definition and the targeted field of research. This is seldom checked empirically.
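To make the most common of these data-quality steps concrete, the following minimal Python sketch filters author self-citations. All data structures and names here are hypothetical illustrations, not taken from the underlying studies; production pipelines would operate on disambiguated author identities and cleaned institutional addresses, as tallied above.

```python
# Hypothetical sketch of an author self-citation correction; assumes
# author names have already been disambiguated.

def is_self_citation(citing_authors: set, cited_authors: set) -> bool:
    """A citation counts as a self-citation if the citing and cited
    publications share at least one author."""
    return bool(citing_authors & cited_authors)

def corrected_citations(cited_authors: set, citing_author_sets: list) -> int:
    """Citation count after removing author self-citations."""
    return sum(
        1 for citing in citing_author_sets
        if not is_self_citation(citing, cited_authors)
    )

# Toy example: three citing papers, one of which shares an author
# with the cited paper, so the corrected count is 2.
cited = {"smith, j", "garcia, m"}
citing = [{"smith, j", "lee, k"}, {"chen, w"}, {"novak, p"}]
print(corrected_citations(cited, citing))  # -> 2
```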
Sample sizes in bibliometric evaluation studies.
| Sample size* | Research organization | Funding instrument | Studies total | % research org. | % funding inst. | % total |
|---|---|---|---|---|---|---|
| 100–1,000 | 13 | 7 | 20 | 13 | 19 | 14 |
| 1,001–10,000 | 51 | 7 | 58 | 50 | 19 | 42 |
| 10,001–100,000 | 17 | 15 | 32 | 17 | 42 | 23 |
| >100,000 | 7 | 1 | 8 | 7 | 3 | 6 |
| Missing data | 14 | 6 | 20 | 14 | 17 | 14 |
Meta-evaluation study set, 2005–2019
*Total number of publications that an assessment is based upon.
Frame of reference for research assessment.
| | Evaluation object | Internat. field comp. | National ranking | Other | Studies total | % studies |
|---|---|---|---|---|---|---|
| 1.1 | | 19 | – | 2 | | |
| 1.2 | | 17 | – | 0 | | |
| 1.3 | | 14 | 11 | 2 | | |
| 1.4 | | 5 | 23 | 0 | | |
| 1.5 | Umbrella research orgs. with … | 3 | 0 | 6 | | |
| 2.1 | Projects funded | 7 | – | 2 | | |
| 2.2 | Scientists funded | 6 | 0 | 3 | | |
| 2.3 | Research organizations funded | 8 | 0 | 0 | | |
| 2.4 | Evaluation of funding portfolio | 10 | 0 | 0 | | |
Meta-evaluation study set, 2005–2019
Rows distinguish evaluation objects while columns distinguish performance standards. Both dimensions contribute to the frame of reference in evaluation studies. This synopsis is based on the current sample and does not display all possible combinations.
Types of impact metrics used in evaluation studies.
| | Type of metric | Dedicated orgs. | Other experts | Studies total | % dedicated orgs. | % other experts | % studies total |
|---|---|---|---|---|---|---|---|
| 1.1 | Field-normalized impact total | 78 | 22 | 100 | 95 | 39 | 72 |
| 1.1.1 | Field-normalized arithmetic mean (e.g. MNCS) | 57 | 19 | 76 | 70 | 33 | 55 |
| 1.1.2 | Field-normalized percentiles (e.g. 10% top-cited publications) | 61 | 17 | 78 | 75 | 30 | 57 |
| 1.2 | H-index and h-type indicators | 3 | 26 | 28 | 4 | 46 | 21 |
| 1.3 | Other observed impact | 70 | 23 | 93 | 86 | 40 | 67 |
| 2.1 | Journal impact factor (e.g. 2-year, 5-year) | 28 | 26 | 54 | 35 | 46 | 39 |
| 2.2 | Indirect journal impact (e.g. Eigenfactor metrics) | 14 | 3 | 17 | 17 | 5 | 12 |
| 2.3 | Source normalized journal impact (e.g. SNIP) | 0 | 2 | 2 | 0 | 4 | 1 |
| 2.4 | Impact-level categories for journals (e.g. Norwegian model) | 13 | 3 | 16 | 16 | 5 | 12 |
Meta-evaluation study set, 2005–2019
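The most frequent metric families in the table above (field-normalized means such as the MNCS, field-normalized percentile shares such as the top-10% indicator, and the h-index) can be illustrated with a short, self-contained Python sketch. The field baselines below are invented toy values; in practice they are derived from the full citation database per field, publication year, and document type.

```python
from statistics import mean

# Toy publication records: (citation count, field). Real normalization
# also conditions on publication year and document type.
papers = [(12, "chemistry"), (3, "chemistry"), (45, "sociology"), (0, "sociology")]

# Hypothetical world-average citation rates per field (the baselines).
field_mean = {"chemistry": 10.0, "sociology": 5.0}
# Hypothetical field-specific citation thresholds for the top-10% most cited.
field_top10 = {"chemistry": 30.0, "sociology": 20.0}

# MNCS: mean of observed citations over the field's expected citation rate.
mncs = mean(c / field_mean[f] for c, f in papers)

# PP(top 10%): share of papers at or above their field's top-10% threshold.
pp_top10 = mean(1.0 if c >= field_top10[f] else 0.0 for c, f in papers)

def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

print(f"MNCS = {mncs:.2f}")             # (12/10 + 3/10 + 45/5 + 0/5) / 4
print(f"PP(top 10%) = {pp_top10:.0%}")  # 1 of 4 papers -> 25%
print(f"h-index = {h_index([c for c, _ in papers])}")  # 45, 12, 3, 0 -> h = 3
```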
Classification of science fields used for field normalization.
| Classification | Dedicated orgs. | Other experts | Studies total | % dedicated orgs. | % other experts | % studies total |
|---|---|---|---|---|---|---|
| Web of Science Classification | 66 | 23 | 89 | 84 | 40 | 64 |
| Scopus Classification | 16 | 3 | 19 | 20 | 5 | 14 |
| Essential Science Indicators | 2 | 3 | 5 | 2 | 5 | 4 |
| Alternative journal-based classification* | 0 | 5 | 5 | 0 | 9 | 4 |
| Self-defined journal sets | 9 | 4 | 13 | 11 | 7 | 9 |
| Keywords combined with journal sets | 3 | 2 | 5 | 4 | 4 | 4 |
| Publication-based clusters | 7 | 0 | 7 | 7 | 0 | 5 |
| Other | 2 | 4 | 6 | 2 | 7 | 4 |
| Studies with field normalization | 79 | 28 | 107 | 98 | 49 | 78 |
Meta-evaluation study set, 2005–2019
* ‘Alternative journal-based classification’ includes classifications proposed in the bibliometric literature, e.g. [55–57], and the OST classification.
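Journal-based schemes such as the WoS classification assign each publication the subject categories of its journal, which is what makes them convenient de facto standards but also a validity concern (see ‘Validity of field definition’ above). A minimal lookup sketch, with invented journal-to-category mappings rather than the actual WoS scheme:

```python
# Hypothetical journal-to-category mapping in the style of a journal-based
# classification; each paper inherits the categories of its journal.
journal_categories = {
    "Journal of Organic Chemistry": ["Chemistry, Organic"],
    "Social Networks": ["Sociology", "Anthropology"],
}

def categories_of(journal: str) -> list:
    # Multidisciplinary journals map to several categories; journals missing
    # from the scheme are a coverage problem for normalization.
    return journal_categories.get(journal, ["Unclassified"])

# A paper in a journal with two categories is typically counted fractionally
# (0.5 in each) when computing field baselines.
print(categories_of("Social Networks"))  # ['Sociology', 'Anthropology']
```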