
This paper focuses on systematic errors and proposes how these can be measured and reported when bibliometric data are offered for policy purposes. We analysed differences in the calculation of impact indicators when 23 different classification schemes derived from Clarivate's InCites suite are used, focusing on five indicators: the Category Normalized Citation Impact, the total number of citations, the H-index, and the shares of articles in the top 1% and top 10% most cited. Findings show that citation counts are quite stable, with a difference of 13%. However, proportional indicators, such as the shares of top 1% and top 10% most cited articles, tend to vary more between universities, although on average they are lower than in the previous case. The results of this study can be used to estimate the error level in the research assessment of countries, institutions, and individuals.
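The sensitivity of proportional indicators to the classification scheme can be illustrated with a minimal sketch. The data, scheme names, and institution below are entirely invented; the point is only that the same papers, grouped into fields differently, yield different top-10% shares because the percentile threshold is computed within each field.

```python
def top10_flags(fields):
    """Flag each paper in the top 10% most cited of its field.

    `fields` maps a field name to a list of (paper_id, citations) pairs.
    Ties at the threshold are included, as is common in practice.
    """
    flags = {}
    for papers in fields.values():
        cites = sorted((c for _, c in papers), reverse=True)
        k = max(1, round(0.10 * len(cites)))  # size of the top-10% set
        threshold = cites[k - 1]
        for pid, c in papers:
            flags[pid] = c >= threshold
    return flags


def top10_share(institution_papers, flags):
    """Proportion of an institution's output that is top-10% cited."""
    return sum(flags[p] for p in institution_papers) / len(institution_papers)


# Invented citation counts for ten papers (paper id -> citations).
cites = {1: 50, 2: 40, 3: 30, 4: 20, 5: 10, 6: 9, 7: 8, 8: 7, 9: 6, 10: 5}

# Scheme A groups everything into one broad field; scheme B splits the
# same papers into two narrower fields with different citation levels.
scheme_a = {"all": list(cites.items())}
scheme_b = {"X": [(p, cites[p]) for p in (1, 2, 3, 4, 5)],
            "Y": [(p, cites[p]) for p in (6, 7, 8, 9, 10)]}

inst = [1, 6]  # a hypothetical institution holding papers 1 and 6
share_a = top10_share(inst, top10_flags(scheme_a))  # paper 6 not top-10%
share_b = top10_share(inst, top10_flags(scheme_b))  # paper 6 tops field Y
```

Under scheme A only paper 1 clears the single global threshold (share 0.5 for the institution), while under scheme B paper 6 is also the most cited paper of its narrower field (share 1.0): the indicator doubles without any change in the underlying citations.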
normalization, scientometrics, error analysis, subject classifications
