Journal ranking metrics must be used with caution. They are calculated as the average number of citations received by all articles in a journal over the past 2-5 years, providing an indication of the citation performance of the journal during that time frame. Because it is an average, a journal metric is unlikely to provide an accurate indication of the citation performance of individual articles. For example, at each extreme:
- An article may be published in a journal with a very high rank, but not have received any citations. Even the most prestigious journals in the world contain publications that have never been cited.
- An article may be very influential and have received many citations, but not be published in a journal with a high rank. Some journals that do not have high ranking metrics still contain very highly cited publications.
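The averaging described above can be sketched in a few lines of code. This is an illustrative example only, not the official algorithm of any citation database; the citation counts are invented to show how a respectable average can coexist with several never-cited articles:

```python
def journal_metric(citations_per_article):
    """Average citations across all counted articles.

    Mirrors the general shape of a journal metric: citations received
    in a window, divided by the number of articles counted.
    """
    if not citations_per_article:
        return 0.0  # new journals have no counted articles yet
    return sum(citations_per_article) / len(citations_per_article)

# Hypothetical journal: two highly cited articles dominate the average,
# while three articles have never been cited at all.
citations = [52, 30, 0, 0, 1, 3, 0, 2]
print(f"Journal metric: {journal_metric(citations):.1f}")  # 11.0
print(f"Uncited articles: {citations.count(0)}")           # 3
```

The average of 11.0 says little about any single article in the list, which is precisely why these metrics should not be read as article-level indicators.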
When using journal metrics, it is also essential to take disciplinary differences into account. There are two major factors to consider: (1) the number of journals indexed in a database source varies per discipline and (2) citation behaviour varies between disciplines. Together, these two factors can produce a three-fold difference in metric values between STEM (Science, Technology, Engineering and Medicine) and HASS (Humanities and Social Sciences) disciplines. This is a reflection of disciplinary differences, not academic performance.
Two landmark documents strongly recommend that journal metrics should not be used for measuring the quality of individual research articles, or as a direct measure of research performance:
- The San Francisco Declaration on Research Assessment (DORA), a 2012 statement, recommends that journal-based metrics should not be used as a proxy for assessing the research quality of articles or researchers, or play a role in hiring or promotion decisions.
- The Metric Tide report, a review of the role of metrics in research assessment and management based on data from the UK 2014 Research Excellence Framework (REF), recommends using a variety of journal-based metrics rather than a single journal metric, to provide a richer view of performance. The report also encourages a shift towards article-level metrics, so that the academic quality of individual articles can be assessed directly rather than inferred from journal-level metrics.
These recommendations are supported by JCU in the Guide for Evidencing Research and Scholarship Performance.
If a journal has not been assigned a ranking for the metrics in this LibGuide, this does not necessarily mean that the journal is of low quality.
- Journals typically need to be operating for 3-5 years before they are ranked, with the number of years depending on the time frame of citations used for the calculation. This means that every new journal goes through a period during which it has no ranking.
- There are also niche journals that are well regarded and fit for purpose but are not highly cited by other scholarly publications. University-based law journals are an example.