Responsible Metrics
DCU Open Research
The responsible use of research metrics forms an important part of the move towards Open Research. It is a key recommendation in the National Framework on the Transition to an Open Research Environment (NORF, 2019).
Metrics, used responsibly, play an important role in assessing research visibility and impact. However, an awareness of their limitations is crucial to ensure they are used appropriately and fairly. There is potential for misuse of metrics: for example, using non-normalised metrics to compare across disciplines, or relying on a single metric to evaluate an individual's impact.
The Action Plan for Open Research (NORF, 2022) includes a stated goal to "further develop commitments to reform research assessment and evaluation, including the responsible use of research metrics" (G3.4).
DCU's Statement on the Responsible Use of Research Metrics sets out 7 guiding principles for those using research metrics for assessment purposes to ensure a nuanced and balanced approach.
Metrics are numerical measures of research impact. They are often used to measure and compare the performance of individual researchers, their School or Faculty, or whole institutions. These quantitative indicators include:
- publication metrics, also known as bibliometrics (e.g. number of outputs, number of (normalised) citations, impact metrics, collaboration metrics; see the normalisation sketch after this list),
- research supervision metrics (e.g. number of postgraduate research students per FTE),
- research income metrics (e.g. number and value of external funding applications and awards), and
- altmetrics (e.g. media mentions, tweets).
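To make the idea of citation normalisation concrete, the sketch below mimics a field-normalised indicator in the spirit of Elsevier's Field-Weighted Citation Impact (FWCI), which divides a publication's citations by the average citations of comparable publications (same discipline, year and output type). The field baselines and citation counts here are invented purely for illustration.

```python
# Illustrative sketch of field normalisation, in the spirit of indicators
# such as Field-Weighted Citation Impact (FWCI). All numbers are invented.
# normalised impact ~= citations received / average citations of comparable
# publications (same discipline, publication year and output type).

expected_citations = {  # hypothetical field baselines for one year
    "cell biology": 25.0,   # high-citation discipline
    "mathematics": 3.0,     # low-citation discipline
}

def field_weighted_impact(citations: int, field: str) -> float:
    """Raw citations divided by the field's expected citation rate."""
    return citations / expected_citations[field]

# The same raw count of 10 citations is below average in cell biology
# but well above average in mathematics.
for field in expected_citations:
    print(f"{field}: raw=10, normalised={field_weighted_impact(10, field):.2f}")
```

The same raw count of ten citations scores well below the baseline in a high-citation field and well above it in a low-citation one, which is why comparing raw counts across disciplines is misleading.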
Metrics are a useful tool but can be misused. Examples of poor use or weaknesses of metrics include:
- Using metrics in isolation: metrics on their own are not sufficient to assess research fairly. Research can have an impact on the world in any number of ways, and metrics are only part of the picture.
- Metrics can reflect existing or historical bias within the scholarly community (e.g. gender or disciplinary bias), so it is important to be cognisant of where balancing measures are required.
- Each metrics tool takes its data from different sources and calculates impact in different ways, so always rely on at least two metrics to reduce the inherent bias of any single tool. Using only a single measure may also encourage people to 'game' their behaviour to suit that metric.
Responsible metrics advocates the accountable and appropriate use of numerical measures, requiring that they are used not in isolation but in conjunction with other measures, so that a more complete assessment of research impact is formed.
The seven guiding principles are to:
- Be open, transparent and explicit about the criteria used to assess research performance in the University.
- Ensure research metrics are used to support, but not supplant, qualitative expert judgement and review.
- Refrain from the use of venue-based metrics, such as the Journal Impact Factor (JIF), in assessing the merits of a particular research output.
- Ensure recruitment and promotion processes assess candidates across multiple research performance indicators, with clear guidelines on assessment procedures and criteria provided to candidates and reviewers.
- Take into account disciplinary diversity in research output types (examples include, where relevant for each discipline: journal articles, monographs, edited collections, creative works, datasets, software, commissioned reports, and outputs in languages other than English).
- Recognise research contributions in the form of impacts on wider society (e.g. citation in policy).
- Ensure that assessment of performance at an individual level fully takes into consideration the circumstances of an individual's career path (e.g. maternity leave, career break), specifically incorporating principles enshrined in DCU Equality, Diversity and Inclusion Policies.
Metrics Tools
The DCU Research Office and DCU Library provide access to, and training on, the tools the University uses to access and analyse publication data (including collaboration and citation data), e.g. SciVal and Research Engine.
The University encourages the use of ORCiD identifiers so that all outputs and subsequent citations are correctly attributed to the researcher.
Research Engine profiles should also be kept up to date. ORCiD profiles can easily be synchronised with Research Engine to reduce profile maintenance.
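As a brief illustration of how an ORCiD iD ties outputs to a researcher, the sketch below lists the works on a public ORCiD record via the ORCiD public API (v3.0), which serves public profile data without authentication. The iD used is ORCiD's well-known demonstration record and is a stand-in only; the sketch reads a public record and does not perform the Research Engine synchronisation itself.

```python
import requests

# Minimal sketch: read the works on a public ORCiD record via the
# ORCiD public API (v3.0). Public profile data needs no authentication.
# The iD below is ORCiD's demonstration record; substitute a real one.
ORCID_ID = "0000-0002-1825-0097"

resp = requests.get(
    f"https://pub.orcid.org/v3.0/{ORCID_ID}/works",
    headers={"Accept": "application/json"},
    timeout=30,
)
resp.raise_for_status()

# Each "group" bundles versions of the same work reported by different
# sources; the first work-summary in a group is enough for a listing.
for group in resp.json().get("group", []):
    summary = group["work-summary"][0]
    title = (summary.get("title") or {}).get("title", {}).get("value", "untitled")
    pub_date = summary.get("publication-date") or {}
    year = (pub_date.get("year") or {}).get("value", "n.d.")
    print(f"{year}  {title}")
```

A consistent, machine-readable record of this kind is what allows tools such as SciVal or Research Engine to attribute outputs and citations to the right researcher.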
International Frameworks and Initiatives
The Agreement on Reforming Research Assessment from CoARA (the Coalition for Advancing Research Assessment) outlines principles, commitments and a timeframe to change assessment practices for research, researchers and research-performing organisations, with the overarching goal of maximising the quality and impact of research. The signatories of the agreement, which include DCU, form a coalition of organisations willing to work together in implementing these changes.
DORA, the San Francisco Declaration on Research Assessment, has been signed by over 14,000 researchers and 1,300 organisations. Two key recommendations of DORA are:
- To assess research on its own merits rather than on the basis of the journal in which it is published
- To eliminate the use of journal-based metrics, such as Journal Impact Factors, in funding, appointment, and promotion considerations
Within Ireland, research funding organisations such as SFI, the IRC and the HRB have all signed DORA and are committed to reflecting its principles in their own review processes. It has also been signed by CONUL, the Consortium of National and University Libraries in Ireland.
Published in Nature in 2015, the Leiden Manifesto proposes 10 principles to guide research evaluation. It was formulated by Professors Diana Hicks and Paul Wouters and colleagues, and aims to combat misuse of, or overreliance on, bibliometrics when assessing research quality.
The manifesto emphasises that quantitative metrics should support, rather than replace, qualitative expert assessment in any evaluation process.
The Leiden Manifesto Explained