
Composite indicators of scientific research

GNALDI, MICHELA;RANALLI, Maria Giovanna
2009

Abstract

Composite indicators (CIs) integrate a large amount of information in a format that is easily understood and are therefore a valuable tool for conveying a summary assessment of performance in priority areas. However, the construction of composite measures creates specific methodological challenges. Any CI may be considered as a model (OECD, 2008) in which the CI is the response variable and the covariates are all the subjective judgements (the sources of uncertainty) that have to be made, such as the selection of individual indicators and the choice of normalisation methods, weighting schemes, and aggregation model. All these potential sources of uncertainty should be addressed because they affect both the variance of the CIs and the variability of any rankings based on CIs. In this context, sensitivity analysis is an appropriate tool for assessing such uncertainties because it studies how the variation in the output can be apportioned to the different sources of variation in the assumptions. Its primary aim is hence to quantify the overall uncertainty in CIs, and in country/institution rankings based on CIs, that results from the uncertainties in the model input. This work investigates the degree to which composite measures are an appropriate metric for evaluating and ranking the research performance of Italian universities. Do they accurately reflect the performance of universities? To what degree are they influenced by the uncertainty surrounding the underlying indicators on which they are based? We address these questions through an analysis of some individual indicators put forward by the Italian Steering Committee for Research Evaluation (CIVR). To construct a composite indicator (CI) of scientific research, five normalisation methods, a weighting scheme, and two aggregation schemes have been computed and combined, resulting in 135 CIs.
The variation in the rankings assigned by the CIs to the universities has been explored to gauge the robustness of the CI rankings. The analysis suggests that the judgements that have to be made in the construction of the composite can have a significant impact on the resulting scores, and that technical and analytical issues in the design of CIs have important policy implications.
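The abstract describes generating many CI variants by crossing normalisation methods, a weighting scheme, and aggregation schemes, then comparing the rankings they induce. The following is a minimal illustrative sketch of that idea, not the paper's implementation: the indicator values, the three normalisations, the equal weights, and the two aggregation rules shown here are all assumptions for illustration.

```python
import itertools
import numpy as np

# Toy indicator matrix: rows = universities, columns = individual indicators
# (hypothetical values, purely for illustration).
X = np.array([
    [0.8, 120.0, 3.5],
    [0.6,  90.0, 4.1],
    [0.9, 150.0, 2.8],
    [0.5,  60.0, 3.9],
])

# A few common normalisation methods (the paper combines five).
normalisations = {
    "min-max": lambda x: (x - x.min(0)) / (x.max(0) - x.min(0)),
    "z-score": lambda x: (x - x.mean(0)) / x.std(0),
    "rank":    lambda x: np.argsort(np.argsort(x, 0), 0) + 1.0,
}

# Equal weights here; the paper uses a specific weighting scheme.
w = np.ones(X.shape[1]) / X.shape[1]

# Two aggregation schemes: additive (weighted sum) and geometric.
aggregations = {
    "additive":  lambda z: z @ w,
    "geometric": lambda z: np.prod(np.clip(z, 1e-9, None) ** w, axis=1),
}

# Each (normalisation, aggregation) pair yields one CI and one ranking;
# comparing the rankings across pairs gauges the robustness of the CI.
rankings = {}
for (n_name, norm), (a_name, agg) in itertools.product(
        normalisations.items(), aggregations.items()):
    ci = agg(norm(X))
    # rank[i] = rank of university i (1 = best) under this CI variant.
    rankings[(n_name, a_name)] = np.argsort(np.argsort(-ci)) + 1
```

Inspecting how a university's rank shifts across the entries of `rankings` is a simple form of the uncertainty assessment the abstract refers to: if ranks change substantially with the normalisation or aggregation choice, the ranking is sensitive to those subjective judgements.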
Files for this product:
There are no files associated with this product.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11391/168201