Ellen Hazelkorn, member of the Editorial Team for the next GUNi Report

Ellen is Policy Advisor to the Higher Education Authority (HEA) (2014-) and Emeritus Professor and Director of the Higher Education Policy Research Unit (HEPRU) (2008-) at the Dublin Institute of Technology (Ireland). She is President of EAIR (European Higher Education Society) and serves on the Management Committee and Advisory Board of the Centre for Global Higher Education, UCL Institute of Education, London.
Ellen has worked as a higher education policy consultant and specialist with international organizations and governments for over 15 years, and has 20 years' experience as a vice-president at DIT. She is internationally recognized for her writing and analysis on the impact and influence of rankings on higher education policy and institutional decision-making. The second edition of her book Rankings and the Reshaping of Higher Education: The Battle for World-Class Excellence was published in 2015.
In this interview, Ellen talks about the next GUNi Report, university rankings and their popularity, social impact, and how to measure progress towards the SDGs.
Global rankings are perceived as a simple method to compare the quality of universities around the world. They have become popular in the absence of other international and comparative tools, and due to growing frustration with traditional academic quality assurance as a form of accountability and transparency. Rankings are used by students, governments, employers, and universities.
The number of rankings continues to rise because of their popularity; the main commercial companies also devise different kinds of rankings in response to different segments of the international HE and policy market. However, what affects the credibility of rankings is the choice of indicators and the methodology, rather than the number of rankings. This is because there are no internationally agreed, objective or value-free measures to assess teaching and learning, or research quality. Thus, most rankings simply identify indicators which reflect the judgment of their producers. The methodology also changes often, which means it can be very difficult to meaningfully compare one set of rankings to the next.
Most global rankings focus on measuring and comparing research performance, because there are plenty of data sources about research activity. However, a major flaw with research rankings is that they over-concentrate on science and technology and ignore the arts, humanities and social sciences; they also ignore the impact and benefit of research on society. Another major flaw is the absence of meaningful measures of teaching and learning quality. Global rankings often use indicators such as staff/student ratio or the share of international students, but these are proxies for reputation and do not measure the quality of the student learning environment. Many rankings also use indicators which simply reflect the socio-economic characteristics of high-achieving incoming students, e.g. progression and graduation rates, and salary after graduation.
Rankings have become an important indicator of national competitiveness. They are used by investors and employers to judge the extent to which there are “good quality graduates” available for employment or as potential entrepreneurs. International students, especially postgraduate students, often use rankings to inform their choice. Because of this, governments are very aware of how many universities in their country are listed in the top rankings. Many governments have introduced policies to help some universities rise in the rankings. So rankings are having a very direct impact on government policy. This is why rankings have significant geo-political influence.
There is strong evidence that rankings can help maintain and build institutional reputation, that high-achieving students use rankings to shortlist university choices, especially at the postgraduate level, and that stakeholders use rankings to inform their own decisions about funding, sponsorship and employee recruitment. Universities also use rankings to help identify potential partners, assess membership of international networks and organizations, and benchmark themselves. Because of these effects, many universities have introduced policies to help them rise in the rankings. This may include strategic decisions, resource allocation, prioritization, student recruitment, etc. In many instances, the strategic choice universities make is to focus more attention on boosting their international reputation than on contributing to their own society.
Measuring the impact and contribution of higher education to society is complicated. In recent years, many different organizations have focused on identifying appropriate indicators. For example, the EU E3M initiative identified ninety-five possible indicators under three different categories (continuing education, technology transfer and innovation, and social engagement), making it the most comprehensive database, albeit one that is very idealistic and probably impracticable to implement. The US Carnegie Classification system has introduced the idea of the engaged university, identifying ten different indicators. Because national and institutional contexts vary considerably, there are few indicators which are internationally comparable.
Identifying the most appropriate and meaningful indicators is often contentious. This is because, as the saying attributed to Albert Einstein goes, “Not everything that can be counted counts, and not everything that counts can be counted.” Therefore, a “political” decision has to be made between being so comprehensive that the process is too complex and costly to implement, and having so few indicators that the process is meaningless and can distort behavior. An important action is to develop an agreed international dataset which is overseen and monitored by an international non-governmental organization rather than by commercial interests.
The following features might be considered: graduate employability in the region and the ability to attract and retain graduates there; recruitment of students from the region; student and graduate entrepreneurship and start-up companies; community/SME research and product/service development; evidence of contribution to public policy and social transformation; etc. These indicators must extend across all disciplines, not be restricted to science and technology, and should be assessed using a combination of qualitative and quantitative measures, involving some form of end-user assessment. Context remains important.
There is, in my view, no contradiction between being engaged locally and globally. Today’s societal challenges transcend institutional, regional and national borders. Therefore, as universities work to resolve such problems at home or afar, they also need to be involved in bringing the lessons and results of that work back home.