By Professor Sioux McKenna

The latest university rankings have just been published to much fanfare and, in many cases, handwringing. As a researcher of higher education systems, I am often asked about Rhodes University’s position on rankings.


In this Q&A, I answer some frequently asked questions.

Why does Rhodes University refuse to participate in rankings?
Because they are unscientific, neocolonial, and follow a problematic business model.

What makes them unscientific?
Each ranking company follows its own formula, but the methodology is largely the same. They collect proxy metrics that are supposed to represent complex social activities, such as ‘reputation survey’ data as a proxy for institutional quality, or ‘number of publications’ as a proxy for research quality.
These proxy metrics are then added together, despite being unrelated to each other.
As we teach our students in introductory research methods courses, adding apples to oranges to calculate an average tells you very little about either fruit. And then there is the problem that the weighting of each metric is arbitrary. The order of the rankings will change, for example, depending on whether the ranking company gives the student-to-staff ratio, a commonly used proxy for teaching quality, a weighting of 2% or 5% or 10%.
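To make the weighting problem concrete, here is a minimal sketch in Python using entirely invented scores for three hypothetical universities. It illustrates the arithmetic only; it does not reproduce any ranking company's actual metrics, formula, or weights.

```python
# Illustration only: all scores below are invented, and the two metrics and
# weights are assumptions chosen to show the arithmetic, not any real formula.

# Normalised scores out of 100 for two proxy metrics.
universities = {
    "University A": {"reputation_survey": 82, "student_staff_ratio": 30},
    "University B": {"reputation_survey": 81, "student_staff_ratio": 60},
    "University C": {"reputation_survey": 80, "student_staff_ratio": 90},
}

def rank(staff_ratio_weight):
    """Order institutions by a weighted sum of the two proxy metrics."""
    reputation_weight = 1.0 - staff_ratio_weight
    totals = {
        name: reputation_weight * scores["reputation_survey"]
        + staff_ratio_weight * scores["student_staff_ratio"]
        for name, scores in universities.items()
    }
    return sorted(totals, key=totals.get, reverse=True)

for weight in (0.02, 0.05, 0.10):
    print(f"student-to-staff ratio weighted at {weight:.0%}: {rank(weight)}")
```

With these made-up figures, University A comes first when the student-to-staff ratio carries 2% of the weight, but falls to last when it carries 5% or 10%. Nothing about the institutions has changed; only the arbitrary weighting has.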

What makes them neocolonial?
The rankings industry measures all institutions identically – regardless of their history, their resources, their context, or their specific goals. For example, rankings encourage all universities to chase academic publications rather than focusing on research impact or community engagement. Furthermore, the publications that really count in the rankings are those published in English by publishing houses in the Global North. Universities that chase a place in the rankings may end up prioritising activities that boost the rankings’ metrics at the expense of locally relevant projects.

What do you mean by ‘problematic business model’?
The rankings industry is a multibillion-dollar industry. Its money doesn’t come from the rankings as such, as these are published publicly. Rather, the industry makes money from selling the data that universities hand over. You might say that rankings are a vehicle to collect and sell data. Rhodes University is included in some rankings, despite refusing to hand over its staff, student, and institutional data. The rankings that include the university do so by drawing on publicly available data but fail to point out that the institution does not participate in this charade.

What universities have withdrawn from rankings?
As John Manning, dean of Harvard Law School, said: “It has become impossible to reconcile our principles and commitments with [their] methodology and incentives.” So, it is unsurprising that several universities have stated their decision not to participate. These include Utrecht University, the University of Zurich, Jawaharlal Nehru University, Stillman College, Universiti Sains Malaysia, the University of California, Berkeley, Harvard Medical School and the law schools of Harvard, Stanford, Georgetown, and Yale.

What about the new rankings, like the ‘Emerging Economies’ ranking or the ‘Sub-Saharan Africa’ ranking?
The newly developed rankings focused on the Global South reflect the expansion of the rankings industry into new markets. But these rankings all use the same unscientific processes of adding unrelated metrics that are often very poor proxies of the activity they purport to represent.

Isn’t it a risk to refuse to participate in rankings?
Many vice-chancellors agree ‘off the record’ that rankings are unscientific and have perverse consequences, but most are unwilling to withdraw their institutions. There are now over 40 kinds of university ranking, so it is possible to judiciously select the one in which a particular institution fares best for marketing purposes. So, yes, it is a risk to stay out of the game because much of the public mistakenly believes that the rankings reflect orders of quality. But I would suggest that in an era of science-skepticism, where questions are being asked about the value of public universities, it is very dangerous for institutions to be playing a game that fails the very standards of scientific research that universities themselves teach.

What do you mean ‘playing a game’?
The rankings industry gathers a particular set of proxy metrics, so universities that want to do well in the rankings spend increasing amounts of time and money focused directly on improving their scores. This is what is known as Goodhart’s Law: the moment a measurement becomes a target, it ceases to be a good measurement. There is a great deal of research on how universities around the world have ‘gamed’ the system, including by submitting dubious data. A common means of gaming the system in South Africa appears to be buying publications. Academics from elsewhere in the world are paid to include a South African institution as one of their affiliations on publications. These ‘research associates’ have little or no real connection to the country or to the institution concerned, but they are nonetheless counted in the data submitted to the rankings industry. Such gaming is encouraged by a system that pits universities against each other as competitors in a zero-sum game that harms staff, students, and ultimately the knowledge project.

Professor Sioux McKenna is a higher education researcher. The views expressed here are her own
and do not necessarily represent those of the university.
