By Prof SIOUX MCKENNA, director of the Centre for Postgraduate Studies at Rhodes University
Yale and Harvard law schools have both announced that they are withdrawing from the US News & World Report rankings. Harvard Law School dean John Manning said it had become “impossible to reconcile our principles and commitments with the methodology and incentives”.
“[The rankings provide] perverse incentives that influence schools’ decisions in ways that undercut student choice and harm the interests of potential students,” he explained.
Perhaps now, other universities will acknowledge that they have been participating in an unscientific and socially damaging billion-dollar game.
Thankfully, Rhodes — the university where I work — has consistently refused to participate in rankings because of the poor science underpinning them. It is the only university in South Africa to have taken this stance.
Many universities expend vast sums of money to improve their place on the rankings. And they make strategic decisions that favour rankings placement over transformation.
Participating in the rankings brings short-term benefits: most prospective students do not understand the industry’s problems, and so use the rankings to choose between institutions.
This makes it a bold and challenging decision for universities to stay out of them.
But staying out of them entirely is often beyond institutional control: ranking companies simply harvest whatever data they can find online, however flawed or incomplete, to rank institutions that refuse to participate voluntarily.
Taking part in these games while acknowledging that they are highly problematic, as some vice-chancellors in this country have done, displays a lack of integrity. Universities are knowledge institutions; we would expect them to refuse to participate in a scientifically problematic and socially unjust process.
Let’s hope they finally follow the science.
Four reasons why rankings are unscientific
Firstly, rankings combine unrelated measures to form a composite score.
The method of adding together entirely discrete items such as web presence, number of Nobel prize winners, and publication counts to get a score that supposedly represents quality is scientifically problematic.
What’s more, the selection of items is highly contentious, and the weighting of each item is entirely arbitrary. Universities invest vast sums to improve their position, but the truth is that if the weighting of any item is changed, the pack of cards rearranges itself.
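To see how fragile the resulting order is, here is a minimal sketch in Python (the institutions, indicators, scores and weights are entirely hypothetical): the same underlying data produces opposite orderings depending purely on the weights chosen.

```python
# Minimal sketch of a weighted composite score; all names, indicators,
# scores and weights below are hypothetical.
scores = {
    "University A": {"reputation": 0.9, "publications": 0.4},
    "University B": {"reputation": 0.5, "publications": 0.8},
}

def composite_ranking(weights):
    """Return institutions sorted by weighted composite score, best first."""
    totals = {
        name: sum(weights[ind] * value for ind, value in indicators.items())
        for name, indicators in scores.items()
    }
    return sorted(totals.items(), key=lambda item: item[1], reverse=True)

# Weight reputation heavily and University A tops the table (~0.75 vs ~0.59)...
print(composite_ranking({"reputation": 0.7, "publications": 0.3}))

# ...shift the weight to publications and University B overtakes it (~0.71 vs ~0.55).
print(composite_ranking({"reputation": 0.3, "publications": 0.7}))
```

Nothing about either institution changes between the two calls; only the arbitrary weights do.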
Secondly, many rankings include a heavy weighting for reputation.
But reputation is typically a reflection of income and marketing rather than educational quality. When respondents rate the reputation of universities worldwide, they are far more likely to recognise Oxford and Cambridge than some smaller, lesser-known institutions.
Reputation is arguably a measure of an institution’s money and history more than its quality. And given the impact of coloniality, it is also unsurprising that Global North universities will be more widely recognised than those in the Global South.
Thirdly, there is one set of criteria for all institutions.
Each ranking system uses a particular set of measures for all institutions, regardless of where each university is, its history, or the nature of its academic project. All universities are thus steered towards one set of priorities.
Generally, ranking systems privilege large universities that are research-intensive, especially where they include engineering and medical sciences.
And fourthly, the measurements are proxies, not the real thing.
Because the methodology rests on adding together numeric scores, complex realities must be expressed as simple numbers. But most aspects of educational quality cannot be neatly quantified, so proxies stand in for them, and some proxies have only a loose relationship with whatever they are meant to represent. A staff-to-student ratio, for instance, is commonly used as a proxy for teaching quality, yet it says nothing about what actually happens in lecture halls and classrooms.
Three reasons rankings are anti-transformation
First off, rankings ignore social aspects. Many university roles go unmeasured in international ranking systems: the extent of a university’s community engagement, for example, is rarely considered. How far the university welcomes a diverse student body or tackles local problems is nowhere included in the measurements.
Secondly, elite begets elite: wealthy institutions can charge high fees and be highly selective in their student body. An elite university’s excellent graduation rates therefore say more about whom it admitted than about what it taught them, so taking those rates as a measure of teaching quality requires a degree of scepticism.
And thirdly, they promote competition within a public good. Most students in South Africa attend public institutions subsidised by the taxpayer on the understanding that it is good for broader society to have universities and to have highly educated critical citizens.
But the divisive history of our country has left us with a highly uneven university sector. Sadly, rankings pit us against each other rather than draw us into more productive collaborations.
While ranking systems can indeed value collaborations, Global South universities seeking to rise up the ranks may be tempted to focus on collaborations with elite Global North institutions, which yield the measures the rankings reward, rather than on pan-African partnerships.
While any system pitting institutions against each other is a problem, some newer rankings, such as measures of the extent to which an institution contributes to the Sustainable Development Goals, may be worth considering as benchmarks for universities rather than as marketing tools.
Let’s hope that our universities start to demonstrate some discernment in the games they choose to play and how they choose to play them.
This article was first published by The Daily Maverick and republished by Rhodes University Communications.