
On Reduction and Approximations: University Rankings and Measuring Education


Ranking universities and higher education institutions is helpful in some respects. It might make it easier for (the perplexed) youth entering higher education to choose… Moreover, a competitive environment, promoted by university rankings, encourages creativity, innovation, and effort.

Nothing simplified, however, comes for free… “University Rankings” are an extreme approximation (reduction), and extreme reductions are dangerous.

University Rankings: What?

There are many widely available university rankings published by different institutions. These rankings aim to evaluate, quantify the impact of, and rank universities.

I’ve included here a few of the most popular ones, and some of the performance indicators they use.

Scrutinizing the measures

A recurring theme on this blog, and something that I frequently worry about, is the danger of ‘reduction into numbers’ (and other forms of reduction).

Sure, it is convenient. Sure, it makes it easier for ‘Big Ranking’ (did you think it was just ‘Big Banking’, ‘Big Pharma’, and ‘Big Tech’…?). Sure, it is good to be comfortable.

But what goes into this magical (rank) number?

Here’s a description of the methodology from THE (Times Higher Education):

THE Methodology for University Rankings

Reputation, Research, etc… you might say ‘but why?’.

Is this (or that) component of the measure, frequently given an arbitrary weight, actually important to the decision maker making choices based on the rankings…?

Scarier still (I know… I’m exaggerating): is ‘reputation’ really reputation? Is ‘research’ really research? By this I mean, whatever they called reputation – do you know precisely what that reputation means?

To you, ‘research’ might be research, but to someone else it might be a ‘reputation for research’ (see above), or ‘the ability of research to attract funding’. Likewise, something like ‘research influence’ can be – very easily – manipulated and twisted to point to something other than its original intention. Eventually, that 60% weight given to research becomes an elephant in the (class)room [I will show myself out].
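To make the weighting point concrete, here is a minimal sketch in Python. The universities, pillar scores, and weighting schemes are all made up for illustration (none of these are real THE figures); the point is that the same underlying scores produce a different ‘winner’ the moment the arbitrary weights change:

```python
# Minimal sketch: a composite ranking score is just a weighted sum,
# so changing the (arbitrary) weights can reorder the whole table.
# All universities, scores, and weights below are invented.

universities = {
    "University A": {"teaching": 90, "research": 60, "reputation": 55},
    "University B": {"teaching": 55, "research": 95, "reputation": 80},
}

def composite(scores, weights):
    """Weighted sum of pillar scores; weights are assumed to sum to 1."""
    return sum(weights[p] * scores[p] for p in weights)

# Two equally 'plausible' weighting schemes.
research_heavy = {"teaching": 0.20, "research": 0.60, "reputation": 0.20}
teaching_heavy = {"teaching": 0.60, "research": 0.20, "reputation": 0.20}

for label, weights in [("research-heavy", research_heavy),
                       ("teaching-heavy", teaching_heavy)]:
    ranked = sorted(universities,
                    key=lambda u: composite(universities[u], weights),
                    reverse=True)
    print(label, "->", ranked)

# research-heavy -> ['University B', 'University A']
# teaching-heavy -> ['University A', 'University B']
```

Same data, opposite verdicts – the ‘rank’ is as much a property of the chosen weights as of the universities themselves.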

Here’s the methodology from another ranking:

US News University Rankings Methodology

Malcolm Gladwell, in “I hate the Ivy League” (link; I will probably share something later on the importance of making university education a more democratic and accessible undertaking), describes, with entertaining, Hitchcock-level suspense, his foray into discovering what goes into these rankings.

All forms of surveying in this regard should be taken with a grain of salt, but the “Peer Reputation” variable was a particularly funny component of the US News ranking, where some of the survey responses were made with a ‘my cousin went there’ level of familiarity (a nice discussion from Reed College can be seen here). This ‘Reputation’ variable, present in many of the rankings, has a ‘circularity’ stink to it [your reputation gets you a good evaluation, which gets you a good reputation, which gets you a good evaluation…].
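To see that circularity in action, here is a hypothetical toy simulation. All the numbers and coefficients below are my own invented assumptions, not anyone’s actual methodology; the only point is the lock-in dynamic that appears when reputation feeds the evaluation and the published evaluation feeds next year’s reputation:

```python
# Hypothetical sketch of the reputation feedback loop:
# reputation drives the evaluation, and the published evaluation
# drives next year's surveyed reputation. All numbers are invented.

quality = {"Old Famous U": 70, "Young Strong U": 85}      # underlying 'true' quality
reputation = {"Old Famous U": 95, "Young Strong U": 40}   # inherited prestige

for year in range(10):
    # The peer survey leans heavily on existing reputation.
    evaluation = {u: 0.9 * reputation[u] + 0.1 * quality[u] for u in quality}
    # Next year's reputation drifts halfway toward the published evaluation.
    reputation = {u: 0.5 * reputation[u] + 0.5 * evaluation[u] for u in quality}

print({u: round(r, 1) for u, r in reputation.items()})
# -> {'Old Famous U': 85.0, 'Young Strong U': 58.1}
# A decade later, the lower-quality incumbent still 'wins' on reputation.
```

In this toy model the loop does eventually track true quality, but painfully slowly – which is exactly the lock-in that a circular variable produces.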

Academic Publishing (Research) and University Rankings

I’ve wanted to write on this for a while, but a recent event motivated me.

Can you imagine a scientist publishing 700 research papers in 13 years? That’s 4.5 papers a month. More than one per week! Then he gets suspended..!?
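For scale, a trivial back-of-the-envelope check of those rates, using only the two numbers quoted above:

```python
# Back-of-the-envelope check of the publication rate quoted above.
papers, years = 700, 13
per_year = papers / years      # ~53.8 papers a year
per_month = per_year / 12      # ~4.5 papers a month
per_week = per_year / 52       # ~1.04 papers a week
print(f"{per_year:.1f}/yr, {per_month:.1f}/mo, {per_week:.2f}/wk")
# -> 53.8/yr, 4.5/mo, 1.04/wk
```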

(This is from a Twitter thread I wrote: https://twitter.com/_AhmadHijazi/status/1643973428792401922?s=20 )

wionews.com: Spanish chemist Rafael Luque, regarded as one of the most cited scientists in the world, was suspended without pay by the University of Córdoba after working there for 13 years (as reported by El País).

Legendary chemist, and one of the world’s most cited scientists, Rafael Luque was suspended from his position at the University of Córdoba for bypassing the university’s cooperation policy.

The real story however isn’t about cooperation policies. It is about university rankings and prolific academic authors.

“Without me, the University of Córdoba is going to drop 300 places [in the Shanghai ranking],” he said. “They are envious and mediocre people.” The researcher has published 700+ articles over a 13-year career – more output than whole universities can produce.

Imagine this (and it might be true): one man, in the chemistry department, has a 300-place impact on a university, where – maybe – you have decided to study business… because reputation.

[Image – yes, this picture is strategically placed here.]

Oh but wait.

He isn’t raising the ranking of just one university. Being larger than university/life, the superstar has three universities (in Spain, Russia, and Saudi Arabia) that list him as part of their teams (apparently there were talks with more in Iran, but it isn’t clear whether that happened).

University Rankings depend on published research. Academic careers depend on published research. Logical conclusion: Let’s toy with published research… who reads (most of) it anyway?

Many in academia crack bitter jokes about the true value of their research. Many people who’ve been through doctoral programs know this, and even sometimes talk about ‘incest’, where a big portion of this research is consumed solely by other academics creating more research (does it smell like a Ponzi scheme?).

Everyone abuses the system (the university rankings system). Academics abuse it with their publishing: they publish more research, and get it done faster, focusing on research that is ‘easy to get accepted’ by editors. I’ve also heard of citation agreements (I cite you, you cite me), and alliances in forming teams of authors are more prevalent than ever (so many research papers now have teams of 5 or 10 authors; call me old-fashioned and introverted…).

I think that the gigantic academic effort to manipulate the ranking system (and get promoted) frequently comes at the expense of insight and meaning!

Oh, and at the expense of one more small thing that universities sometimes do: teaching!!!

Conclusion: Ranking Universities and the University Mission

My problem is not with the university rankings themselves. My problem is with the great avalanche of unintended effects (and misery) caused by the innocent approximation: reducing the university to one number (a rank), and then reducing this number to a ‘good university’ / ‘bad university’ label.

The ranking system suddenly gains importance, and academics and universities spend disproportionate amounts of effort trying to manage it.

Is this an example where economist-ic thinking, with its rudimentary numerical knowledge, is corrupting evaluation? I will not say that, because I’m not sure what a good alternative might be. But the abuse is clear to anyone who examines this.

Tragically, universities sometimes abuse the system by prioritizing ‘ranking manipulation’ over the sacred mission of spreading knowledge. They neglect the true value of research and teaching, and the impact they can make on people’s lives… They go on boring badge-collection missions.

“They just bought me first-class airline tickets and luxury hotel stays. I didn’t take money,” says the research Messi. Even though many hard-working teachers and educators suffer financially, I think the problem is nowhere near here…

I have discussed the effects of approximate thinking on the institution of science in greater detail in “Fuzzy on the Dark Side: Approximate Thinking” (Amazon Link).

The problem is with the structure of education and the scientific system (the two are symbiotes and frequently hard to tell apart). Many ‘scientists’ and ‘experts’ are just that: ranking manipulators and gatekeepers!

Society respects scientists… but it frequently doesn’t know what ‘scientist’ specifically (and practically) means.

The lure of using a simple number to escape complexity is obvious. But without awareness of what gets thrown away with the approximation, the dangers spiral out of control: the system becomes a monster and gains a life of its own.

More on Ranking, Reduction to Numbers, and Approximations:

I’ve written about another area in which rankings can be significantly problematic – Ranking National Innovation. Just like here, multiple levels of approximate thinking can have unintended consequences, and bad measures can create circular loops of ‘beat-the-point’ variables.
