Sunday, October 06, 2013

Global Rankings: Why Do Indian Policy Makers Take Them Seriously?


Over at University Ranking Watch, Richard Holmes makes a point that sounds quite plausible to me: a few high-profile, highly cited papers can make a huge difference to an institution with a small research base. He cites two examples from this year's THE-TR rankings, Tokyo Metropolitan University and Panjab University, and suggests that their high-profile papers are possibly related to the LHC collaborations.

In an op-ed on the inconsistencies in global ranking exercises (which he has posted on his blog along with data tables), Prof. Gautam Barua, former director of IIT-Guwahati, echoes this view:

The high scores of Panjab and IIT-G vis-à-vis IIT-D could be explained by this. Panjab University's high energy physics group (and to a lesser extent IIT-G's) is part of global experiments at CERN and Fermi Labs, and papers from that project have very high citations. Thus, a small group of international collaborations is providing a high score. Isn't the median number of citations per faculty a better measure than the average? (There are other issues; for example, citations in the sciences are usually much higher than in engineering.)
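
Barua's median-vs-average point is simple arithmetic, and a toy calculation makes it vivid. Here is a minimal sketch in Python; the citation counts below are invented for illustration and are not actual figures for any university:

    import statistics

    # Hypothetical citations-per-faculty counts for a small department:
    # most members have modest counts, while two belong to a large
    # international collaboration whose papers are cited in the thousands.
    citations = [4, 7, 10, 12, 15, 18, 22, 30, 2500, 3100]

    print("mean:  ", statistics.mean(citations))    # 571.8 -- dominated by the two outliers
    print("median:", statistics.median(citations))  # 16.5  -- the typical faculty member

The mean rewards a handful of mega-collaboration papers; the median tracks the typical faculty member, which is exactly Barua's point.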

Global ranking exercises like THE-TR and QS rely on pretty dubious measures, including something called the reputation survey. Even on the so-called objective measures (such as citation metrics, which come with their own problems), they have screwed up -- remember Alexandria? Thanks to folks like Richard Holmes, we know how their "mistakes", corrections, and flip-flops have led to wild fluctuations in the ranking fortunes of the University of Malaya over the years.

When a bunch of money-grubbing entities come along and tell the world that they will rank universities across the globe (irrespective of the vast differences among them), and end up doing a demonstrably shoddy job of it year after year, shouldn't we laugh them off the stage?

No! We treat them like they are superstars.

We welcome them to our living room, and have a tête-à-tête in which we ask them to "educate" us on what we need to do to get more Indian institutions in their top 200 or top 400 or whatever.

And we give their top-400 lists a privileged position in our higher-ed policies.

Forget about growing a spine -- it's time people grew some self-respect.

5 Comments:

  1. Neelima said...

    I suppose you are saying that the playing field isn't level. In response, I would say: if the playing field isn't level, and you have to play (which we do), you learn strategies for playing on a non-level field. Panjab University appears to have learnt them! Congrats, friends!

    The second comment comes from a friend, who says, "if the playing field were level, would you win?"

  2. Anonymous said...

    @Neelima: Prof. Abinandan is not saying that "the playing field isn't level". He is saying that the game is not worth playing, and the evidence put forth by Holmes shows that he is absolutely right. The MHRD, if it really wants to improve the state of Indian universities, should ignore these incompetent ranking agencies and work harder to evolve valid, indigenous criteria for what constitutes a "good university".

  3. L said...

    If governments would just look at universities as places where young people must be educated to make them into contributing citizens of the country and of the world, we would do a lot better. On the one hand, we aim at world rankings; on the other, we want to achieve a target of having X number of graduates in the state / Y number of PhDs. So the aim is that X number of people must graduate this year -- never mind if they can barely read and write -- but, in parallel, we must also achieve world standards! These two objectives are mutually incompatible.

  4. Ankur Kulkarni said...

    Abi, it does not matter whether the ranking methodologies have any substance to them. What matters is (a) whether those who matter to us -- the Indian public, potential students, the Indian government, academia at large -- take these agencies seriously, and (b) whether we have it in us to debunk them in the eyes of those listed in (a). Particularly important here are students: if students take rankings seriously and we cannot debunk the rankings conclusively, then we have to perform well on them. Otherwise, students will go to other universities with higher ranks, even if those universities may not be as good as ours. This is unfortunately the reality, and we have to do one thing or the other -- either excel at the rankings, however imperfect they are, or pull off a strong campaign to debunk them. It is in our interest to do so.

  5. Anonymous said...

    Speaking only for myself, I have a much better estimate of IISc Bangalore than I would have had if (i) I were to rely only on some official report or another (whether Indian or foreign), and (ii) its faculty and students were not allowed to maintain their personal Web sites and blogs.

    The greatest fallacy about rankings is the belief that one (or a few) of them can properly serve all the needs of all the people. IMO, the doubts raised concerning methodologies (e.g., how do you go about assessing the "reputation" parameter?) are, comparatively, less troublesome.

    Solution?

    A few years ago, PhDs.org used to maintain a relatively simple system for ranking programs/universities. It would allow the Web site visitor to select the various parameters, assign his own weightages to them, and use these to arrive at the kind of rankings he wanted. ... They still have a system, but it no longer generates a list of clearly ordered rankings. There are too many irrelevant details like medians and all... They have messed up a neat idea by overworking it.

    IMO, a system like the one PhDs.org used to have is the best solution.

    For instance, if I were an Indian student looking for PG admissions in the USA, the size of the program would matter, but not as much as the department's citation statistics. The availability of scholarships to international applicants would be the most important criterion, and encouragement of campus diversity (e.g., comparative studies of the Mayan and the Greek cultures on an equal footing) wouldn't be a criterion at all. Also, in comparison to the money available to international students, the ability to complete a PhD in 3 years wouldn't matter at all---not at the time of entry, anyway.

    On the other hand, if I were an IIT director, the number of years taken to complete a PhD might be a parameter of interest to me, to help me benchmark my faculty better. Also citations per $ per faculty member, etc.

    So, what PhDs.org was doing was a step in the direction of the best possible option. Give some raw data to people, let them choose the parameters they want, apply a weighting scheme the way they want, and arrive at a ranking suited to their one particular purpose.
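
    A minimal sketch of the mechanics, in Python (all the university names, parameters, and numbers below are invented, purely to show user-chosen weightages at work):

        # Toy custom-ranking scheme: the user picks the parameters and the
        # weights; the system normalizes each parameter across universities
        # to [0, 1] and computes a weighted score. All data here are made up.

        raw_data = {
            "Univ A": {"citations_per_faculty": 120, "intl_scholarships": 40, "phd_years": 5.5},
            "Univ B": {"citations_per_faculty": 300, "intl_scholarships": 10, "phd_years": 4.0},
            "Univ C": {"citations_per_faculty": 80,  "intl_scholarships": 60, "phd_years": 6.0},
        }

        # Weights a student like the one above might choose: scholarships
        # matter most, PhD duration not at all. (A "lower is better"
        # parameter such as phd_years would need its normalized value
        # inverted; with a zero weight it is simply ignored here.)
        weights = {"citations_per_faculty": 0.4, "intl_scholarships": 0.6, "phd_years": 0.0}

        def normalize(param):
            values = [d[param] for d in raw_data.values()]
            lo, hi = min(values), max(values)
            return {u: (d[param] - lo) / (hi - lo) for u, d in raw_data.items()}

        def rank(weights):
            norm = {p: normalize(p) for p in weights}
            scores = {u: sum(w * norm[p][u] for p, w in weights.items())
                      for u in raw_data}
            return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

        for univ, score in rank(weights):
            print(univ, round(score, 2))  # Univ C comes first for this user's weights

    An IIT director, applying a different weight vector to the same raw data, would get a different ordering---which is exactly the point.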

    It was quite some time ago, maybe a decade ago, that certain services sprang up on the 'net that would allow you to launch your own newspaper site. All you had to do was customize the layout and choose from the news-feeds (and even opinion pieces). And you could write your own pieces (like editorials) as well. ... Instead of wasting your time advising the editors of the existing newspapers, you could create your own newspaper.

    The idea of custom-made rankings is, relatively speaking, far more relevant, more practical, and, overall, better.

    If a system to create rankings were available, I would publish my own rankings, at my blog.

    --Ajit
    [E&OE]