ERA 2012 – what does it mean?
It seems that the results of the 2012 ERA research assessment exercise are about to be released, which is causing some people around the University more than a little nervous anticipation.
If you recall, this is an exercise where the fields of research within each Australian university are evaluated, essentially according to various criteria involving total research output, the numbers of papers in prestigious journals and the numbers in low-ranked journals, all weighted in some opaque fashion. Nobody actually knows what the results will be used for, which accounts for a fair proportion of the nervousness.
So what will the results mean?
Unfortunately, I fear that our old friend, Goodhart’s Law, to which so much university evaluation falls prey, has swung into operation so forcefully that the ERA 2012 results will be almost uninterpretable. The original ERA was simply intended to give the government, and indeed the universities, some measure of the quality of the research that they were funding and producing respectively. Research from each university was allocated to “discipline codes” (Pure Maths, Applied Maths, Stats, etc.) and each discipline was evaluated accordingly. Of course, as soon as the results were released, the media, aided and abetted by the universities, compiled these into “league tables”, allowing universities to brag about how many of their disciplines scored 5, or what proportion of their disciplines were above average, or whatever statistic painted them in the best possible light.
And wherever there are “league tables”, manipulation and game-playing take over, and completely dominate whatever erstwhile valuable purpose motivated the collection of the data.
I’m slightly ashamed to recall that after ERA 2010, I was sufficiently pleased that Pure Maths at UWA scored a 5 (in fact, I blogged about it) that I let my normal (hopefully healthy) scepticism about any “ranking” temporarily subside. However, my scepticism is now back in full, to the extent that I don’t think the ERA 2012 results will be at all meaningful. But I have to say this before the results are announced, because if I said it afterwards, I would be accused of sour grapes in the event that Pure Maths no longer gets a 5 (which I don’t think it will).
So, how can this seemingly-simple process of evaluating the research from each discipline be so easily gamed?
Some techniques are breathtakingly transparent – pay highly-cited productive researchers who don’t actually work at your university a modest amount of money to claim that they actually do work at your university, then call them “part-time” or “adjunct”. (This tactic is not limited to the Australian ERA; witness the bulk purchase of highly-cited researchers by King Abdulaziz University.)
Others are more subtle, such as manipulating the fields into which research is allocated. A general journal, especially in an area like medical research, might have multiple discipline codes (Epidemiology, Statistics, Public Health) or, closer to home, a generalist Maths journal might have discipline codes for both Pure Maths and Applied Maths. The ERA rules allow universities to allocate any paper in that journal to any of its discipline codes, regardless of whether this accords with the actual subject matter of the paper. So, within these parameters, the name of the game is to shuffle the papers around to maximise some objective. If you want to avoid getting poor scores for any discipline, then shuffle the papers in lower-ranked journals to a discipline code whose total quantity is below the measurement threshold (and therefore won’t get measured). If you have a very strong discipline that will easily get a 5 and more, then use just enough of their papers to bolster other disciplines that you guess are close to a boundary, but of course don’t use so many of them that the original discipline drops back down!
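To make the shuffling game concrete, here is a toy greedy allocation in the spirit of the tactics described above. Everything in it is invented for illustration: the discipline codes, the journal ranks, and the measurement threshold bear no relation to the actual ERA rules or scoring, which are far more involved.

```python
from collections import defaultdict

# Hypothetical cut-off: in this sketch, a discipline code with fewer than
# THRESHOLD papers is "below the measurement threshold" and never scored.
THRESHOLD = 3

def allocate(papers):
    """Each paper is (title, rank, eligible_codes), where eligible_codes are
    the discipline codes carried by the paper's journal -- any of which the
    university may claim, regardless of the paper's actual subject matter.
    Greedy rule: concentrate good papers in one code, then park the
    low-ranked ('C') papers in the emptiest eligible code, hoping it stays
    below THRESHOLD and so is never measured."""
    counts = defaultdict(int)
    assignment = {}
    # First pass: pile the good papers into the best-stocked eligible code.
    for title, rank, codes in papers:
        if rank != "C":
            code = max(codes, key=lambda c: counts[c])
            assignment[title] = code
            counts[code] += 1
    # Second pass: hide low-ranked papers in the least-populated code.
    for title, rank, codes in papers:
        if rank == "C":
            code = min(codes, key=lambda c: counts[c])
            assignment[title] = code
            counts[code] += 1
    return assignment, dict(counts)

papers = [
    ("p1", "A", ["Pure Maths", "Applied Maths"]),
    ("p2", "A", ["Pure Maths", "Applied Maths"]),
    ("p3", "A", ["Pure Maths"]),
    ("p4", "C", ["Pure Maths", "Applied Maths"]),
    ("p5", "C", ["Pure Maths", "Applied Maths"]),
]
assignment, counts = allocate(papers)
# The three A-ranked papers are claimed for Pure Maths (3 >= THRESHOLD, so
# it is assessed, and only on good papers); the two C-ranked papers land in
# Applied Maths (2 < THRESHOLD), which conveniently never gets measured.
```

Even this crude greedy rule shows how the reported scores come to reflect the shuffling strategy rather than where the research was actually done.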
Of course, even when done “straight”, it’s hard to interpret the results of the ERA, because the discipline codes rarely match up with university departments, and some people working in multidisciplinary areas essentially have their whole output diffused across multiple disciplines. But when you combine these inherent difficulties with the gaming of the system, I fear that none of the numbers reported in ERA 2012 will have any real meaning, except as a testament (or otherwise) to the skill and intuition of the people shuffling the papers.
So, while I don’t expect Pure Maths to get a 5 this time round, I also don’t think it will mean anything whether it does or not, and I will do my best to ignore the whole thing. But of course, the university sector is addicted to numbers — even if a number is demonstrably meaningless (such as counting papers in the “pay to publish” era), many otherwise intelligent people will argue fervently that any precise number is worth using.
And as many people expect the results to be used to determine research funding, I may well be unable to ignore it after all.