ERA 2012 – what does it mean?

It seems that the results of the 2012 ERA research assessment exercise are about to be released, which is causing some people around the University more than a little nervous anticipation.

If you recall, this is an exercise in which the fields of research within each Australian university are evaluated, essentially according to various criteria involving total research output, the number of papers in prestigious journals and the number in low-ranked journals, all weighted in some opaque fashion. Nobody actually knows what the results will be used for, which accounts for a fair proportion of the nervousness.

So what will the results mean?

Unfortunately, I fear that our old friend, Goodhart’s Law, to which so much university evaluation falls prey, has swung into operation so forcefully that the ERA 2012 results will be almost uninterpretable. The original ERA was simply intended to give the government, and indeed the universities, some measure of the quality of the research that they are respectively funding and producing. Research from each university was allocated to “discipline codes” (Pure Maths, Applied Maths, Stats etc.) and each discipline was evaluated accordingly. Of course, as soon as the results were released, the media, aided and abetted by the universities, compiled these into “league tables”, allowing universities to brag about how many of their disciplines scored 5, or what proportion of their disciplines were above average, or whatever statistic painted them in the best possible light.

And wherever there are “league tables”, manipulation and game-playing take over, and completely dominate whatever erstwhile valuable purpose motivated the collection of the data.

I’m slightly ashamed to recall that after ERA 2010, I was sufficiently pleased that Pure Maths at UWA scored a 5 (in fact, I blogged about it) that I let my normal (hopefully healthy) scepticism about any “ranking” temporarily subside. However, my scepticism is now back in full force, to the extent that I don’t think the ERA 2012 results will be at all meaningful. But I have to say this before the results are announced, because if I said it afterwards, I would be accused of sour grapes in the event that Pure Maths no longer gets a 5 (which I don’t think it will).

So, how can this seemingly-simple process of evaluating the research from each discipline be so easily gamed?

Some techniques are breathtakingly transparent – pay highly-cited productive researchers who don’t actually work at your university a modest amount of money to claim that they actually do work at your university, then call them “part-time” or “adjunct”.  (This tactic is not limited to the Australian ERA; witness the bulk purchase of highly-cited researchers by King Abdulaziz University.)

Others are more subtle, such as manipulating the fields into which research is allocated. A general journal, especially in an area like medical research, might have multiple discipline codes (Epidemiology, Statistics, Public Health) or, closer to home, a generalist Maths journal might have discipline codes for both Pure Maths and Applied Maths. The ERA rules allow universities to allocate any paper in that journal to any of its discipline codes, regardless of whether this accords with the actual subject matter of the paper. So, within these parameters, the name of the game is to shuffle the papers around to maximise some objective. If you want to avoid getting poor scores for any discipline, then shuffle the papers in lower-ranked journals to a discipline code whose total quantity is below the measurement threshold (and therefore won’t get measured). If you have a very strong discipline that will easily get a 5 and more, then use just enough of its papers to bolster other disciplines that you guess are close to a boundary, but of course don’t use so many of them that the original discipline drops back down!
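The shuffling game above can be sketched as a toy program. Everything in it — the paper data, the journal ranks, the discipline codes and the assessment threshold — is invented for illustration; the real ERA rules and thresholds are considerably more involved.

```python
# Toy sketch of the paper-shuffling game. All names, ranks and the
# threshold below are hypothetical, not the real ERA rules.

THRESHOLD = 3  # pretend: codes with fewer outputs than this are not assessed

# Each paper carries the discipline codes its journal is listed under,
# plus the journal's rank ("A*" down to "C").
papers = [
    {"id": 1, "codes": ["PureMaths", "AppliedMaths"], "rank": "A*"},
    {"id": 2, "codes": ["PureMaths", "AppliedMaths"], "rank": "C"},
    {"id": 3, "codes": ["PureMaths", "AppliedMaths"], "rank": "C"},
    {"id": 4, "codes": ["PureMaths"], "rank": "A*"},
]

def allocate(papers, weak_code, strong_code):
    """Greedy shuffle: park low-ranked papers in a code meant to stay
    below the threshold; keep high-ranked papers in the showcase code."""
    allocation = {}
    for p in papers:
        if len(p["codes"]) == 1:
            allocation[p["id"]] = p["codes"][0]   # no choice for this paper
        elif p["rank"] in ("B", "C"):
            allocation[p["id"]] = weak_code       # hide it below the threshold
        else:
            allocation[p["id"]] = strong_code     # bolster the showcase code
    return allocation

alloc = allocate(papers, weak_code="AppliedMaths", strong_code="PureMaths")
# Both C-ranked papers land in AppliedMaths, which stays below the
# (pretend) threshold of 3 and so is never assessed at all.
```

In practice the objective is murkier than this greedy rule suggests, since bolstering one borderline discipline can drag another below a grade boundary — hence the scope for “skill and intuition” mentioned later.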


Of course, even when done “straight”, it’s hard to interpret the results of the ERA, because the discipline codes rarely match up with university departments, and some people working in multidisciplinary areas essentially have their whole output diffused across multiple disciplines. But when you combine these inherent difficulties with the gaming of the system, I fear that none of the numbers reported in ERA 2012 will have any real meaning, except as a testament (or otherwise) to the skill and intuition of the people shuffling the papers.

So, while I don’t expect Pure Maths to get a 5 this time round, I also don’t think it will mean anything whether it does or not, and I will do my best to ignore the whole thing. But of course, the university sector is addicted to numbers: even if a number is demonstrably meaningless (such as counting papers in the “pay to publish” era), many otherwise intelligent people will argue fervently that any precise number is worth using.


And as many people expect the results to be used to determine research funding, I may well be unable to ignore it after all.


3 thoughts on “ERA 2012 – what does it mean?”


  1. Yes, one of my papers is in the “applied maths” code this time round as part of the gaming. But what is more worrying is what universities do with the results. I know one Australian university that has used ERA 2010 scores to apportion postgraduate scholarships, and that also uses the rankings in determining internal small grants. I heard that one of its schools that scored a 5 was rewarded with a new $94M building!

    1. However, the scores are not awarded to “schools” but to “4-digit field of research codes”. Pure Maths at least has the property (modulo game playing) that pretty much everything in the Pure Maths code was done by people in our school. But for other areas, the score attributed to a particular FoR might have involved papers from dozens of different schools. But, as we know, even if an assessment is known to be inappropriate for a particular purpose, due to the way the data is collected or aggregated or whatever, and it is firmly and publicly labelled as “Not to be used for purpose X”, then half the universities will immediately use it for purpose X.

  2. Until 2008, we had in Britain a “Research Assessment Exercise” or RAE, which did what it said on the tin, and determined how much baseline research funding universities got from the Higher Education Funding Council (based on the product of a factor depending on the grade, a volume factor for the number of people submitted, and a factor for the cost of research in that subject). Game-playing was limited to leaving out staff to achieve a higher grade, which could be counterproductive. Then the money was cut off and research support funding was put into overheads from the research councils. The whole thing should have been stopped at that point, but it wasn’t – these things gain a momentum of their own. Now it has been re-labelled the “Research Excellence Framework” or REF, a name which is of course meaningless; it is just for game-playing, and departments are less in control of their submissions, which are re-written by the administration.

    I’ll keep my fingers crossed for you!
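The commenter’s description of the old RAE formula — a grade factor times a volume factor times a subject cost weight — can be written down directly. The numerical weights below are invented placeholders, not the actual HEFCE figures, which varied from year to year.

```python
# Toy version of the RAE funding formula described in the comment above:
#   funding = grade factor x staff volume x subject cost weight.
# All weights here are invented for illustration.

GRADE_FACTOR = {"5*": 4.0, "5": 3.0, "4": 1.0, "3a": 0.0}          # hypothetical
COST_WEIGHT = {"lab": 1.6, "intermediate": 1.3, "classroom": 1.0}  # hypothetical

def rae_funding(grade, staff_submitted, cost_band, unit=1000.0):
    """Baseline research funding for one submission (arbitrary units)."""
    return GRADE_FACTOR[grade] * staff_submitted * COST_WEIGHT[cost_band] * unit

# The "leave staff out" gamble: dropping staff shrinks the volume factor,
# so it only pays off if the grade actually improves.
#   rae_funding("4", 10, "classroom") -> 10000.0  (all staff, grade 4)
#   rae_funding("5", 8, "classroom")  -> 24000.0  (gamble succeeds)
#   rae_funding("4", 8, "classroom")  -> 8000.0   (gamble fails: less money)
```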
