The ARC, the ERA and the EJC

March 30, 2010

The ERA, or Excellence in Research for Australia, is a government attempt to measure the quality of research being done in Australia, for as-yet-unspecified reasons. It is being run by the Australian Research Council (ARC).

Rather than go to the difficulty and expense of getting people to actually read papers (as they did in Britain’s RAE), they decided to try to rank every journal – not just in mathematics – and then judge each paper by the journal it appears in, rather than on its own merits (though see the footnote below). It’s a bit like judging a person by the school they went to, rather than by who they are, but it’s probably a little better than the previous method of simply counting all papers. Maybe it’s not even that different to how we rate mathematicians in fields distant from our own? In any case, like it or not, ignoring the ERA is not an option for someone in my position.

The ranking itself is fairly coarse-grained and was intended to put all the journals in one discipline (such as mathematics) into A* (top 5%), A (next 15%), B (next 30%) and C (the rest). Of course this creates immense problems for multi-disciplinary journals (and subsequent problems for inter-disciplinary research) and even within a discipline there can be endless arguments about the boundary cases.  For mathematics, the job of ranking the journals was partly undertaken by the Australian Mathematical Society and their rankings seemed pretty much as I would have expected, at least for the few journals I know about (that is, few relative to the hundreds ranked).

But then something went screwy with The Electronic Journal of Combinatorics. This is a well-respected online journal in combinatorics that has been going for 16 years and boasts a stellar editorial board including such people as László Lovász, the current President of the International Mathematical Union. When the AustMS did the rankings, it was given an A rating, so when I got my list of papers to check, I was surprised to see that both of my EJC papers were ranked in the bottom C category!

So what happened? It turns out that from 1994 to 2009, a printed version of the journal was also published (by a commercial publisher) under the name “Journal of Combinatorics”. Most people probably didn’t even know or think about it, and certainly very few people cited papers in the “reprinted” version of the journal, and so (despite identical content) it was given a C rating. However, it appears that the “Journal of Combinatorics” had two ISSNs, including the one used by the EJC, and so when the ARC received the final spreadsheet they were confronted by two journals with the same ISSN.

Now the plot thickens. There is an obvious fix to this problem: simply remove the Journal of Combinatorics from the list as “not a real journal”. Or even leave it in the list, but remove its second ISSN that clashes with that of the EJC. But, for some unfathomable reason, the ARC has decided to remove the A-rated Electronic Journal of Combinatorics from its list and just leave the C-rated reprint. Worse still, the C-rated copy gets to keep both ISSNs, and so every paper in the EJC now gets ranked C by the ARC. I’d much rather the papers disappeared entirely from my record than drag down the average ranking.

The CMSA (Combinatorial Mathematics Society of Australasia) has repeatedly pointed out this problem, but the ARC simply refuses to do anything about it, although it would take 10 seconds on a spreadsheet to change. So in the most significant research assessment exercise ever undertaken in Australia, the ARC is deliberately choosing to perpetuate a trivially-fixed error. Why? What is the point of being so obstructive? Why bother going through a long and expensive journal ranking exercise just to deliberately refuse to fix an error at the last hurdle?

So what, you might say. Isn’t this just another piece of evidence that the assessment exercise is bureaucracy so shameless that it doesn’t even pretend to be accurate?

I’d tend to agree, except that our Faculty has decided that in order to be considered “research-active” and hence eligible for study leave, only papers published in A/A* journals count. Presumably swapping two As for two Cs will push some people under the bar. I don’t think I’ll be in that category, but I’d sure hate to lose study leave because some bloody-minded bureaucrat has decided that the EJC does not actually exist!

Footnote: It seems that a sample of papers in some disciplines will be individually reviewed, though it is not clear if this augments, supplants or cross-checks the main process.

4 Comments
  1. Philip Brooker
    April 1, 2010 8:20 am

    As far as I can see, there is no provision on the list of journals for refereed conference proceedings (please set me straight if I’m wrong), or maybe another list will eventuate to cover conference proceedings. Last year I was invited by my supervisor to submit a paper to the proceedings of one of the peak conferences in my field (he was one of the editors), but I had to decline on account of having to worry about ARC journal rankings. Otherwise I’d have been happy to. I’m guessing that other people might have been similarly discouraged from publishing work in conference proceedings (except maybe survey articles, which might be hard to place otherwise).

    • Gordon Royle
      April 1, 2010 9:17 am

      In Computer Science, where conference publication is the norm, they have ranked lists of conferences, which is even harder to do than ranking journals. I don’t know of any ranked mathematics conferences, but we are lucky that attending and presenting at conferences is largely independent of publishing papers. The close association in CS between paying an expensive registration fee and getting another paper is a blatant conflict of interest, and in many cases comes perilously close to “pay to publish”.

  2. Jerry Vanclay
    March 29, 2011 11:21 am

    There are plenty of other problems with the ERA ranking in other disciplines too. Here’s an analysis of some of the more glaring problems… http://dx.doi.org/10.1016/j.joi.2010.12.001

    • Michael Giudici
      April 7, 2011 3:59 pm

      There are also many problems with using citation statistics to assess research. One notable example is that outlined in the paper “Nefarious Numbers”, published recently in the Notices of the AMS.
