Serendipity smiles on FluffyScience. I said I wanted to talk about the peer review scandal and the very next day I’m asked to act as the peer reviewer for the first time! If I ever needed an excuse to talk about peer review, I certainly have it now.
The peer review scandal everyone has been talking about is this: 120 computer-generated conference papers were withdrawn by Springer, including a delightful-sounding paper about an empathetic spreadsheet. It sounds shocking. But it’s probably not quite as shocking as you think it is.
Let me explain using my own first-hand knowledge of peer review. I don’t pretend to be an expert by any stretch of the imagination, but by fortunate happenstance I have enough publications, and enough different kinds of publications, to walk you through the process.
Have you checked out my Google Scholar profile? The link’s over on the right-hand sidebar. Or just click here. You’ll see that there are currently five items in my profile. You could use Web of Knowledge instead, but that requires an academic log-in, doesn’t update as quickly and isn’t as comprehensive in its record keeping. Therefore we’ll stick with Google.
All five of those publications have a Digital Object Identifier (DOI), which basically means they have a permanent presence online. And they’re all linked to my name. The top three publications on that list are scientific papers. There’s MacKay et al 2012, MacKay et al 2013 and MacKay et al 2014. Then there’s a conference proceedings and a book chapter about the video game Halo (that’s another story).
Therefore I have publications in three categories: Papers, Conference Proceedings and Book Chapters. Now. Which publication wasn’t peer reviewed?
If you guessed the video game chapter, you’re correct. Video games, much as I love them, are not known for their rigorous peer review. You might be surprised to find that the Conference Proceedings (this one for the 2012 British Society of Animal Science, or BSAS, conference in Nottingham) were, in fact, peer reviewed.
I’ve presented at two International Society for Applied Ethology (ISAE) conferences, two BSAS conferences, and two regional ISAE conferences. For all six conferences I have had to submit an abstract, a short description of what I intend to talk about. For BSAS the abstract is a page long, with references and subheadings, and it must present data in table or graph form.
This is quite unusual for my field; an ISAE abstract, by contrast, is limited to something like 2,000 keystrokes, and references are frowned upon. On the other side of the fence, many of my colleagues have just submitted papers to the World Congress on Genetics Applied to Livestock Production (doesn’t it sound thrilling?), and these were three pages in length. They’re papers more than abstracts. But WCGALP only runs every four years and it is a very large conference, hence the extra work.
All of these conference papers get comments back on them, even the regional ones (although the only comment on my last regional one was that I needed to be more explicit about what behaviour I was recording). The point being that a human has read these papers and passed judgement on them.
Here we stop and acknowledge the scientific committees of these conferences who have an extremely hard job reviewing so many small pieces of science.
So these 120 conference papers that were removed from Springer – what went wrong?
Well, first off, many of the authors on these conference proceedings did not know they were co-authors. Someone submitted papers with their names on them without their knowledge.
But wait – I hear you say. Jill, your conference paper is on your Google Scholar profile. Wouldn’t you be suspicious if an extra publication cropped up? Yes, reader, I would. But note that despite my presenting at six conferences, only one of those papers is on Scholar. Why is this paper the lucky one? If I had to guess, I’d say someone referenced something in that Proceedings, forcing Scholar to index it. But I may be wrong – Google’s not particularly clear on this. And some of these authors may be in the lucky position of having so many publications that they genuinely don’t notice a few extra ones.
So why submit a fake paper with somebody else’s name on it? I’d be surprised if these papers are being presented at the conferences (although I’d love to hear the talk on empathetic spreadsheets). But what I do know is that those fake papers have to cite other papers, and another very important research metric is the number of citations you have. (Mine is a glorious 1.)
Yes – fake papers are a problem. But it’s not a problem of bad results getting out there, it’s a problem of the system being gamed, and people using these kinds of research metrics as the only gauge of a researcher’s quality.
There are ways around this. In the UK we have a new system for assessing research quality in Higher Education Institutes. It’s called the Research Excellence Framework (REF). Funding bodies use this to help them decide how to allocate funds over the next few years.
You enter REF as an institution, and REF grades the papers submitted to it according to this framework:
- A four star paper is world leading research, original, with great impact and the highest quality science.
- A three star paper is pretty damn good: internationally excellent, with good impact and high quality science.
- A two star paper is internationally recognised as original research, with good impact and high quality science.
- A one star paper is nationally recognised as being original research, good impact locally and high quality science.
The difference between these stars is really all about the impact of the science, which is great. Of course, REF has its own drawbacks. In the next few years you’re going to see a lot of UK papers with sweeping statement titles instead of informative titles. A paper which would have been titled “The effect of indoor housing on behaviours in the UK dairy herd” will now be titled “Management systems affect dairy cow behaviour [and industry, if I could wrangle that in there]”. You’ll also see a lot of new roles being created just before a REF exercise, because the papers stay with the primary author. If you have a four star publication under your belt, you become a Premier League footballer during the transfer season.
But one thing REF is not vulnerable to is false publications. A panel of experts reviews all the papers that the institute chooses to put forward. No empathetic spreadsheets allowed.
We’ve just finished the first round of REF and it will be interesting to see how it affects research going forward. As animal welfare scientists we’re excited about the emphasis on impact, something we’re good at demonstrating.
Lastly, while 120 papers sounds like a lot, I’d like to direct you towards Arif Jinha’s 2010 paper, which estimated that there are 50 million published articles out there. Even if all 120 had been full papers rather than conference publications, we’d be talking about roughly 0.0002% of all articles.
By that logic, my own publication record contributes about 0.00001% of the world’s papers. I’d better get cracking.
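(For the sceptical reader, here’s a quick sanity check on those proportions, taking Jinha’s 50-million-article estimate at face value – a back-of-the-envelope sketch, not a serious bibliometric analysis.)

```python
# Back-of-the-envelope check of the proportions above,
# assuming Jinha's (2010) estimate of 50 million published articles.
TOTAL_ARTICLES = 50_000_000

def share_percent(count, total=TOTAL_ARTICLES):
    """Return count as a percentage of all published articles."""
    return count / total * 100

withdrawn = share_percent(120)  # the 120 withdrawn fake papers
mine = share_percent(5)         # my five publications

print(f"120 withdrawn papers: {withdrawn:.5f}% of all articles")  # 0.00024%
print(f"My 5 publications:    {mine:.6f}% of all articles")       # 0.000010%
```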