Elephants Who Marry Mice

Don’t you just hate when you’re forced to face up to the fact you’re not as virtuous as you think you are?

One of the courses I’m currently writing for the International Fund for Animal Welfare came back to me with some corrections. My reviewer had changed the following sentence, with the change in capitals:

“Dogs WHO showed pessimistic behaviours were more depressed.”

And try as I might, my gaze kept tripping over that word. Dogs Who, Dogs Who, Dogs Who.

Let us momentarily leap backwards in time to our English classes. My education contained very little formal grammar training, which may be obvious to the casual reader, but even I know that personal pronouns (e.g. who, he, she, they) are traditionally reserved for people. Animals are referred to as objects (e.g. which, it, that).

“The dog which barked” is preferable to “The dog who barked”.

“It is lying in the cat basket” may be preferable to “She is lying in the cat basket”.

This can lead to the English language treating animals very strangely. For example, say you visit a new acquaintance. You know this acquaintance has two cats, Gin and Tonic (this friend might be a bit odd), but you see one cat on the windowsill. You want to know, is that cat Gin or is that cat Tonic? You may ask “What cat is that?” or “Which cat is that?” seeing as you know it is one of two. It would be wrong to say “Who is that?”

Is it problematic to refer to animals as objects? Well, first we have to ask if grammar affects the way we think. (And before we go any further I want to tell you that journals on grammar and semantics are almost as impenetrable as journals on molecular genetics.)

Boroditsky (2009) investigated the differences in how speakers of English and Mandarin thought about time. In English we speak of time as a horizontal construct (you look ahead to the good times and back on the bad times) whereas in Mandarin time is spoken of in a vertical manner (the paper gives the translated example “what is the year before the year of the tiger?”).

The experiment itself is a bit odd to get your head around. First they primed English and Mandarin speakers with either vertical or horizontal concepts (i.e. the black worm is ahead of the white worm, the black ball is below the white ball), and then gave them ‘target’ statements about time (‘March is earlier than April’, ‘March is before April’).

English speakers answered these questions faster after hearing a horizontal prime (similar to how they think of time) and Mandarin speakers answered these questions faster after they had heard a vertical prime (similar to how they think of time). Boroditsky concludes that the way we speak frames the way we perceive the world.

But does this happen in animal welfare? Well I’m not the only one who wondered about this. Gilquin & Jacobs (2006) wrote a paper which is whimsically titled ‘Elephants Who Marry Mice’. They reviewed style standards in various publication manuals. For example, the Guardian’s, which you can find here, says:


pronoun “it” unless gender established


The Guardian also says:

any more

Please do not say “anymore” any more


So I don’t dream of writing a Comment Is Free column anymore.

Unsurprisingly, Gilquin and Jacobs found that it was the familiar animals (horses, dogs, cats, etc.) which scored a ‘who’ more often than the less familiar animals. Furthermore, publications aimed at animal-related interest groups, e.g. Dogs Today, were more likely to use ‘who’.

They noted that in general texts or interviews, the personal pronoun was used when the author wanted to garner sympathy for the animal in question. It is “the poor cat who was stuck in the tree” rather than “the cat which was stuck in the tree”.

More interestingly, given some of my other posts on anthropomorphism, 60% of the sentences they found which used the personal pronoun for the animals attributed human-like characteristics to the animals.

Gilquin and Jacobs conclude that ‘who’ is used in English to refer to animals, although inconsistently. They suggest a wider adoption of this grammatical structure might engender more empathy for animals from humans, something which I think reflects what Ganea et al found in their work.

Should animal welfare scientists be calling for personal pronoun usage?

I really can’t decide. I’m not convinced that it will completely change the way we think about animals. But it’s a nudge you might want to be aware of if you’re talking animal welfare science.


And for what it’s worth, I changed the text on the course.


Badger Fortnight – The Solution?

But Jill, a fortnight is two weeks not three.

Shut up, that’s what.


This week I want to discuss two main studies – the first by Torgerson and Torgerson (2010) and the second a 2008 article in the Veterinary Record. We have talked about why the disease is a problem and why the cull hasn’t worked, so the question becomes: what now?

The Torgersons start off with the claim that Defra’s continual fight against bovine tuberculosis is a misplaced use of public resources and we should just chill on the whole thing.

What’s their reasoning?

They start by going into the details of the few cases Britain has had of humans developing bovine TB. They note that between 1993 and 2003 there were only 315 human cases of bovine TB, and only 14 of those were in British nationals born after 1960.

Molecular investigation found that only 10 of the 25 spoligotypes of bovine TB present in infected humans were actually present in contemporary UK cattle. They describe two cases from Gloucestershire where on-farm transmission from cattle to humans was likely. A third case in Cornwall, where a veterinary nurse was infected, was considered more likely to have come from her dog. (Interestingly, cats and ferrets are also known vectors of bovine TB, and I know I’ve had more cats sneeze into my mouth than badgers, and I’ve probably worked with more badgers than most . . . Ragg et al, 1995). The more infamous six cases which sprouted up in Birmingham included a UK national with a ‘history’ of drinking unpasteurised milk at home and abroad, and four of these six patients were likely immunocompromised.

Historically, bovine TB did not come from cattle-to-human airborne transmission, but through milk. And as we pasteurise all our milk nowadays, the Torgersons conclude this risk is now negligible. I want to take a moment to say that I have anecdotally observed a strange counter-culture of people who love unpasteurised milk (in fact it is a topic of conversation that seems to leap up whenever I tell people I work with dairy cows). Unpasteurised milk drinkers are a little like foodies who insist you’re using the wrong kind of spice, and I’m often asked if I drink unpasteurised milk – once I was fairly certain my optician wouldn’t sell me glasses until I converted to unpasteurised, but I digress. Seeing as milk makes me ill at the best of times, the thought of drinking milk unpasteurised ‘gies me the boak’, as we say in Glasgow. Unpasteurised cheese is another matter . . .

Where was I?

Oh yes. The conclusion of the Torgersons’ paper (are they brothers, or did they just think they would be epic scientific partners?) is that our hypervigilant position on bovine tuberculosis in the UK is a waste of public resources. They don’t see a reason for spending so much money on a disease which so rarely affects humans.

The Veterinary Record article doesn’t quite agree, but following the randomised badger culling trial in 2007, they too realised that badger culling was not the way forward (yes, we were having this discussion seven years ago). In the article they propose:

  • More frequent testing of cattle, using combined tests to detect active disease.
  • Research on post-movement cattle testing.
  • Research into a vaccine for cattle and badgers, with immediate usage as soon as it’s developed.
  • Research into the disease itself.
  • Helping farmers understand the need for greater on-farm biosecurity.


Really, this article back in 2008 was proposing the oldest solution: identify, research, prevent.

So why aren’t we there yet? Well, seven years in research and pharmaceuticals is not a lot of time. Defra’s old website has a page on cattle vaccinations, and it points out that the EU prohibits vaccinating cattle against TB (because being vaccinated makes some cattle test positive for TB, ergo herds cannot be declared TB free because the vaccine may be masking infection, and the EU prohibits trade of TB-infected cattle).

The BCG vaccination is not brilliantly effective in cattle, so we either need a better vaccine, or to use that vaccine to protect some of the herd and reduce the number of cattle we need to cull. But it’s expensive and hampers trade with the EU.

This post will be published a week after I voted in the European elections. I can tell you I didn’t vote for UKIP or anything like that – I’m a good left winger who lives in Scotland, you get three guesses on my vote and the first two don’t count – but the role of EU legislation in our bovine TB problem can’t be ignored. The Farmers Guardian reports that the European Commission doesn’t expect a vaccine to be around until 2023.

There is definitely something to be said for better biosecurity measures on farms. There are some brilliant farmers, and there are some poorer farmers, and coughs and sneezes spread diseases. We have known this since Koch came up with his postulates. The good farmers resent being told what they already know and the poor farmers resent being told to do better. We come back to my old hobby horse – how do you communicate that science to a varied audience?

And finally the TB test – if we can find a test that can reliably discriminate between infected animals, active infections, vaccinated animals and TB-free animals, we can still trade with the EU. These things all take time, money, and a little bit of luck.
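To get a feel for why reliability matters so much here, a toy Bayes calculation helps. All the numbers below are invented for illustration (they are not real TB test statistics): the point is simply that when a disease is rare in the tested population, even a decent test produces mostly false positives.

```python
# Toy illustration of diagnostic test reliability (all numbers invented).
sensitivity = 0.95   # P(test positive | infected)      -- assumed
specificity = 0.98   # P(test negative | not infected)  -- assumed
prevalence = 0.01    # P(infected) in the tested herd   -- assumed

# Bayes' theorem: probability an animal is infected given a positive test.
p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
ppv = (sensitivity * prevalence) / p_positive

print(f"Chance a test-positive animal is actually infected: {ppv:.1%}")
```

With those made-up numbers, only about a third of test-positive animals are actually infected. This is also why a vaccine that makes healthy cattle test positive is such a problem: it piles extra false positives onto an already difficult calculation.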

We won’t find the solution to the bovine TB problem on a welfare scientist’s hobby blog. The answer is not badger culling. It’s not, as the Torgersons suggest, just letting the disease roam free. If we want to trade with the EU, we need to deal with it.


Just wait till Defra finds out cats transmit TB . . .


How the Elephant Got Her Trunk

When I was very small I had a beautiful picture-book edition of Rudyard Kipling’s Just So Stories. One I particularly remember is the tale of how the elephant got her trunk. She was staring at her beautiful nose in a pool every day until one day a crocodile swam up underneath her reflection, grabbed her nose, and pulled and pulled and pulled. The little elephant struggled for so long that her nose was stretched all the way down to the ground before the crocodile finally released her. And that, children, is how the elephant got her trunk.

It’s silliness of course. We all know that evolution acts upon populations through random genetic mutations, and whatever happens to the individual, so long as it doesn’t stop them reproducing, doesn’t matter. All you need is to procreate; after that the genetic material is mixed and the next line of mutations starts.

For the most part, these are the rules of evolution. But all rules have exceptions, and sometimes evolution works through a different mechanism, that of epigenetics.

You don’t come to FluffySciences to find out about cell mechanisms and inheritance (if you are coming here for that we need to have a conversation about our relationship) but you do come here for the real-world explanations. Epigenetics results in a kind of Just So story where a stressful event can result in a change that can be passed down to the next generation. And the Just So story we use to illustrate this isn’t so pleasant.

In the Netherlands in 1944 there was a Hongerwinter, when the Nazis cut off food and fuel transports in the river areas. Tens of thousands of people starved to death. Can you imagine the endless hunger, the enemy soldiers in your streets, the cold? At one point the daily calorie allowance was less than 600 calories.

The immediate effects of this horrible famine were obvious. Pregnant women gave birth to very small babies, children lost the ability to digest wheat, and Audrey Hepburn developed anaemia. But what happened next?

In Painter et al’s 2008 study (which is open access and you can read here) they investigated the results of the mothers who starved (F0), their sons and daughters who were born during the famine (F1) and their grandchildren (F2). Using a combination of historical health records, interviews and health checks, they investigated whether the ill health of the grandchildren could be attributed to what the mothers experienced.

Between 07/01/45 and 08/12/45, children in this cohort were being born to mothers who had, during one of the 13-week periods of gestation, an average daily calorie allowance of 1000 calories. To put this in context, the typical pregnant woman needs 2200 calories per day to maintain both her body and her baby’s growth. In the Dutch cohort study, children who were born between 01/11/43 – 06/01/45 and 09/12/45 – 28/02/47 were either born before the famine truly began or conceived after the famine was over. They act as controls. They are from the same population, with similar mothers, a similar time period, even similar psychological stresses. They just don’t have that crippling 1000-calorie-a-day limitation during the important parts of the baby’s gestation.

The researchers monitored the F1 generation’s (remember, that’s the sons and daughters) weight, BMI and socio-economic status, and ran blood tests to look at how they cope with sugar and at their good and bad cholesterol levels. They did all this when the sons and daughters were 58 years old.

Then they asked about the F2 (grandchildren). Were the grandchildren premature, on time or late? What did they weigh at birth? Were they twins? How many kids? What order? How many girls? How many boys? How healthy are the grandchildren?

At this point the researchers know what has caused the ill health that F1 have suffered – it was the famine. The question is, has this ill health, which was entirely due to a short term environmental challenge experienced in utero, been inherited by the grandchildren? The two categories of disease they were most interested in were cardiovascular/metabolic diseases and psychiatric diseases.

They used a variety of mixed models (which allow you to have multiple children of the same parent, which would otherwise be a case of pseudoreplication), and regression models, among some other statistical tests which can cope with non-parametric (i.e. real world) data to investigate these questions.
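The pseudoreplication point deserves a small illustration. The sketch below uses invented numbers and plain Python (it is not the paper’s actual model): because children of the same mother resemble each other, treating all the children as independent observations gives you a standard error that is too small, overstating your evidence.

```python
import random
import statistics

random.seed(42)

# Hypothetical data: 20 mothers, 3 children each. Siblings share a
# "family effect", so they are correlated, not independent.
families = []
for _ in range(20):
    family_effect = random.gauss(0, 1.0)  # shared within a family
    families.append([family_effect + random.gauss(0, 0.5) for _ in range(3)])

all_children = [c for kids in families for c in kids]
per_mother_means = [statistics.mean(kids) for kids in families]

# Naive SE: pretend all 60 children are independent observations.
naive_se = statistics.stdev(all_children) / len(all_children) ** 0.5
# Honest SE: one value per mother, the true independent unit.
cluster_se = statistics.stdev(per_mother_means) / len(per_mother_means) ** 0.5

print(f"naive SE (pseudoreplicated): {naive_se:.3f}")
print(f"per-mother SE:               {cluster_se:.3f}")
```

The pseudoreplicated standard error comes out noticeably smaller, i.e. it pretends 60 correlated children carry as much information as 60 unrelated ones. Mixed models handle this properly by modelling the shared family effect rather than collapsing everything to means.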

Their results showed that the F1 generation were smaller at birth and as adults they had higher blood sugar levels two hours after eating than the ones who hadn’t suffered through the famine. So far so expected. The children of F1 women were also born smaller (though weighed the same) if the F1 woman had been in utero during the famine. This is the really interesting thing. The environmental effect was inherited. Also, the F2 children of the F1 generation who had been exposed to famine earlier in their gestation (i.e. when they were forming) were more likely to have poor health due to ‘other’ causes. Finally, and the one I consider to be most interesting, the F2 generation of those who had suffered in the famine were more prone to being fat babies.

The last point interests me the most. I’m working on a project looking at prenatal effects in farm animals and the way we talk about this phenomenon is to say that the offspring ‘samples’ the mother’s environment in early gestation. For example, if the offspring is receiving little food and lots of stress then it should prepare itself to be born into a world where resources are unpredictable and scarce. Whatever cellular changes it switches on to do this make it predisposed to obesity and heart disease in a normal environment, and can be passed on to its own offspring.

If you remember, last week I ranted about nature vs nurture, and this is why. This significant change in baby fat in the famine group is not genetics – the population is not genetically different. It’s not the environment, because the famine was long over when these kids were in the womb. It’s historical environment and the changes it produced. Remember: never use ‘nature vs nurture’, kids.

There’s still plenty we don’t know about epigenetics – how many generations can these effects last for? How cumulative can the total effect be? And when it comes to prenatal effects, what is the mechanism by which the offspring samples the mother’s environment? As a relatively new field, it’s sexy and cool and a lot of people are into it. Expect a lot more information about it in the future.

Just So.

The Anthropomorphism High Horse

I rarely read a piece of scientific journalism and think “what absolute tosh”, in part because I tend not to use the word ‘tosh’ and in part because I know that science journalism involves digesting and reconfirming a complex idea. It’s not easy.

But this article had me gnashing my teeth. It’s a summary of a paper by Ganea et al 2014 [in press pdf download – only link I can find]. The essence of the paper is this: children who grow up in urban environments (in this case pre-school age children from Boston and Toronto) are not exposed to animals. When they’re given anthropomorphic stories about unfamiliar animals (cavies, handfish and oxpeckers) they will agree with statements that attribute complex emotions to those animals, but not statements which attribute human physical capabilities, e.g. talking, to the animals. The conclusion is that anthropomorphic animal stories inhibit a child’s ability to learn animal facts.

The science I think is interesting – it is the conclusion, and the bandying about of the word ‘anthropomorphism’, that get my goat. Let me rant at you.

The article’s author says:

Setting aside the shades of grey as to whether non-human animals have analogues for things like friends, the findings suggest that for young kids, “exposure to anthropomorphized language may encourage them to attribute more human-like characteristics to other animals than exposure to factual language.”



This anthropomorphism spectre infuriates me at times. Let me put it this way: one of the questions asked of the children was “do oxpeckers have friends?” I’m asked relatively frequently if cows have friends, and if I want to answer that question accurately, I have to dance around terminology and use baffling scientific language to answer it in a way that means ‘yes, but I can’t really say that because I’m a scientist’.

Cows have preferential associations within their herd. Being with these other individuals makes them more capable of physiologically coping with stressful events (Boissy & Le Neindre, 1997) such as being reintroduced to the milking herd (Neisen et al, 2009), being milked (Hasegawa et al, 1997), or feed competition (Patison et al, 2010a). They will preferentially engage in social interactions with these preferred associations, and these associations go on for longer than with other animals (Faerevik et al 2005, Patison et al, 2010b).

How do you explain this to a 2-5 year old child from Boston without using the word ‘friend’ or any synonym of it? Is it any wonder a child might reasonably assume that animals can have friends? Is it wrong to say that an animal can have a friend?

My irritation here lies with the writer of the article saying children believed ‘falsehoods’ about animals, based on anthropomorphism. We get one link, to a website I can’t access being based in the UK, to research which might suggest animals are similar to us in some ways. Then we move on to a paper I’ve referenced before talking about how dogs’ guilty looks are based on our behaviour (Hecht et al, 2012). The underlying assumption is still that animals are so different from us that children are wrong to believe that animals have the capacity for friendship and caring.

Now I’m fascinated by dogs for precisely this reason. They are so excellent at communicating with us, and reading us, that they are almost as in-human as they are in-animal. They’re so adept at this that they’re a possible model for human-child behaviour. I wouldn’t necessarily use dogs as an example for how the rest of the animal kingdom thinks if I was very worried about making cross-species comparisons.

Anthropomorphism is either the simple attribution of human characteristics to animals, in which case it cannot be used pejoratively – by that definition, to say “This cow has eyes” would be anthropomorphic.

Or anthropomorphism is the inappropriate attribution of human characteristics to animals, in which case you must carefully consider why the characteristic is inappropriate when given to animals. It is not anthropomorphic in this case to say “This cow feels fear”, because fear, as we understand it, is an evolutionary mechanism to increase your chances of survival, it has physiological and behavioural components and the cow meets all of these. Ergo, this cow feels fear, and that is not an inappropriate characteristic.

Much as I lament the fact urban children have very little contact with the natural world, and I think this is a major issue for animal welfare, food sustainability, and the mental health of the children, I don’t fully agree with the paper’s conclusions, or the writing up in the Scientific American blog.

Firstly, the study found that all children learned new facts regardless of whether they read the anthropomorphic story or the non-anthropomorphic story. The results appear to indicate there was less fact-retention in the anthropomorphic story condition. While I’m not a psychologist, I have worked with children and I do now work in education, and I wonder if the anthropomorphic story, being similar to entertainment, signalled ‘you do not need to pay attention here’ to the kids. This does not appear to be discussed in the paper.

Secondly, the study found that the children who had anthropomorphic stories told to them were more likely to describe animals in anthropomorphic terms immediately afterwards. Now again, I’m no psychologist, but after I went to see Captain America I was partially convinced I was a superhero. It faded after the walk home. I’d like to know more about the extent of this effect over time before declaring anthropomorphic stories damaging to children’s learning.

Thirdly, the Scientific American article presents some ‘realistic’ and ‘anthropomorphised’ images of the animals side by side. This is not what happened in the paper. In the first experiment the children were shown ‘realistic images and factual language books’ or ‘realistic images and anthropomorphic language books’. The second study used ‘anthropomorphic images and factual language’ and ‘anthropomorphic images and anthropomorphic language’. The upshot of this is that the realistic image condition was never directly compared to the anthropomorphic image condition, regardless of how it seems when you read the Scientific American article.

The paper says at one point:

This reveals that, like adults, young children seem to have a less clear conception of differences between humans and other animals in regard to mental characteristics, as opposed to behaviors. However, exposure to anthropomorphized language may encourage them to attribute more human-like characteristics to other animals than exposure to factual language.



Well there’s little wonder about that, because even we scientists don’t have a particularly clear conception of the mental differences between humans and other animals. The paper itself is interesting and well worth a read, but it falls into the trap of thinking about anthropomorphism as a wholly negative thing. If I were a reviewer I’d suggest Serpell (2002) as an excellent starting point for a more balanced view of the phenomenon.

And I’d also suggest they watch this video before assuming that kids are daft for thinking animals feel emotions.


The Other

One of my colleagues recently took a sabbatical year and worked with another university’s anthropology department. This week she gave us a fascinating seminar about how anthropologists view human-animal relations and how different it is from the ethologist’s view.

I can only simplify what my colleague had already simplified for me (if you’re all interested we can harass her to write a guest blog post for us), but anthropologists don’t seek to understand and quantify their subjects like we do. Instead it’s more about a holistic documentation that incorporates the feelings and inherent biases of the observer. This is because the observer is coming in with their own culture and can never fully escape all those biases.

To me it seems as though anthropology does a lot of case studies, and as an ethologist I’ve been trained to look down on case studies. I’m not entirely au fait with everything that anthropology does (I worry about the inevitable changing of anything you observe so intimately) but I really like that they take the time to look at cases, and that they acknowledge how our own culture biases us.

But I do take issue with one thing in particular. They talk about their subjects as the ‘Other’. I don’t fully understand this concept from the brief seminar I got this week, but to the best of my understanding they are very concerned about their subjects being objectified. Therefore when they study animals they are reluctant to do anything that would objectify them, e.g. keeping them as a pet. The equivalence given was that you wouldn’t keep a woman or a tribesperson as a pet, so you can’t study an animal, ethically speaking, in that context.

I think this is forgetting just how ‘other’ the nature of animals can be. For example, my colleague at the seminar quoted a paper by Smuts (2001 – and incidentally, what a wonderful name). In the paper, Smuts investigates human-animal relationships. She details a revelation that occurred when the baboons she studied started to treat her like a baboon.

As a result, instead of avoiding me when I got too close, they started giving me very deliberate dirty looks, which made me move away. This may sound like a small shift, but in fact it signalled a profound change from being treated as an object that elicited a unilateral response  (avoidance), to being recognized as a subject with whom they could communicate. Over time they treated me more and more as a social being like themselves, subject to the demands and rewards of relationship. This meant that I sometimes had to be willing to give more weight to their demands (e.g., a signal to ‘get lost!’) than to my desire to collect data. But it also meant that I was increasingly often welcomed into their midst, not as a barely-tolerated intruder but as a casual acquaintance or even, on occasion, a familiar friend. Being treated like a fellow baboon proved immensely useful to my research…

To me this final sentence is a fundamental misunderstanding. We do not know that the baboons treated her like a baboon. I think they recognised her as an ‘other’, an ‘agent’ in anthropological speak (which, in fairness to Smuts, she does say in her lead-in). They communicated with her in the only way they could and she responded as a human, therefore they knew she could understand some form of their communication. That doesn’t mean they recognised her as a baboon, with all the inherent baboon culture. (I guess this then raises the question, anthropologically speaking, as to whether baboons tell science fiction stories of other species that have hugely different cultures – without the concept of another culture, can you truly have a culture of your own? I wonder.)

We see this every day with pets – I’ve spoken before about how a special language can evolve between two members of completely different species. Dogs, my favourite example for this kind of stuff, have so clearly adapted to us that they’ve survived across different human cultures, and yet they have their own dog language that they use within their species. When they don’t know this language they have huge problems interacting with their fellow dogs. When they don’t know the human language they have huge problems interacting with us. But do dogs understand the difference between dogs and humans, or do they just accept that humans are entities that are capable of interacting with them? (Possibly they accept that humans are entities that they can love, be loved by, etc., if dogs have a concept of love – I leave that for you to judge for now.)

With that critique aside it is a very interesting paper.

Why do I go into all of this? Well, there’s another interesting example of strange animal behaviour on the internet today. An Indian elephant was on a rampage, destroying houses as elephants are wont to do. At one house the wreckage it caused disturbed a baby’s cot and the baby began to cry. The elephant stopped and picked rubble off the cot until the baby was freed.

Does the elephant recognise that the crying infant is an ‘other’? Does the elephant recognise that it has done something which has caused pain? (I’ve often wondered if cats recognise they hurt people when they scratch – or if it’s simply our emotional reaction they’re responding to.) That’s quite a cognitive leap. We drill into children that our actions can hurt others, and yet we’re forever hurting people’s feelings inadvertently.

Or has the elephant been distracted by an unusual noise and investigated (thus freeing the baby) until its curiosity was satisfied? With its energies so directed the rampage stopped.

I don’t know, because I cannot understand elephants. To me, the best way of getting to know elephants is to observe their behaviour, to objectify them, and to gather data on them (how often do elephants respond to infant cries, do elephants respond to any cries, etc.).

But I do like talking about the other possibilities.

The Empathetic Spreadsheet

Serendipity smiles on FluffyScience. I said I wanted to talk about the peer review scandal and the very next day I’m asked to act as the peer reviewer for the first time! If I ever needed an excuse to talk about peer review, I certainly have it now.

The peer review scandal everyone has been talking about is this: 120 computer generated conference papers were withdrawn by Springer. Including a delightful sounding paper about an empathetic spreadsheet. It sounds shocking. But it’s probably not quite as shocking as you think it is.

Let me explain using my own first hand knowledge of peer review. I don’t pretend to be an expert by any stretch of the imagination, but by fortunate happenstance I have enough publications, and enough different kinds of publications, to walk you through the process.

Have you checked out my Google Scholar profile? The link’s over on the right hand sidebar. Or just click here. You’ll see that there are currently five items in my profile. You can use Web of Knowledge instead but that requires an academic log-in, doesn’t update as quickly and isn’t as comprehensive in its record keeping. Therefore we’ll stick with Google.

All five of those publications have a Digital Object Identifier which basically means they have a permanent presence online. And they’re all linked to my name. The top three publications on that list are scientific papers. There’s MacKay et al 2012, MacKay et al 2013 and MacKay et al 2014. Then there’s a conference proceedings and a book chapter about the video game Halo (that’s another story).

Therefore I have publications in three categories: Papers, Conference Proceedings and Book Chapters. Now. Which publication wasn’t peer reviewed?

If you guessed the video game chapter, you’re correct. Video games, much as I love them, are not known for their rigorous peer review. You might be surprised to find that the Conference Proceedings (this one for the British Society of Animal Science (BSAS) 2012 conference in Nottingham) were, in fact, peer reviewed.

I’ve presented at two International Society for Applied Ethology (ISAE) conferences, two BSAS conferences, and two regional ISAE conferences. For all six conferences I have had to submit an abstract, a short description of what I intend to talk about. For BSAS the abstract is a page long, includes references and subheadings, and must present data in table or graph form.

This is quite unusual for my field; an ISAE abstract, by contrast, is something like 2000 keystrokes and frowns on references. On the other side of the fence, many of my colleagues have just submitted papers to the World Congress on Genetics Applied to Livestock Production (doesn’t it sound thrilling?) and these have been three pages in length. They’re papers more than abstracts. But WCGALP only runs every four years and it is a very large conference, hence the extra work.

All of these conference papers get comments back on them. Even the regional ones (although the only comment on my last regional one was that I needed to be more explicit about what behaviour I was recording). The point being that a human has read these papers and passed judgement on them.

Here we stop and acknowledge the scientific committees of these conferences who have an extremely hard job reviewing so many small pieces of science.

So these 120 conference papers that were removed from Springer – what went wrong?

Well first off, many of the authors on these conference proceedings did not know they were co-authors. Someone submitted papers with their names on them without their knowledge.

But wait – I hear you say. Jill, your conference paper is on your Google Scholar profile. Wouldn’t you be suspicious if an extra publication cropped up? Yes, reader, I would. But note that despite presenting at six conferences, only one set of proceedings is on Scholar. Why is this paper the lucky one? If I had to guess, I’d say someone referenced something in that proceedings, forcing Scholar to index it. But I may be wrong – Google’s not particularly clear on this. And some of these authors may be in the lucky position of having so many publications they genuinely don’t notice a few extra ones.

So why submit a fake paper with somebody else’s name on it? I’d be surprised if these papers are being presented at the conferences (although I’d love to hear the talk on empathetic spreadsheets). But what I do know is that those fake papers have to cite other papers, and another very important research metric is the number of citations you have. (Mine is a glorious 1.)

Yes – fake papers are a problem. But it’s not a problem of bad results getting out there, it’s a problem of the system being gamed, and people using these kinds of research metrics as the only gauge of a researcher’s quality.

Research Metrics

There are ways around this. In the UK we have a new system for assessing research quality in Higher Education Institutes. It’s called the Research Excellence Framework (REF). Funding bodies use this to help them decide how to allocate funds over the next few years.

You enter REF as an institute. REF grades the papers submitted to them based on this framework.

  • A four-star paper is world-leading: original research with great impact and the highest quality science.
  • A three-star paper is internationally excellent: pretty damn good, with good impact and high quality science.
  • A two-star paper is internationally recognised as original research, with good impact and high quality science.
  • A one-star paper is nationally recognised as original research, with good local impact and high quality science.


The difference between these stars is really all about the impact of the science, which is great. Of course, REF has its own drawbacks. In the next few years you’re going to see a lot of UK papers with sweeping statement titles instead of informative titles. A paper which would have been titled “The effect of indoor housing on behaviours in the UK dairy herd” will now be titled “Management systems affect dairy cow behaviour [and industry if I could wrangle that in there]”. You’ll also see a lot of new roles being created just before a REF exercise, because papers stay with their primary author. If you have a four-star publication under your belt, you become a Premier League footballer during the transfer season.

But one thing REF is not vulnerable to is false publications. A panel of experts reviews all the papers that the institute chooses to put forward. No empathetic spreadsheets allowed.

We’ve just finished the first round of REF and it will be interesting to see how it affects research going forward. As animal welfare scientists we’re excited about the emphasis on impact, something we’re good at demonstrating.

Lastly, while 120 papers sounds like a lot, I’d like to direct you towards Arif Jinha’s 2010 paper, which estimated that there are 50 million published articles out there. We are talking about roughly 0.0002% of articles, even if these were full papers, never mind conference publications.


By that logic, my own publication record contributes about 0.00001% of the world’s papers. I’d better get cracking.
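If you want to sanity-check those fractions yourself, here’s a quick back-of-the-envelope sketch, assuming Jinha’s round figure of 50 million articles and my five Scholar entries:

```python
# Rough share of the world's literature, assuming ~50 million
# published articles (Jinha, 2010).
TOTAL_ARTICLES = 50_000_000

withdrawn_papers = 120   # the Springer withdrawals
my_publications = 5      # items on my Scholar profile

withdrawn_share = withdrawn_papers / TOTAL_ARTICLES * 100
my_share = my_publications / TOTAL_ARTICLES * 100

print(f"Withdrawn papers: {withdrawn_share:.5f}% of all articles")  # 0.00024%
print(f"My publications:  {my_share:.5f}% of all articles")         # 0.00001%
```

Either way, a vanishingly small slice of the literature.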

Ritual Slaughter and Animal Welfare

Quite a few people thought I should talk about the Independent’s story: Denmark banning kosher and halal meat. 

One of the people who thought I should talk about it was my cousin who’s currently doing a PhD in philosophy. Understanding somebody else’s PhD topic is always tricky, but to my knowledge, she’s investigating the rights of minority groups, e.g. religions, in liberal societies. There’s a fundamental conflict in a society which likes to believe everyone has the right to practice their beliefs when those beliefs might compromise the rights of others in the society. Whose rights should be most protected?

Now, I am neither Muslim nor Jewish; I’m a staunch atheist. I’ll talk about this as objectively as I can, and it’s not my intent to insult anyone.

Firstly – halal meat is meat killed in accordance with Islamic law. The animal is slaughtered by the Dhabīḥah method, which involves slitting the animal’s carotid artery; the aim is to kill the animal as quickly as possible to reduce suffering. It’s important to note this – for years halal was considered good welfare. The law is there to promote good welfare: traditionally, Allah wants us to look after the animals.

Jewish dietary law is called kashrut, and foods which obey these laws are kosher. Only clean animals may be eaten, and clean animals are cloven-hoofed cud-chewers (ruminants), but not animals which digest in the hindgut or lack a cloven hoof. There’s a list of flying animals that are not okay to eat, such as birds of prey, bats and fish-eating birds, and you can only eat sea-dwelling animals that have both fins and scales. Incidentally, and in the light of my last post, I do have a Jewish friend who likes to point out that giraffes are kosher. Poor Marius never stood a chance in Denmark. The ritual slaughter of kosher animals is similar to halal: a precise cut to the throat severing the carotid arteries, jugulars, vagus nerves, trachea and oesophagus. The shochet, the man who kills the animals, traditionally should be a good Jewish man with great respect for the religion, and therefore a respect for the suffering of the animals.

Both of these methods promote good care of the animals, respect for the animal being slaughtered, and – and I think this is really important – traceability of meat. They both tick a lot of my boxes. They protect human interest by showing due care and attention to the food chain and food hygiene, and they protect the animal’s interest by showing them respect and killing them in what is perceived to be the best way of avoiding suffering.

So why does Denmark have concerns over halal and kosher meat?

I expect it’s to do with the lack of stunning. Gregory et al (2009) compared three forms of killing beef cattle by looking for blood in the trachea. They compared shechita (no bolt stunning beforehand), halal (no bolt stunning beforehand) and bolt stunning plus ‘sticking’ (mechanically the same cut, but because the animal is stunned beforehand and there is no prayer, the religious terms no longer apply). Note first off that this study has a flaw: it was not the same person killing all these animals (it couldn’t be, or it would not be true shechita/halal), so some of the variation cannot be attributed to the method but to the slaughterer. All three methods produced animals with blood in the trachea:

  • Blood in the trachea: 19% of shechita animals (the least), 21% of stuck animals and 58% of halal animals.
  • Blood reaching down as far as the upper bronchi, indicating quite a lot of aspiration (i.e. the animal breathing in blood): 36% of shechita, 69% of halal and 31% of stuck animals.
  • Bright bloody foam in the trachea, indicating air being forced through the blood: 10% of shechita, 19% of halal and 0% of stunned animals.

The authors concluded that animals killed without stunning could suffer a welfare challenge from inhaling blood before they lose consciousness.

In 2010, Gregory et al looked at how quickly halal-slaughtered cattle collapsed after the cut was made. 14% of the animals studied stood up again after collapsing, demonstrating that consciousness is not always lost immediately, and so the method, wonderful though its intent may be, does not work as it should.

Another interesting religious dietary law is that Sikhs cannot eat either halal or kosher meat. Sikhs believe that ritual slaughter which involves prayer and a protracted death is an unnecessary level of ritualism and isn’t appropriate. Instead they slaughter their meat animals using the jhatka method, which should completely sever the head from the body in one blow, minimising suffering. (Incidentally, I haven’t been able to find out if Denmark still allows jhatka meat – please let me know if you have info on this.)

The EU has a directive on animal slaughter which requires stunning unless the member state chooses to exempt a religious group from the directive’s rules. Denmark has decided to no longer allow this exemption for religious groups. Some papers have looked at what it would take for Islam to accept stunning as part of halal slaughter (Nakyinsige et al, 2013 – spoilers: there are ways to have halal meat with stunning).

But! I just want to point out one last thing. When we’re assessing welfare in slaughterhouses, we use ‘success of stunning’ as a welfare measure (Grandin, 2001; Grandin, 2010). Stunning is not an end to all slaughter-related welfare problems. Who has the right to tell religious groups what they can and cannot do? Well, I have a personal opinion about that, but I think that science’s role in this debate is to investigate welfare indicators, to find reliable and safe methods of slaughter, and not to forget that many of these dietary rules come from a desire to protect welfare. And it is my job as a member of my society to say I’m worried about animal welfare at slaughter.

One last thing. While I’m concerned about halal, kosher and even jhatka meat, I have eaten the first two and would eat the third. I’m considering making a goat curry, and the local butcher who does goat is a halal butcher. But I rarely buy Danish bacon. In part because I want to support the British pork industry, but in part because I have welfare concerns about the farming of Danish bacon. Rightly or wrongly, I have more concern over the policy differences between my country and Denmark than I do over ritually slaughtered meat. I wonder how right I am about that.

Edited to Add – An acquaintance of mine with more experience on the slaughter side of animal welfare had a few good points to make about this article, which I will share here.

  • There’s a difference between small ruminants and large ruminants in using cut-throat slaughter. Smaller animals tend to lose consciousness within 8 seconds and so the worries about consciousness and suffering that Gregory et al raise are less of a concern for my goat curry (I am making that goat curry soon – I can taste it already . . .)
  • And second – the animal’s life before slaughter is such an important component of animal welfare that my last point may be misleading for the layperson. We need lots of research on slaughter, all forms of it, but how we care for food-production animals in their lives is one of the biggest welfare challenges facing our society.


Blackfish

If you live in the UK or US you’re running out of excuses not to watch the documentary Blackfish. It’s had a cinematic release and been shown on the BBC, as well as being available on iTunes.

For the uninitiated, Blackfish is the story of an orca who recently killed its trainer at SeaWorld. As a result, SeaWorld trainers were prohibited from entering the water with the animals.

When I’m not slaving away over a hot computer screen and working on my next paper, I am a bit of a film geek. In fact I wrote the first draft of this post before heading to my monthly film pub quiz (we lost). Blackfish is a truly brilliant documentary. It takes you on an emotional journey, is beautifully structured, and paints the orca, Tilikum, as a flawed, sympathetic character. I love it as a film.

But we’re scientists! Let’s take a critical look at the concept of keeping orcas in captivity. As I have access to scientific papers, I decided to do a short review of the literature. When talking about science I think it’s important to cite your sources (and no doubt I’ll say this many times in future) so I will link to papers. Unfortunately some of them, if not most, will be behind a paywall.

I wrote this post over a number of days, but it’s certainly not an exhaustive literature search. This is the kind of literature search I’d do if someone asked me what I thought of orcas in captivity.

So what did I find out?
