Pavlov’s Sabbatical – The Dangers of Misinterpreting Your Experimental Findings

Very few people are aware that it is the 100th anniversary of Ivan Petrovich Pavlov’s sabbatical year at the University of Sussex. In 1912, fresh from many research successes and the Nobel Prize in Physiology or Medicine in 1904, he had managed to negotiate a year’s sabbatical leave from the surgical department of the physiological laboratory at the Institute of Experimental Medicine in St. Petersburg. I know of this unusual episode in Pavlov’s illustrious history only because many years ago I was asked to write a chapter on Pavlov for an Edward de Bono book called ‘The Greatest Thinkers’[1], and my research uncovered this relatively unrecognized period of Pavlov’s life. At that time, Russia was riven with famine and political discontent, and Pavlov was struggling with the concept of ‘psychical secretion’. He had begun to realize that the nervous regulation of digestive gland secretion in dogs could often be triggered not only by purely physiological factors but also by what he initially called ‘psychical’ factors. He felt that the time was right to move away from the political and social turmoil in Russia to develop his overarching theory of conditioning – a theory he believed would some day consolidate his role as the pre-eminent physiologist of the century. Having secured funding for his sabbatical year, he travelled to England. When he arrived he was overjoyed by the opportunities that the University of Sussex offered him for his research. He was based in the School of Life Sciences (a building which is now occupied by administrators and accountants trying to determine how biological sciences can operate at anything less than an enormous loss), and immediately set up a conditioning lab to further explore his theories of ‘psychical secretion’.

Pavlov loved the liberal, academic atmosphere at Sussex, and spent many hours in what was then euphemistically called ‘East Slope Bar’ (because of its ability to slope eastwards and be a bar at the same time). He had decided that his next goal was to prove that associative conditioning was a basic and universal learning process – the most basic adaptive learning mechanism in the animal kingdom. There was no doubt that classical conditioning was universal – it could be found in primates right down to single-celled organisms (yes, even nematodes), but for Pavlov there was something missing. His conditioning theory was incomplete. It had to apply to animals of all kinds, creeds, political persuasions, and psychic states.

To this end, Pavlov dedicated his sabbatical year at the University of Sussex to determining whether classical conditioning applied to dead as well as living organisms. This was a stroke of genius. Only a very few scientists possess the insight that allows them to project their theories into areas which are challenging and paradigm-shifting (e.g. Sheldrake[2], Bem[3]), but Pavlov was such a scientist.

Pavlov began his research by looking for a source of dead dogs that would serve as subjects in his research. He very soon found that source at the ‘Goods Inwards’ door of the University Refectory. He negotiated a regular supply of dead dogs for his experiments, and the University Hospitality services have recently recognized the historical importance of this by erecting a brass plaque outside the University cafeteria commemorating their role in Pavlov’s sabbatical research.

Pavlov conducted his research on the salivary conditioning of dead dogs with his usual scientific rigour. Having carried his subjects back to his lab, he placed them in the usual experimental restraints and began the conditioning trials. I have been lucky enough to secure some original transcripts of the notes Pavlov kept on those early experiments. His excitement was palpable. As he writes:

“I placed the subject on the experimental table; I rang the bell; I waited very briefly, then I gave the dog the food… Nothing! No salivation! – I was puzzled. This had always worked before in the lab in St. Petersburg. Why was this so different at the University of Sussex? I could not believe that my universal learning principles did not also apply to dead organisms. But wait… of course! This was only the first trial. There will be no learning on the first trial! We must pair the bell with food on more occasions.”

Pavlov’s scientific logic was impeccable. He continued with his experimental procedure, but time eventually told a sad story. Although Pavlov had striven manfully to extend his so-called universal principles of learning to dead animals, it didn’t appear to work. His dead dogs failed to salivate to the bell CS even after hundreds of conditioning trials. Nevertheless, being the scientist that he was, and after many hours and days of detailed thought and analysis, Pavlov came to the obvious conclusion. It was not that dead dogs were not conditionable – they were in fact deaf. Pavlov had managed to salvage his universal principle of learning by taking a thoughtful and insightful new look at the data. Any of you who have come across a dead dog will be fully aware that deafness is indeed a feature of dead dogs, and our knowledge of this feature stems from Pavlov’s pioneering experimental work during his sabbatical year at the University of Sussex.



[1] De Bono E (1976) The Greatest Thinkers. London: Weidenfeld & Nicolson.

[2] Sheldrake R (2012) The Science Delusion. Coronet.

[3] Bem DJ (2011) Feeling the future: Experimental evidence for anomalous retroactive influences on cognition and affect. Journal of Personality & Social Psychology, 100, 407-425.


The Trials and Tribulations of the Journal of Null Results

OK – straight to the point again. There will never be a Journal of Null Results. Recently Richard Wiseman tweeted “Psychologists: I am thinking of starting a journal that only publishes expts that obtained null results. Wld you submit to it?” I replied: yes, I have drawers full of studies with null results. But – having said that – I struggle with even thinking about writing them up, let alone submitting them, and the reality is that I know no editor would contemplate accepting them for publication. Even so, I have great sympathy for creating a Journal of Null Results.

So what would a Journal of Null Results look like? Is it going to be swamped with submissions from people who didn’t get significant results? That would be nice for all our young and up-and-coming researchers. It would mean that the 50% or so of PhD studies that never achieve significance would see the light of day; it would mean that all undergraduate projects – at a sweep – would suddenly be publishable. But I’m not sure that’s what’s intended. One of the cornerstones of science is replication, but we still seem quite bad at deciding when replication is necessary and when non-replication is publishable. These are the decisions upon which science – according to the principle of replication – should be based. I haven’t looked closely at the literature, but I suspect that most journal editors would not be interested in publishing (1) a straight replication of a study that is already published, or (2) a study that fails to replicate an already published study – especially if the results are null ones.

The first issue that immediately comes to mind here is that we can never quite know why a study has generated null results. As a good scientist, you certainly wouldn’t design your experiment to generate predictions that would support the null hypothesis. We know that null results can be produced by low power, inappropriate control conditions, poor experimental environments, experimenter bias, and so on – so null-results studies already have that inherent disadvantage unless they are obsessively faithful reproductions of studies that have produced significant results in the past.
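The low-power point is worth pausing on, because it is easy to demonstrate with a quick simulation (a sketch of my own with made-up numbers, not a reanalysis of any study discussed here): even when a perfectly real effect exists, an underpowered study will usually hand you a null result.

```python
import math
import random

def simulate_power(n_per_group, effect_size, n_sims=2000, crit=1.96, seed=1):
    """Estimate the power of a two-group comparison by simulation.

    Draws two samples from normal distributions whose means differ by
    `effect_size` (in standard-deviation units) and counts how often a
    simple two-sample test exceeds the critical value `crit`.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        a = [rng.gauss(0.0, 1.0) for _ in range(n_per_group)]
        b = [rng.gauss(effect_size, 1.0) for _ in range(n_per_group)]
        mean_a = sum(a) / n_per_group
        mean_b = sum(b) / n_per_group
        # Unbiased sample variances and the standard error of the difference
        var_a = sum((x - mean_a) ** 2 for x in a) / (n_per_group - 1)
        var_b = sum((x - mean_b) ** 2 for x in b) / (n_per_group - 1)
        se = math.sqrt(var_a / n_per_group + var_b / n_per_group)
        if abs(mean_b - mean_a) / se > crit:
            hits += 1
    return hits / n_sims

# A real but modest effect (d = 0.3) with 20 participants per group
# is detected only a small fraction of the time...
print(f"power with n=20:  {simulate_power(20, 0.3):.2f}")
# ...while the same true effect with 200 per group is detected most of the time.
print(f"power with n=200: {simulate_power(200, 0.3):.2f}")
```

The point of the sketch is that the null results in the small-sample runs say nothing about whether the effect is real – they reflect the design, not the phenomenon.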

A second, related set of issues concerns who decides whether a finding is so significant that it needs replicating, and whether a replication is going to generate enough citations to make an editor believe it is worth publishing. For example, in 2011 Daryl Bem published an article in Journal of Personality & Social Psychology – one of the flagship journals of the APA – apparently showing that ESP existed and that people could predict the future. The normal process for the vast majority of research findings would be that this finding – published in one of the major psychological journals – would simply enter into core psychological knowledge. It would find its way into Introduction to Psychology textbooks and be taught as psychological fact. But it is controversial. Richard Wiseman, Chris French & Stuart Ritchie have had the salutary experience of trying to publish a replication of this finding, and their failure to get their null results published in that same journal is documented in this blog (http://chronicle.com/blogs/percolator/wait-maybe-you-cant-feel-the-future/27984). Quite strangely, Journal of Personality & Social Psychology told them ‘no, we don’t ever accept straight replication attempts’. I assume the implication is that whatever they publish is such rock-hard fact, because they are such a high-impact journal, that they feel no need to justify the scientific integrity of what they publish. So where do you publish replications if not in the same journal as the original study? Wiseman et al. did submit their replication to another journal, but the reasons for not publishing it there also seem, from the above blog, to be bizarre. Once a finding does get published – no matter how bizarre and left-field – there are editors bending over backwards to find reasons not to publish attempted replications.

One reason why replications may not get published, and specifically why null findings may not get published, is that editors are nowadays being pressurized to find reasons to reject quite scientifically acceptable studies – not only because the journal simply has to manage demand, but also because a study may not generate citations. This is certainly true of successful replications, which we assume will never be as heavily cited as the original study. In the last couple of weeks I had an article rejected by an APA journal, and one of the reasons given by the action editor was “we must make decisions based not only on the scientific merit of the work but also with an eye to the potential level of impact for the findings”. Whoa! – that is very worrying. Who is making decisions about whether an article is citable or not?

Finally, I should recount one of my own experiences of trying to get the research community to hear about null findings. During the early 2000s, my colleague Andy Field and I did quite a bit of research on evaluative conditioning. There was a small experimental literature in the 1990s suggesting that evaluative conditioning was real, important, and possessed characteristics that made it distinct from standard Pavlovian conditioning – such as occurring outside of consciousness and being resistant to extinction. Interestingly, multinational corporations suddenly got interested in this research because it suggested a useful paradigm for changing people’s preferences, and so had important ramifications for advertising generally. Andy’s PhD thesis rather eloquently demonstrated that the evaluative conditioning effects found in most of the 1990s studies could be construed as artifacts of the procedures used in those earlier studies and need not necessarily be seen as examples of associative learning (Field & Davey, 1999). We spent many subsequent years trying to demonstrate evaluative conditioning in paradigms that were artifact-free, but many very highly powered studies resulted in failure after failure to obtain significant effects. Evaluative conditioning was much more difficult to obtain than the published literature suggested. So what should we do with these bucketloads of null results? No one really wanted to publish them. In fact, we didn’t want to publish them because we thought we hadn’t quite got it right yet! We kept on thinking there must be a procedure that produces robust evaluative conditioning but that we hadn’t quite refined it. In 1998 I was awarded a 3-year BBSRC project grant worth £144K of taxpayers’ money to investigate evaluative conditioning.
We found that it was extremely difficult to demonstrate evaluative conditioning – and it was also very difficult to publish the fact that we couldn’t demonstrate evaluative conditioning! Andy and I attempted to publish all of our evaluative conditioning null findings in relatively mainstream journals – summarizing a total of 12 studies, largely with null results. No luck there, but we did eventually manage to publish these findings in the Netherlands Journal of Psychology (Field, Lascelles, Lester, Askew & Davey, 2008). I must admit, I don’t know what the impact factor of that journal is, and I don’t know how many people have ever read that article. But I suspect it will never make headlines in any review article on evaluative conditioning.

The problem for null results is that they lie in the no man’s land between flawed design and genuine refutation. Our existing peer review process leads us to believe that what is published is published because this is the way we verify scientific fact. It then becomes significantly more difficult to produce evidence that a ‘scientific fact’ is wrong – and more difficult still when that evidence takes the form of a study demonstrating null results.

Field AP & Davey GCL (1999) Re-evaluating evaluative conditioning: A nonassociative explanation of conditioning effects in the visual evaluative conditioning paradigm. Journal of Experimental Psychology: Animal Behavior Processes, 25, 211-224.

Field AP, Lascelles KRR, Lester KJ, Askew C & Davey GCL (2008) Evaluative conditioning: Missing, presumed dead. Netherlands Journal of Psychology, 64, 46-64.

Follow me on Twitter at: http://twitter.com/GrahamCLDavey


The Evils of Journal Scope-Shrinkage: The Example of Clinical Psychology

I’ll come straight to the point. The more that journals have to introduce demand management strategies, the more they end up shrinking their scope. The more they shrink their scope, the more they force research into a cul-de-sac and isolate it from cross-fertilization with other core areas of their over-arching discipline. In this day and age, there are more and more researchers, all of whom are under pressure to do research and to publish it – for the sake of both their current jobs and their future careers. Journals find themselves overwhelmed with submissions, to the point where many APA journals now have 80%+ rejection rates. So how do you manage demand? Well, at least some journals manage their demand by shrinking their scope. Demand management is important because it reduces the costs incurred by managing a submission through the journal editorial system, and reduces the costs incurred through Associate Editors who manage the peer review process. For most traditional journals that have a print as well as an on-line version, increased numbers of submissions mean increased costs against a fixed publication income. Effectively, scientific publishers believe they are spending good money finding reasons to reject large numbers of submissions that are of acceptable scientific quality but will never themselves earn money for the publisher.

So you shrink your scope. Often scope-shrinkage has a relatively small impact. But in some areas of psychology it can have a significant impact, depending on how a journal redefines its scope. I am an experimental psychopathology researcher. I conduct the majority of my research as an experimental psychologist whose subject matter is psychopathology, and I have traditionally published in clinical psychology journals – which are a good fit for the subject matter of my research and also get my research read by clinical psychologists.

But even before scope-shrinkage, I’ve sometimes encountered difficulties publishing in clinical psychology journals because I’ve used analogue rather than clinical samples or my research has not been viewed by editors as being relevant to clinical interventions. They might just as well have said “You’re not a clinical psychologist, and your research can’t have any relevance to clinical populations because your participants weren’t a clinically diagnosed sample, and so your research is of no interest to the clinical research community!” Harsh – but that is the feeling I got.

Well, now that’s official – at least for some Elsevier journals. In 2010 Behaviour Research & Therapy – that traditional bastion of experimental psychopathology research – posted a very brief editor’s note stating “Behaviour Research & Therapy encompasses all of what is commonly referred to as cognitive behaviour therapy (CBT)” with a revised focus on processes that had a direct implication for treatment and the evaluation of empirically-supported interventions (Vol 48, iii, 2010). In effect, it had become a CBT evaluation journal. An email exchange I had with the editor confirmed that this re-scope was a consequence of the large number of submissions to the journal. To be fair, the editor did say that “the goal is not to eliminate research on experimental psychopathology, but to try to have it more ‘directly’ related to prevention and treatment”.

So where do I now go to publish my psychopathology research if it’s not clearly intervention-related? BRAT’s editor did say that “a final sentence in the Discussion would not suffice” to make any research intervention-relevant. Fair enough. Most of my research is on the aetiology of anxiety disorders, so one journal that I’ve published in quite frequently is another Elsevier journal, Journal of Anxiety Disorders. I submitted a manuscript in June 2010 – around the same time that BRAT had shrunk its scope to intervention-relevant papers. I received a regretful email from the action editor immediately after submission saying “we have made a decision that we will no longer review manuscripts based solely on undiagnosed or analogue samples. This decision can be found within the Editorial Guidance paragraph on the back cover of the journal. Consequently, I will be unable to accept it for publication”.

Scope-shrinkage yet again. I’m sure that these decisions about journal scope were all taken in the best of faith and genuinely meant to help deal with and manage demand, but I can’t help but think of the potential restrictions that changes such as these will place on the discipline-wide exchange of ideas and information that seeds genuine progress in any applied area. OK, so I’m now miffed that I can’t easily publish any more in journals that I used to consider automatic outlets for my research, but there must surely be a bigger and wider cost. As we get more journals with increasingly narrower scopes, it is likely to lead to researchers reading only those journals that have a direct relevance to their research and areas of interest. There could well be significantly fewer left-field ideas, fewer opportunities for the cross-fertilization of ideas. It is also likely to lead to the entrenchment of existing paradigms of research within specific areas – especially applied areas such as clinical psychology where theoretical and empirical sharpness can often be compromised by the need for serviceable outcomes.

During our own weekly lab meetings, I always bring the latest copy of Quarterly Journal of Experimental Psychology along as soon as it’s published, and we look through it for ideas that have relevance to the psychopathology processes that we’re researching. This has already been the source of some exciting new ways for us to conceptualise and study the psychopathology processes we’re interested in. With the scope-shrinkage currently occurring in at least some important clinical psychology journals, I wonder where new ways of thinking about clinically-related research problems will come from unless those researchers who publish in these journals are actively scouring the contents of journals beyond their immediate clinical remit.

Meeting B. F. Skinner

B.F. Skinner is such a psychological icon that it’s easy for me to believe that he’s only someone I ever read about. But I was lucky enough to meet him a couple of times. I can honestly say he is the most charismatic person I have ever met – more charismatic than Debbie Harry, Prince Charles, and Richard Attenborough (sorry, my list of celebrity encounters is quite limited). The reason I met Skinner at all was that I was fortunate enough to do my undergraduate and postgraduate degrees at Bangor in North Wales, at a time when that Psychology Department was a hive of Behaviourist research and thinking. It was the most wonderful of educations – every lecture was about objectivity, measurement, functionality, and experimentation, and couldn’t fail to make you think as a scientist about every psychological question you encountered. At that time most of the faculty were newly appointed young researchers who all promoted the same scientific view of human behaviour. The Head of Department was an ex-Oxford philosopher, Tim Miles (http://en.wikipedia.org/wiki/Thomas_Richard_Miles), who had been greatly influenced in his thinking by Ludwig Wittgenstein – someone we heard about regularly in our lectures. I remember Tim Miles regularly saying “Say what you like as long as you know what you mean” – something I still try to explain to my own students (except for the ‘say what you like’ bit!). Tim Miles built around him in Bangor a strong and thriving group of modern Behaviourist thinkers. Sadly, he died in 2008, and is still best remembered for his work on dyslexia and his founding of the Bangor Dyslexia Unit, but I am always amazed that his role in founding a thriving Department of Psychology dedicated to scientific principles is only rarely recognized.

I entirely owe my early academic and professional progression to Tim Miles. Like many new undergraduates released from home for the first time, I lost my way in my first year. In my first year at Bangor, I failed all my exams (but I did achieve one unforgettable accolade as Bangor Student Union Table Football Champion – perhaps indicating where I went wrong). Tim eventually called me into his office after the first-year results were announced. He sat in front of me, as usual wearing his black academic gown. “Davey,” he said, “you’ve failed everything”. Yes, I knew I had. In those days you expected to be simply vilified if you’d failed, and sent home to take up that menial job in a boring small company accounts department that I’d actually gone to University to avoid. But no, he said to me: “Davey, I’m going to give you one more chance” (he was all his life a devout practicing Quaker). “We’re setting you a special exam in September; if you achieve 40% and pass, you can do Psychology.”

It’s a long time ago, but I’m sure I nodded and breathed a sigh of relief. I went home for the summer and, as a callow teenager whose main aim was to learn how to play Led Zeppelin guitar riffs from the first Led Zeppelin album (http://en.wikipedia.org/wiki/Led_Zeppelin_(album)) and to practice Bert Jansch’s finger-picking style (http://en.wikipedia.org/wiki/Bert_Jansch_(album)), I paid lip service to revising for this special exam that I’d been so fortuitously allowed to sit. I got 43%. I passed. But most importantly, I had by then decided that this was too good an opportunity to miss (apart from getting to number 1 in the US Billboard charts – something that is still on my “To Do” list). Nevertheless, my willingness to achieve at that point was made so much easier by being taught in a Department with a close and singular philosophy – one of scientific behaviourism.

Although I’m now willing to call myself a lapsed Behaviourist (I am very happy to embrace cognitive views and explanations), my undergraduate education as a radical behaviourist was absolutely formative. Sadly, many modern-day undergraduates seem to want to be taught only psychological core knowledge (as if they need to regurgitate facts at a pub quiz); they rarely start from a ‘way of thinking’ perspective. In fact, they rarely experience any philosophy of science that teaches them what scientific explanations are, and what ‘good’ and ‘bad’ explanations look like. In my undergraduate training that is where each lecture started.

Needless to say I enjoyed my undergraduate psychology experience. It was then a natural progression to go on and study for a PhD. I was immersed in behaviourism and animal learning at that time, and couldn’t understand why I wasn’t being nominated for a Nobel Prize for my doctoral research on the determinants of the fixed-interval post-reinforcement pause in rats! With Bangor being such an important hub of behaviourism and animal learning at that time, it was inevitable that many important researchers and theorists would pass through, and B.F. Skinner was one of those people. When his radical behaviourist treatise “Beyond Freedom & Dignity” was published in 1971 (http://en.wikipedia.org/wiki/Beyond_Freedom_and_Dignity) I remember sneaking out of the lab to buy the one copy that was in the bookshop before my research colleagues got it (I don’t think I’ve ever been forgiven for that!).

 

Skinner came to Bangor a couple of times for various Behavioural conferences. It is extraordinary that he did, because during the 1970s he was one of the most sought-after psychologists and speakers in the world. He was an extraordinarily polite man (and why shouldn’t he be?), and always had available a very detailed and considered answer to every question he was asked. He is one of the few people I’ve ever felt overawed in the presence of – but perhaps that’s because I was a graduate student in the formative years of my life. I sat opposite him at dinner on one occasion. I only managed to ask him two questions, and they were quite disparate in their scope. The first was “Could you pass the salt please?” He did. The second was “If a society was to adopt your thesis in ‘Beyond Freedom & Dignity’, what would that society look like?” Without any hesitation, he replied “Modern-day Communist China during the cultural revolution”, and continued with his meal – albeit adding small sound bites of information on this view as we ate. I was stunned into not asking for the salt again. Here was a man who had been vilified during the 1970s as an archetypal American protestant reactionary preaching tyrannical fascist views about the route that societies should take – but he firmly believed that the principles he espoused would lead to quite the opposite of the types of society envisaged by his critics. But that is so typical of B.F. Skinner – for example, he never responded to Chomsky’s vitriolic criticism of his book ‘Verbal Behaviour’ (http://cogprints.org/1148/) because, after reading the first paragraph, he realized that Chomsky had simply not understood the main thrust of his arguments. Some of Skinner’s scheduled talks in the UK about ‘Beyond Freedom & Dignity’ immediately after its publication were cancelled because of the left-wing demonstrations that had been organized to confront his so-called tyrannical views.
At least that was my memory of what happened, but I can find no record of these events or their cancellation. Neither can I find any mention of a left-wing group that I remember being called the “Red Rats” whom I recall (rightly or wrongly) made it their job to disrupt his high-profile presentations. So if anyone has any recall of this group, I would very much appreciate any information they can provide.

Skinner’s views were a philosophy of science – not, as many people think, a mechanistic view of how behaviour works. He was the original evidence-based thinker in psychology, moving psychology from being a discipline dealing with hypothetical, psychic entities to one that dealt with measurable, observable scientific facts. Sadly, I think most people remember him for trying to ‘treat people like they are rats’ (1), but it was all really about the philosophy of explanation. Basically, however we try to explain the behaviour of nonhuman animals, the same principles of explanation apply to understanding human behaviour. It was never about ‘degrading’ humans to the level of animals, nor was it about devaluing the ‘dignity’ or ‘freedom’ of human beings (2, 3). It was about ways of understanding.

(1)       Davey GCL (1981) Animal Learning & Conditioning. Macmillan.

(2)       Koestler A (1967) The Ghost in the Machine. London: Hutchinson.

(3)       Chomsky N (1972) Psychology and ideology. Cognition, 1, 11-46.

The Mystery of the Origins of Phobias

When I begin to teach my undergraduate students about phobias, I always start with a slide showing a large pile of coloured buttons next to a pad of cotton wool. Everybody knows about the most common phobias – things like fear of heights, spiders, water, snakes, creepy crawlies, darkness, death, etc. – and I’ll be writing a lot about those later in this book. But sometimes you get what I call ‘common uncommon phobias’ – phobias of everyday objects that people shouldn’t have anxieties about, but with a little probing you find that a lot of people do. Cotton wool is one in particular. There seem to be quite a few people who can’t bear to touch cotton wool. If you blindfold someone with a cotton wool aversion and then unexpectedly get them to touch some cotton wool, their hands recoil from it before they’ve even realized what it is. Aversion to buttons is another puzzling phobia. In my life alone, I’ve come across dozens of people who insist on having all the buttons cut off their clothes before they’ll wear them. They are certainly uncomfortable enough with buttons to insist on this strange ritual. One particular example of button phobia reported in the psychological literature is of a 9-year-old Hispanic American boy who was unable to handle buttons. As a consequence he couldn’t dress himself and had difficulties concentrating at school because of an excessive preoccupation with not touching his school uniform or anything that his buttoned shirt touched. Outside of school, he avoided wearing clothes with buttons and avoided contact with buttons that others wore (1). Unlike many of the phobias we will talk about later, there did seem to be an event that precipitated this specific fear of buttons. When he was 5 years old, he was pasting buttons onto his posterboard and ran out of them. He was asked by his teacher to come to the front of the class and fetch some more buttons from a bowl on the teacher’s desk.
On reaching for the bowl, his hand slipped and he accidentally tipped the whole bowl of buttons over himself – an event which he described as very distressing. I doubt very much whether all button phobias are caused in the same way as this, but this example does highlight some interesting features of phobias. First, after the precipitating event his fear of buttons simply got worse and worse – despite reassurances from his family and friends. This is known as ‘incubation’, in which – for no obvious reason – fear of the object or event simply escalates over time. Secondly, this boy’s phobia significantly interfered with his normal daily living, affecting his ability to look after himself and affecting his educational development. Thirdly, the fear developed into an intrusive and dominating cognitive preoccupation, in which he had to be continually hypervigilant that he did not accidentally come into contact with buttons. Finally, there is an interesting element of fear of contamination in this case history that is common to many phobias. Not only was he fearful of buttons, he was also anxious about other things that might have come into contact with buttons. I will talk more about this seemingly ‘magical law of contagion’ in Chapter 4.

While we are on the topic of unusual phobias, let me describe another interesting case history so that you can get a flavour of how a severe specific phobia is experienced and how that experience develops. Many years ago, the eminent clinical psychologist Jack Rachman described in detail a chocolate phobia exhibited by a patient known as Mrs. V (2). She complained of an extreme fear when confronted with chocolate or any object or place associated with chocolate, and even avoided anything that was brown (e.g. she would refuse to sit on any brown furniture). This avoidance extended to shops that might stock chocolate, and she once walked up eight flights of stairs rather than use the lift because of a brown stain next to the lift buttons. As with our previous button phobia example, her phobia ‘incubated’ over time to the point where she had ceased working because of her fear and was practically housebound. As Rachman points out, fear of chocolate is extremely rare and it is difficult to argue that it has any obvious survival value. Unlike the Hispanic American boy with the button phobia, Mrs. V was relatively inarticulate about the history of her fear. But according to the accounts she was able to give, her anxieties began shortly after the death of her mother, to whom she was very close. Her anxieties first became focused on a fear of cemeteries and funeral parlours, and she became aware of a mild distaste for chocolate some months later. Four years on from her mother’s death, she had become entirely chocolate phobic – avoiding chocolate and even becoming extremely frightened of it. This example illustrates the gradual onset of severe phobias that eventually ‘incubate’ to become distressing and life disrupting. It also emphasizes the lack of insight that the sufferer has into the processes that gave rise to the phobia. Mrs. V felt sure that she had seen a bar of chocolate in the room containing her mother’s coffin, but in all probability this was a fanciful reconstruction in her attempt to explain her irrational feelings, and in the next section we will discuss the fact that a majority of phobia sufferers (even those suffering mild phobias) are usually at a loss to explain how their phobia started or how it developed.

“IT’S A PAST LIFE THING”

Billy Bob Thornton is a famous actor, screenwriter, director and musician. As accomplished as that sounds, he once told chat-show host Oprah Winfrey that he had a fear of antique furniture – so much so that he simply couldn’t eat anywhere in the vicinity of antiques. Thornton has also admitted to a fear of certain types of silverware, and wrote this fear into his character Hank Grotowski in the film Monster’s Ball, who insists on eating his food with a plastic spoon and fork. What does Thornton have to say about his fears? In an interview with the Independent newspaper, he explained “It’s just that I can’t use real silver. You know, like the big, old, heavy-ass forks and knives, I can’t do that. It’s the same thing as the antique furniture. I just don’t like the old stuff. I’m creeped out by it, and I have no explanation why… I don’t have a phobia about American antiques, it’s mostly French – you know, like the big, old, gold-carved chairs with the velvet cushions. The Louis XIV type. That’s what creeps me out. I can spot the imitation antiques a mile off. They have a different vibe. Not as much dust” (3). How did he acquire this very unusual fear? Well, he says that “maybe it’s a past-life thing and I got beat to death with some old chair”! We’ll discuss that possibility later. Other celebrities have more mundane phobias: film actress Megan Fox is afraid of paper, country singing star Lyle Lovett is afraid of cows, Cameron Diaz has a fear of door handles, and former Baywatch star Pamela Anderson has a fear of mirrors. You may be thinking, ‘well, these are not as crazy as Billy Bob Thornton’s fears’. But even so, how do people get fears of paper, cows, door handles and mirrors?

By comparison, my own fears seem by my own admission much more adaptive and sensible. As a child I was completely panicked by loud noises – especially pneumatic drills. Whenever we saw a worker in the street using a pneumatic drill, my mother insisted on taking me by the arm and dragging me as near as possible to it until I completely freaked out. All I ever remember is struggling frantically so I could run as far away as possible. Her intention was simply to “get me used to the noise” so that my fear would go away. No, it didn’t; it just got worse, to the point where I became scared of noisy cars, barking dogs, large crowds, and vacuum cleaners. Perhaps more understandable was my fear of dentists. I was about seven years old when one day I was unexpectedly called out of class and taken across the school yard to a room where there were two people in white coats standing on either side of what I thought was just an oversized leather armchair. Without explanation they asked me to sit in it, lean back and open my mouth. No one had told me about dentists! It’s bad enough having someone you don’t know messing around in your mouth for reasons that are beyond you, but when the loud whirring noise of the drill started up and they began to drill my teeth (without anaesthetic in those days) – in the words of Billy Bob Thornton, I completely freaked. I don’t remember whether I felt any pain, but the sheer terror of such an unexpected oral intrusion by complete strangers wielding a screaming instrument that looked for all the world like a metallic, writhing snake was enough. My terror appeared to serve its purpose because they were entirely unable to continue, and I was told not to be so childish and to stop yelling for my mother (yes, the same mother who dragged me screaming towards pneumatic drills!). 
Decades later I still only go to the dentist when I’m suffering the most unbearable toothache, I simply cannot watch dentists on TV, and I become anxious when I read a copy of “Punch” magazine – traditionally the dentists’ favourite waiting room periodical.

I’ve described just a few examples of specific phobias that afflict famous people as well as myself. Strangely, as odd as some of these fears are, we seem quite happy to accept that people acquire and suffer them – perhaps because specific fears and phobias are so prevalent in the population that almost everybody seems to have one. Large-scale scientific surveys suggest that a clear majority of the population (over 60%) report having symptoms of a specific fear or phobia at some time in their lives (4), and around 1 in 10 people will at some point in their lifetime report fears of a severity that makes them clinically diagnosable and in need of treatment (5). This makes specific fears and phobias the most widely experienced anxiety-based problem, and they cause both distress to the sufferer and disruption to that person’s normal daily living. While we’ve so far discussed a few unusual fears, the vast majority of phobias revolve around just a limited number of situations and objects. These include animals – especially bugs, rodents, spiders, snakes, and invertebrates such as snails and slugs – as well as heights, water, enclosed places, social situations, and blood, injury and injections. Most other phobias are much rarer, but no less scary or debilitating for their sufferers. Even so, the origins of these phobias are no less puzzling than those of Billy Bob Thornton and his fellow celebrities.

The commonness of fears and phobias seems at least in part to explain why we appear to accept that people acquire and suffer specific fears, and why it seems almost ‘normal’ in a strange way. But this raises the question of where specific fears come from and how they are acquired. How, for example, can people acquire fears that are so very specific (e.g. antique furniture, door knobs, paper) – debilitating fears of things that the vast majority of people would say are absolutely, and without argument, harmless?

Well, why don’t we just ask them how they acquired their fears? What is surprising is that most people will simply not be able to tell you. Their usual response is “well, I always seem to have been frightened of mice”. Some years ago now, we conducted a survey of 120 people who claimed to have a fear of spiders (one of the commonest phobias in western cultures), and we asked them all to try and recall an event that precipitated their fear. Only one person out of 120 was able to do this. She was a secretary who told us that her fear of spiders started on an occasion when she was being sexually harassed by her boss, and at that very moment she remembers seeing a spider scuttle across the floor in front of her. From that moment on she could not go near a spider, watch spiders on TV, or even stay in the same room as one (6). In contrast, no other respondents could recall a specific event as the cause of their phobia – only that they seemed to have had the fear for as long as they could remember, or that it had developed so gradually that no one single event seems to have been responsible. These findings are not unusual. Snake phobics, for example, have hardly ever been bitten by snakes. In a study of 35 highly snake phobic individuals in the USA, only 3 had been bitten by a snake – yet most were still so fearful of snakes that they couldn’t even bear to look at a picture of a snake (7).

It’s the same with water phobia and height phobia. For example, Australians love their beaches and their water sports, so water phobia is a real problem if you’re a red-blooded Australian youngster growing up on the coast. Not surprising, then, that most research on water phobia has been carried out in Australia. However, rather than identifying the causes of water phobia and allowing researchers to develop interventions to prevent it, these studies merely added to the mystery. Are water phobic kids having bad experiences with water? Are they being thrown into the deep end of swimming pools by their older siblings? Seemingly not: over half the parents of children with water phobia believed their child’s phobic concerns about water had been present from their first contact with water (8). So their fear of water seemed to exist even before their first serious encounter with it.

Just one final example before we move on. I am severely height phobic – which doesn’t help when I have to call in on the ever-growing number of University Psychology Departments with trendy glass atriums, and visit monolithic shopping malls with glass-sided elevators. I even had to leave the Van Gogh museum in Amsterdam in panic when I realized that I’d have to walk up glass-sided stairways to mezzanine floors with glass balustrades to view my favourite Van Gogh paintings. Needless to say, my fear won out and I didn’t view any of the exhibits and now have a strangely ambivalent reaction to any Van Gogh paintings. Height phobia is universal and also very common. But the number of height phobics who have fallen off ladders or cliffs, fallen down steep flights of stairs, or broken their legs falling off mezzanine floors in Van Gogh museums is very small. Around 1 in 5 height phobics can recall some bad experience when their phobia began, but then again – paradoxically – these recalled events are rarely accidents. They are often just recollections of feeling very frightened or panicky while in a high place. Around six out of ten people with height phobia – just like water phobics – claim their fear has always been present (9).

The height phobic’s nightmare – Mezzanine floors at the Van Gogh Museum!

 

We’ve now discussed a number of common phobias in which a large majority of sufferers cannot recall a precipitating event or experience. This mystery is puzzling given how frequent specific fears and phobias are in the general population. Billy Bob Thornton’s tongue-in-cheek view of his phobia of antiques is that “it’s a past-life thing and I got beat to death with some old chair”. Alternatively, he might have been abducted by aliens and taken to a room on their spaceship filled with dazzling white light, where they implanted the fear of antique furniture in his brain and beamed him back down to earth to ponder his baffling complaint. Well, I doubt it, and I’ll hopefully convince you later in this book that people who do claim to have acquired phobias as a result of alien abduction or through previous lives are mistaken, and that they believe in alien abduction and past lives for reasons other than their phobias!

So, if phobia sufferers are at a loss to explain their condition, and these fears seem to have been around for as long as they can remember, where do they come from? Are they the result of some biological pre-wiring constructed by our evolutionary past?

ESCAPE FROM THE MAN-EATING SLUGS

Let’s begin this section by creating an historical scenario. Conjure up an image of your ancestors from fifteen thousand years ago, tirelessly migrating across continents, discovering fire, inventing the wheel, domesticating animals and building civilizations. However, during this process of social and cultural evolution they are continuously and mercilessly being hunted down by herds of giant, man-eating slugs. The sick and lame are picked off one by one and children are consumed as snacks as these rampant predators satisfy their appetite for food and carnage. During this particularly challenging time of pre-history, slugs occupied the ecological predatory niche later to be filled by wolves, bears, tigers and alligators. Their cunning and ruthlessness knew no bounds, and the humans who survived were the ones who were first to spot the looming shadows of the giant slug herd, hear the percussive shrill of their hunting cries, and notice their erratic, rapid movements across the savanna and their staring eyes as they fixated on their human prey.

Unfortunately, we’re unable to verify this historical scenario because giant slugs left no fossil remains, but they did leave a different legacy – our modern-day phobic dislike of slugs. Slug phobia is one of the commonest animal fears, and is often reported in the top ten animal fears worldwide (10). Have you ever been gardening with bare hands and – before you were even aware it had happened – recoiled and shaken a slug from your fingers? Interestingly, women also tend to be significantly more slug phobic than men (11) – presumably because females were tastier to those ancient predatory giant slugs and so had to develop stronger avoidance responses.

            The reason I’ve laboured this fictitious example is that it helps to caricature a process that is very easy to slip into when trying to explain phobias. We’ve already noted that most people lack an understanding of how they acquired their fears. There is also a tendency for people to believe that they have had their fear for as long as they can remember. This failure to identify both a cause and a precipitating event can lead to the assumption that the fear is biologically pre-wired – “If I don’t recall it starting, then it must have been part of me forever”. This certainly rings true if the fear appears to be an adaptive one that could prevent harm, and fear of heights, water, snakes, spiders, etc. could all be construed in this way. The argument here is that heights, water, snakes and spiders have all been around for many tens of thousands of years, and could all be harmful in some way. Therefore, the genes of our ancestors who actively avoided these things would be selected for, and in this way a ‘fear’ or avoidance of them would be genetically handed down to us in the present day. This is certainly consistent with the fact that many people do exhibit fear of heights, water, snakes and spiders. But there is something disconcertingly easy about this type of explanation.

Our story about the giant slugs provides one example of how this type of explanation might be fallacious. It is easy to believe that snakes and spiders (which can often be fatally venomous) might have been genuine threats to the survival and well-being of our ancestors, but surely not slugs? And yet slugs are a very common object of phobic fear. It is also bad scientific practice to allocate a cause to an effect without providing any supportive evidence. For example, to my knowledge there is no substantial evidence that snakes and spiders ever represented a significant survival selection pressure for our ancestors, and this would be critical for the biological pre-wiring of any fears of these animals. We will certainly discuss the view later in this book that some aspects of phobic fear are biologically determined, but it’s hard to substantiate this down to the level of individual specific phobias. For example, we have biologically pre-wired startle reflexes that react to rapid movement towards us, rapid unpredictable movement, looming shadows, loud noises, and staring eyes – and that should be quite enough to help us detect most types of predator with some urgency. So why would evolution also want to equip us with what would be redundant pre-wired templates to detect and avoid very specific predators such as snakes and spiders?

            It is probably useful at this point to introduce you to a character called Pangloss from Voltaire’s satirical novel Candide. Pangloss exhibited a universal optimism best captured in the phrase “all is best in the best of all possible worlds”, and the American biologists Stephen Jay Gould and Richard Lewontin coined the term ‘Panglossian’ to refer to the misguided view that everything present in the world today exists because it has a specific beneficial purpose. According to the Panglossian view, the task for scientists is not to discover whether a given characteristic (such as a phobia) has an adaptive function, but to clarify how the characteristic has served an adaptive function. This view – that everything that exists must be adaptive – generates what is known as the ‘adaptive fallacy’: if you set out to generate reasons why something might be adaptive, you can do so quite easily no matter what it is you are considering. This appears to be how some psychologists have thought about phobias: those phobias that are most common (e.g. heights, water, spiders, snakes, blood, injury, etc.) must be so common because they have an adaptive function – that is, they enable people to successfully avoid potentially harmful and threatening things.

            I have argued many times in the past against these types of Panglossian views which claim that phobias are evolutionarily pre-wired adaptations – they smack of a scientific ‘cop-out’. In 1971, the famous American psychologist Martin Seligman wrote a short but very influential paper entitled “Phobias and preparedness”, arguing that we hardly ever have phobias of things like pajamas, guns, electricity outlets and hammers, even though these things are likely to be associated with trauma in our modern world (12). Instead, we tend to have phobias of spiders, snakes, insects, heights, fire, deep water, etc. – things that have been around for a long time in evolutionary terms and were potentially harmful to our pre-technological ancestors. Seligman left us with the implication that most phobias are exaggerations of evolutionary adaptations that are pre-wired and that we are biologically prepared to acquire very rapidly given the appropriate learning conditions. This paper spawned a good twenty-five years of research on the view that phobias are ‘biologically prepared’, and – even today – a glance at most introductory psychology textbooks shows that they still consider this evolutionary view to be an important potential theory of phobias. Yet there was not a lot of solid evidence in Seligman’s seminal paper to support the view that common phobias exist because of their adaptive evolutionary function, and as I recall, he only authored a couple of other tangential papers on this topic before moving on to other things, leaving us all to thrash around in the void trying to put some evidential flesh on these speculative bones. 
While adaptation through natural selection is one possible mechanism by which common modern-day phobias could have arisen, Gould and Lewontin also point out that some modern-day characteristics arise from random genetic sampling, and that others may exist because they are associated with other structures and behaviours that do confer a selective advantage, not because they directly increase survival themselves. But all these arguments assume that phobias are biologically pre-wired and merely contest the mechanism by which this has occurred. There are equally good arguments that phobias are not biologically pre-wired, and I will air these arguments later in this book.

            To add a further element of skepticism: the adaptationist view doesn’t provide a genuinely balanced picture of how phobias might be caused. If you look at the top ten list of predatory animals that kill human beings each year you won’t find the spider among them. But you will find animals such as lions, elephants, tigers and bears (13) – all animals that people rarely acquire a clinical phobia of. It’s true that if you were confronted by one of these animals in a confined space you would be right to be very scared, and would be well advised to run like the wind at the first opportunity. But this adaptive fear is not the same as phobic fear. Very few people attend phobia clinics with debilitating fears of tigers or bears, hardly anyone gets a rush of fear-laden adrenaline when they hear the word lion in conversation, and people just do not turn away in panic when shown a picture of an elephant. All of these reactions are certainly true of people with severe snake or spider phobia (and even, in many cases, slug phobia!). Indeed, most of us happily send our children to bed with cuddly replicas of bears and let them watch TV programmes depicting tigers, lions and elephants as good-natured cartoon characters – hardly the stuff that would be expected if evolution was constantly telling us to beware of them.

            To put this discussion into perspective, the adaptationist or evolutionary view of phobias might seem compelling because it appears to explain why common phobias focus on things that have been around for a long time (in evolutionary terms), why it might be adaptive to avoid or fear these things, and why sufferers can only rarely recall when and how their phobia started. However, it is still a highly speculative approach, and I will argue in later chapters that it is almost certainly wrong, and that people acquire phobias during their lifetime through one of a number of very different psychological mechanisms and not because their phobias are biologically pre-wired. Finally, if we go back to our celebrity phobias, it’s hard to claim that phobias of more unusual things such as antique furniture, silverware, paper, door handles and mirrors are a direct result of evolutionary pre-wiring – mainly because these things have not been around for long enough for fear of them to become encoded in the gene pool. So they have probably been acquired during the sufferer’s lifetime. What is even more intriguing is that in most cases these fears have been acquired without the sufferer noticing, without any awareness of the conditions critical to their learning, and without any insight into how their thoughts and beliefs about the world have been manipulated and shaped. In 1950s speak, there is something akin to brainwashing going on here – but who or what is doing the brainwashing, and why should we want to become so fearful of things that – by any objective logic – we shouldn’t be frightened of at all?

PATHWAYS THROUGH THE MAZE

Whenever the causes of a phenomenon are concealed or difficult to identify, we are often seduced into seeking magical or mystical explanations for them. Historically, phobias have been variously described as ‘manifestations of evil spirits and evidence of an imbalance in the hierarchical order of the universe’, an excess of black bile, emotional delirium, lucid insanity, a poor upbringing, or stomach ailments (14). In addition, Hippocrates believed that phobias were due to an overheating of the brain caused by a build-up of bile. I doubt very much whether we will be exploring these theories further in this book! But it is easy to see how explaining the causes of phobias becomes difficult when even the sufferer is unable to provide significant insight into the acquisition process. Nevertheless, it is the purpose of science to provide rational and objective insight into nature and its causes, and the scientific path is the one we will steer in detail in this book.

            Perhaps the first mistake when trying to understand phobias is to assume they are all the same thing, and so must have one single underlying cause. This homogeneous view is probably reinforced by the fact that the most important psychiatric diagnostic manual in the world – the Diagnostic & Statistical Manual of Mental Disorders (known as DSM), published by the American Psychiatric Association – defines specific phobias as a single diagnostic category “characterized by clinically significant anxiety provoked by exposure to a specific feared object or situation, often leading to avoidance behaviour” (15). So for diagnostic purposes chocolate and button phobia are lumped together with height phobia, spider phobia, slug phobia, and even phobia of antique furniture. Mrs. V, Billy Bob Thornton and I are all burdened with the same descriptive diagnostic label for our very different anxiety-based problems. Yet everyone is different. Everyone’s experience of the world is different, including the route we steer through our lives and the way we perceive and interpret the world, and these individual differences will give rise to phobic experiences that are personal and unique. Nevertheless, scientific enquiry would have no value if it were unable to pluck some common routes from the multitude of footpaths that define individual experience, and it is my intention in this book to give the reader a detailed, close-up insight into the various pathways that give rise to phobias. These pathways are often unusual and unexpected, but on reflection they make psychological sense and fit with psychological fact. The subtlety of some of these processes explains precisely why many phobia sufferers are unable to understand how their fears were acquired, their thoughts brainwashed, and their lives disrupted to such a degree by fear and avoidance. 
We will also ask the question “What’s the point of phobias?” Is there a point to them, or are they just poisonous carbuncles that grow uncontrollably out of the body of our experiences? I have already suggested that they may not have the ultimate functionality that is bestowed by evolutionary selection – but do they have a more specific purpose – perhaps at the level of the individual and their own lives?

REFERENCES

(1)           L. M. Saavedra and W. K. Silverman, ‘Case Study: Disgust and a Specific Phobia of Buttons’, Journal of the American Academy of Child and Adolescent Psychiatry 41 (2002): 1376-1379.

(2)           S. Rachman and M. E. P. Seligman, ‘Unprepared Phobias: “Be Prepared”’, Behaviour Research and Therapy 14 (1976): 333-338.

(3)           T. Rose, ‘Interview with Billy Bob Thornton: Acting Very Strange’, The Independent (London), 3 September 2004, retrieved 30 May 2008.

(4)           T. F. Chapman, ‘The Epidemiology of Fears and Phobias’, in Phobias: A Handbook of Theory, Research and Treatment, edited by G. C. L. Davey (John Wiley & Sons, Chichester, 1997).

(5)           F. S. Stinson, D. A. Dawson, S. P. Chou, S. Smith, R. B. Goldstein, W. J. Ruan and B. F. Grant, ‘The Epidemiology of DSM-IV Specific Phobia in the USA: Results from the National Epidemiologic Survey on Alcohol and Related Conditions’, Psychological Medicine 37 (2007): 1047-1059.

(6)           G. C. L. Davey, ‘Characteristics of Individuals with Fear of Spiders’. Anxiety Research 4 (1992): 299-314.

(7)           E. Murray and F. Foote, ‘The Origins of Fear of Snakes’, Behaviour Research and Therapy 17 (1979): 489-493.

(8)           R. G. Menzies and J. C. Clarke, ‘The Aetiology of Childhood Water Phobia’, Behaviour Research and Therapy 31 (1993): 431-435.

(9)           R. G. Menzies and J. C. Clarke, ‘The Etiology of Fear of Heights and its Relationship to Severity and Individual Response Patterns’, Behaviour Research and Therapy 31 (1993): 355-365.

(10)        G. C. L. Davey, A. S. McDonald, U. Hirisave, G. G. Prabhu, S. Iwawaki, C. Im Jim, H. Merckelbach, P. J. de Jong, P. W. L. Leung and B. C. Reimann, ‘A Cross-Cultural Study of Animal Fears’, Behaviour Research and Therapy 36 (1998): 735-750.

(11)        G. C. L. Davey, ‘Self-Reported Fears to Indigenous Animals in an Adult UK Population: The Role of Disgust Sensitivity’, British Journal of Psychology 85 (1994): 541-554.

(12)        M. E. P. Seligman, ‘Phobias and Preparedness’, Behavior Therapy 2 (1971): 307-320.

(13)        ‘Animal Danger’, available at http://www.animaldanger.com/most-dangerous-animals.php.

(14)        S. J. Thorpe and P. M. Salkovskis, ‘Animal Phobias’, in Phobias: A Handbook of Theory, Research and Treatment, edited by G. C. L. Davey (John Wiley & Sons, Chichester, 1997).

(15)        American Psychiatric Association, ‘Diagnostic and Statistical Manual of Mental Disorders, DSM-IV-TR’ (American Psychiatric Association, 2000).


The Perfect University

Why doesn’t every student in Higher Education get awarded a first class degree? Is it because they’re not intelligent enough? Is it because they’re not taught effectively? Is it because a majority of students are plain lazy? Is it because they are too spoon-fed? Well no, it’s none of those. It’s because most Universities haven’t yet fine-tuned their assessment and classification systems in a way that will allow all students – regardless of ability and potential – to get a first class degree. A majority of Universities have adjusted their assessment and classification schemes to a point where only the most delinquent of students will attain anything less than an upper-second class degree, but there is still some fine-tuning required to turn out close to 100% first class students.

Here are some tips for those Universities and institutions still striving for this level of perfection. Individually, none of these factors is necessarily bad practice or illegal, and indeed many institutions introduce several of them as examples of innovative good practice. However, if you put them all together in one scheme, you create an assessment and classification system that can turn the most delinquent and uninspiring student into a first class success. Here are the basic elements of that system:

 1.         Always adopt a categorical marking scheme. Make sure that the first class band of marks covers 30% of awardable marks (e.g. 70-100%), whereas other classification bands cover only 10% (e.g. upper second class from 60-69%). Within the first class band, make sure there are as few categorical marks available to be awarded as possible and that there is a giant leap in awardable marks between a low first and a good first. For example, make the following marks the only ones awardable in the first class band: 75%, 90% and 100%. Then make sure that the assessment guidelines for 90% are as similar as possible to those for 75%, but with an added factor that all first class scripts would normally possess (e.g. to be awarded 90% a piece of work must have all the characteristics of a piece of work worthy of 75%, but will show “evidence of breadth of reading”).

 2.         Always make sure that each piece of work is double marked, and that any discrepancies between markers are rounded up (e.g. if one marker awards 75% and the second awards 95%, then award 95%).

3. Allow all final year students a resit option on failed papers that is not capped at the basic pass mark. Indeed, also consider allowing final year students the opportunity to resubmit any piece of work where they are not satisfied with the original mark.

4. Include MCQs as a highly weighted component of every course/module in both the second and final years. Ensure that these MCQs are drawn from a limited bank of questions that is recycled every year. Conveniently forget to adjust the marks on these MCQs for the possibility of chance correct answers.
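For the record, the adjustment that point 4 advises examiners to "conveniently forget" is the classical correction for guessing: with k options per question, a blind guesser gets 1 in k right, so the corrected score deducts W/(k−1) for W wrong answers. A sketch (function name and defaults are mine):

```python
def corrected_mcq_score(right: int, wrong: int, options: int = 4) -> float:
    """Classical correction for guessing: R - W/(k-1).
    A student who guesses every question blindly expects a score
    of roughly zero once the correction is applied."""
    return right - wrong / (options - 1)

# Omitting the correction, a student guessing all 40 questions of a
# 4-option MCQ expects 10 raw marks; after correction, zero.
```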

5. Include as many assessments as possible where the student has the opportunity to score 100% (e.g. MCQs, assessments where there is an indisputable correct answer or answers, etc.).

6. Have at least one course/module in the final year that is weighted to make up most of the marks for that year (e.g. a final year project/dissertation). Ensure that the credit weighting of this course is excessive (e.g. equivalent to four other courses), but that the work required of the student is nowhere near the work required by four courses. Make sure the students are aware that this is the course/module on which they should concentrate their efforts.

7. Adopt a very liberal classification borderline “bumping up” scheme that ensures as many students as possible below a borderline meet the criteria for being “bumped up” into the higher classification bracket, even if they haven’t achieved the required aggregate mark for that higher classification. Make sure that this is a mechanistic “bumping up” process determined by an algorithm (don’t involve the external examiners in this process – they may question it!)
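The mechanistic algorithm of point 7 might look something like this sketch (the boundaries and the width of the "bumping" zone are hypothetical, and no external examiner was consulted in its writing):

```python
def classify(aggregate: float, bump_zone: float = 3.0) -> str:
    """Award the highest class whose boundary the candidate falls within
    `bump_zone` marks of -- the required aggregate itself is optional."""
    boundaries = [(70, "First"), (60, "Upper second"),
                  (50, "Lower second"), (40, "Third")]
    for cutoff, label in boundaries:
        if aggregate >= cutoff - bump_zone:
            return label
    return "Fail"
```

A candidate on 67% is thus duly "bumped up" to a first without ever achieving a first class aggregate.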

8. Introduce changes to the assessment and classification processes every year. This will mean that students will usually be graded simultaneously under two schemes – the “old scheme” and the “new scheme” – and all candidates will be classified according to their ‘best’ outcome from either of the schemes.
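Point 8's dual classification amounts to taking the maximum over schemes. A sketch with two hypothetical weighting schemes (the weights are invented for illustration):

```python
def old_scheme(year2: float, year3: float) -> float:
    """Hypothetical 'old scheme': final year weighted at 60%."""
    return 0.4 * year2 + 0.6 * year3

def new_scheme(year2: float, year3: float) -> float:
    """Hypothetical 'new scheme': final year weighted at 80%."""
    return 0.2 * year2 + 0.8 * year3

def best_outcome(year2: float, year3: float) -> float:
    """Classify under both schemes; keep whichever flatters the candidate."""
    return max(old_scheme(year2, year3), new_scheme(year2, year3))
```

A candidate who blossomed in the final year is graded by the new scheme; one who peaked early is graded by the old. Nobody loses.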

9. Encourage students to apply for concessions and submit mitigating evidence. Make this process as simple as possible and set no deadlines for evidence to be submitted. In particular, allow mitigation to be submitted after the student knows their degree classification.

10. Allow external examiners to adjust agreed marks – but only upwards, so as not to unnecessarily disadvantage any student.

11. There is no need to make candidates’ identities anonymous. Some good students may have an off day in the exams, and names on scripts will allow the internal examiner to mark the candidate according to their ability rather than on an exam “off day”. Poor students who perform above their expected ability in an exam can be identified and rewarded accordingly.

12. External examiners need pacifying and domesticating. Make sure they are put up in comfortable hotels and treated to expensive dinners. Always tell them that there have been IT problems in the Registry and that a full summary of marks and assessment statistics is unavailable this year. Fabricate at least two admin staff illnesses to explain why scripts and coursework could not be sent to the external for moderation. Compliant externals should be reappointed for additional years after the end of their term of office. Make it clear to externals that assessment guidelines (and anything else they may query) have been imposed by the University central administration and are beyond the control of the Department. Regularly change Departmental Exam Officers so that no single individual can acquire enough knowledge to ensure the assessment period is conducted according to the full set of regulations.

13. If an external examiner attempts to question the objectivity and validity of an examination and assessment process, the Registry should reply that too few external examiners across disciplines have raised this particular issue for it to require a change in University policy. University Registries should also ensure that the full range of external examiners’ reports is not compiled in any single place where it would be freely available for general scrutiny.

14. Finally, make sure that a directive comes down from the University Registry to all examiners to “mark generously and use the full extremes of the marking scales – especially the first class band of marks”. This, of course, is imperative if the institution is to achieve a good grading in forthcoming National Student Surveys!

Please feel free to suggest more practical ideas by which Universities can adjust their assessment and classification processes to generate ever-increasing percentages of first class students. Don’t forget, well-qualified graduates are our future – we need more of them!