
The PhD viva in the UK: what to expect.

May 22, 2014

I had a friend ask for advice about the PhD viva voce examination here in the UK, as her friend has his coming up. I decided to write an email detailing what I thought were the basic points, along with a few specific anecdotes from my own experience. Re-reading it, I thought it deserved a wider airing. I’m happy for people to add to it if they have their own advice or examples. Here’s the unvarnished response:

PhD vivas, in the UK at least, are very closed and mysterious affairs, but they have common structures. Here’s a brief overview of what I have seen/heard in all vivas:

Above all else, the viva seeks to answer three questions: 1) did the candidate write it? 2) is there an original contribution to knowledge? 3) is there a sufficient volume and quality of work to justify a PhD? As I’ll explain later, the PhD viva is the only examination in which “I don’t know” is the right answer to some of the questions you’ll get asked.

First, a good examiner will try to settle the nerves by asking a simple general question, usually “could you tell me, generally, about your thesis? In other words, explain what you did, why you did it, and what you found.” The aim here is also to make sure the candidate has some knowledge of what was done – if not, then maybe their supervisor wrote it for them! It is important to get the response to this obvious question right. Too short, and it looks like you don’t care; too long, and you’ll sound full of yourself (or full of shit). Aim for a response lasting about 10 minutes.

There are other general questions that might precede or follow this, such as “What is science?” “What is the gold-standard form of clinical trial?” or “What is knowledge?”. These can be surprisingly hard if you’ve been focusing on the minutiae of the thesis. I remember somebody telling me that a guy who had spent 3 years studying calcium transients in skeletal muscle was simply asked “how does a muscle contract?” and he was stumped!

From there on, the examiners will get into the detail. The important thing to bear in mind is that this is a defence – examiners rarely spend time telling the candidate what they liked about the thesis. The aim here is to probe the uncertainties and loose ends, and test the data presented to see if it is robust. This can be quite a depressing process because you’ll end up thinking that they hate your work when, in fact, most of the time they are just honestly asking about things they are unsure of. It is also important to remember that a good examiner is looking for honesty from the candidate. In my viva, for example, I was asked a question about the effect of muscle temperature on oxygen uptake at the mitochondrial level. After 15 minutes of considering all the possibilities, I said “…but I really don’t know.” My examiner said “Good! It’s fine to say you don’t know if you really don’t know!”

I have seen several approaches to the thesis bit. Most of the time it’s page-by-page. Sometimes it’s thematic (with long sections on specific topics), and sometimes it seems to be scattergun. Mine was the last of these, and that made it very hard as I never knew what was coming next. In here, there may also be general questions about research approaches and detailed questions about the statistics used (if relevant). In other words, know your normal distributions, p-values, confidence intervals, t-tests etc.

After the thesis bit comes the obvious “what’s next?” question(s). The examiners are examining somebody who has been through three years of research training, so they will want to be clear that the training has been worth it and that the candidate has new queries to chase up.

Finally, and this can sometimes happen, the examiner will say “Is there anything you want to ask me?” This is great, because you’ve got a big name in to examine you (possibly even your academic hero) and you can ask him/her anything you like. Best to ask something about his/her take on the field and where it’s going now, rather than wasting it with a “Would you rather have elbows or knees?” type question.

And at that point, they will send the candidate out of the room while they make a decision. Then it’s tea and medals.

Griffith Pugh: Britain’s first applied sport physiologist

July 29, 2013

This post is motivated by a few things. The first is the anniversary of the London Olympic games; the second is my recent completion of Harriet Tuckey’s superb biography of her late father, Dr. Griffith Pugh. Scientific support for elite performance has been a major strength of Team GB’s Olympic preparations for some time now. Every now and then it is natural to look back and wonder where it all came from, and who deserves credit for giving sport science influence in elite sport. This post will seem rather limited in geographical scope, since I’m only talking about the UK. I am also really only talking about physiological support (of which I myself am not really part, being an educator and researcher), but the narrative for psychology or biomechanics would differ only in the names used to acknowledge primacy. When all this happened is also roughly the same regardless of which discipline you refer to. Finally, this whole post can be considered a failed book review, as I am not a literary critic.

We could look back Olympics-by-Olympics to try to see where it all started, and we would soon realise that prior to the Sydney games, very little scientific support was truly systematic. In fact, it was only following Britain’s Olympic nadir in Atlanta in 1996, and the subsequent boost received from lottery funding in 1997, that things really changed for good. Prior to that, sport science support was patchy. Britain’s Olympic successes had more to do with good coaching, often at quite a local level, than any high-performance systems. Previous Olympics also flattered to deceive: Moscow and LA were boycotted by a significant number of competitive countries, and Montreal and Munich took place against the backdrop of significant social and economic problems for Britain, when Olympic sport was not a government or cultural priority. Success was the result of the hard work of a few rather than the collective efforts of many.

Who were the few hard-working scientists? In the early 1990s it was the likes of Peter Keen, then coach of Chris Boardman, who applied science to every aspect of preparation. That this approach was still considered oddball or revolutionary in 1992 says much about prevailing attitudes in both the British press and the sporting establishment. One wonders who the first applied sport scientist to really make a tangible impact was. Ask an undergraduate this question, and the answer is likely to be Nobel Laureate A.V. Hill. He undoubtedly deserves enormous credit for extending his studies into athletics, and for providing some of the key concepts and theories still investigated to this day. But he didn’t really support athletes directly. Instead, I would suggest the first applied scientist to have an impact on human performance in the UK was Griffith Pugh.

Harriet Tuckey’s biography, entitled “Everest The First Ascent”, should be required reading for all sport science undergraduates, and for anybody interested in the history of applied physiology. With that in mind, I won’t present a series of spoilers, but I will justify my assertion that Pugh deserves credit as Britain’s first sport physiologist. Pugh had been applying science to practical situations since the second world war, when he was stationed at a military ski school in Lebanon. His first intervention there was to reduce the soldiers’ training loads, resulting in an immediate increase in both morale and performance. In 1951, through his work at the Medical Research Council, Pugh was recruited as a high altitude physiologist with the British and Commonwealth Himalayan expeditions. His insistence on the use of oxygen, proper acclimatisation, and good nutrition was pivotal in the 1953 expedition reaching Everest’s summit.

Following his support of the Everest expedition, Pugh was involved in other Antarctic and Himalayan expeditions with Edmund Hillary, with a key focus of the Himalayan work being the physiological effects of chronic hypoxia. The Silver Hut Expedition, as it became known, remains perhaps the most comprehensive field study of its kind, and the data generated remain highly relevant today. But Pugh went far beyond these studies in later work, which included providing recommendations for surviving cold water immersion, reducing the risks associated with hiking in bad weather (particularly in the UK), and establishing the likely effect of high-intensity exercise at medium altitude ahead of the Mexico Olympics. The latter work also included measures of the time required to acclimatise to Mexico City’s altitude. In doing so, Pugh realised that heat could be as great a threat as altitude, and studied that aspect of physiology in depth too. Pugh did all this by first talking to those involved at length in order to understand what they perceived the challenges to be. Then he applied the science, using first principles and relevant data collection, to overcome the challenge. Those I know in the English Institute of Sport and elsewhere take exactly the same approach today, because it works.

Harriet’s wonderful text tells the above and very much more about Pugh. Of particular interest are the battles he almost always faced to achieve credit and credibility for what he did. Some of those were his own doing, but many were the product of British conservatism and class-based networks which have largely (and thankfully) disappeared from most successful elite sport. The systems we see providing scientific support to elite athletes today are, I think, exactly what Pugh would have recognised as the right way to do things. But there is so much more to the book than Pugh’s physiological work. It is written by an author who had no interest in finding out who her father was until the end of his life, and who only really found out long after he had died. In that sense the book is almost as much her story as it is his. Consequently, what Harriet found out is both fascinating and deeply moving.

Everest The First Ascent is a brilliant account of the life of Griffith Pugh, Britain’s first applied sports physiologist. Such a book would make any father’s chin rise “with pleasure and pride” (Tuckey, 2013, p. xix).

The dangerous world of the essay writing company: or a comedy of errors’

May 20, 2013

It’s that time of year again, when academics the world over seem to be treading water, or drowning, in marking undergraduate exam scripts, essays, and dissertations. It is one of the more soul-destroying parts of the job, because although in reality it is a period lasting just a few weeks, it feels like several months. For me personally, it is a time of constant pressure. Pressure to make progress on the piles around the office. Pressure to do justice to the students, to make the right call in judging work that could colour their future. Pressure to finish it so that I can get to the things I’m having to neglect. Pressure that comes from wondering if those research ideas that were starting to come together in April will make any sense in June. But on the face of it, this pressure is generally tolerable. There is always light at the end of the tunnel. This post is about the other side of this pressure, and those that seek to profit from students failing to cope with assessment deadlines. Just put “essay writing” into a search engine, and you’ll find hundreds of companies willing to sell a custom-written essay to students at all levels.

The marking season always brings this issue to the fore, for reasons obvious but also, for me, reasons historical. Some years ago, a PhD student at Aberystwyth, now known as Dr Simon Payne, was shocked to discover that one such essay writing company had a stand at the Aber rugby 7s tournament. He brought a flyer to my office, and at that point I started to investigate. The results were shocking. The company in question can be found here. If you’re an academic like me, their about us page will make your blood boil. More so than the obvious typo in my title (stand down, you’re not marking now). At current rates, a next-day, publishable PhD thesis will set you back more than £18,000. Whilst this is ridiculous for a whole host of reasons (unless you’re the son of a dictator), the more troubling part of their business is the services they offer to undergraduate students. A 3,000-word essay will cost about £300-£400. Not the sort of money most students have to spend on this sort of thing, but this is the top end of the market. There are many cheaper options out there.

It’s not actually the money, it’s the principle of the thing. This company purports to be a support service, and would have you believe that student support at UK universities is so poor that their help is needed. Then there is the thorny issue of whether or not it is cheating. Luckily, they address this concern head on. If you buy any of those arguments, university is not for you. These companies are the antithesis of everything we work for, and everything we aim to instil in our students. All universities have mechanisms in place to deal with all of the reasons they give for using their service on the cheating page. Moreover, why should privacy be an issue if the service is above board? I don’t think this line of business is legitimate. They almost certainly disagree.

Legitimacy is a big issue for these companies, and they claim to have hundreds of top-class academics working for them (I doubt that). One hilarious example of this claim was provided by a company who made the mistake of following me on Twitter. I immediately told them exactly what I thought of them (see above), and then dived down the rabbit hole to see what they were about. To my surprise they had an “our team” page with Drs and Profs identified by picture and first name (only). That looked weird, so I guessed those people weren’t who they said they were, and thanks to some quick work from Stuart Miller the true identities of these academics were established within 20 minutes. All were big-hitting academics from across the world, and we emailed them all that evening. All decided to contact their legal people, and the next morning the pictures had all been replaced with stock images. The same images were also used by a completely different site, which swapped its pictures of the academics for the stock photos without us ever contacting it. That the same group is running several sites is therefore obvious. Although I can’t be certain, I’d be surprised if you ever got an essay back from these companies if you coughed up their fee. I have no intention of testing that hypothesis though.

This comedy of eros (don’t!) stops being funny when you think about the implications for education. Extended writing is an essential academic skill. It is not easy even in your first language, and the only way of getting good at it is to learn the hard way. Paying for it instead, whether out of stress, because you’ve convinced yourself it’s a revision aid, or as brazen cheating, might get you through your studies, but it won’t prepare you for postgraduate employment. What if your job requires a business case, a critical analysis of options requiring millions of dollars of company expenditure, or a government policy change? You can’t pay an essay writing company then.

More worrying still are the implications for foreign students, who are becoming an increasingly important source of revenue for universities. Some of these students are funded by their governments to study in the UK. This sometimes comes with strings attached. If they fail, for example, they have to pay the money back. For a PhD, this could amount to £60,000 or more. Fail to pay, and they go to prison. Or worse. A colleague of mine told me a tale of two Masters students who failed and were met at their home airport by a group of officials, bundled into a people carrier, and never heard from again, by anyone. These are the students most likely to take the risk with these companies, and with plagiarism detection technology advancing all the time, they will be caught and punished. The companies (if they even deliver) will face no penalty at all, which is why they still exist.

Altogether, I think we as academics have a duty to call these companies out whenever we get the chance. They are morally bankrupt parasites who deserve nothing but contempt.

Why the BBC’s Wonders of Life is not for me and why that doesn’t matter

February 4, 2013

It happened at about fifteen minutes past nine last night. I realised that, sadly, I was not destined to enjoy Prof. Brian Cox’s Wonders of Life series. I’m a life scientist. I’m unlikely to learn anything from it, but I should at least enjoy it. But when Brian uttered the phrase “Nature abhors a gradient” in relation to electric charge I knew I wouldn’t see much more of it. Beautifully shot, and with a very challenging topic (bringing the physics and chemistry of life to prime time TV), I know deep down it’s myself I’m letting down by switching off. But here’s the thing: I and several others took issue with the aforementioned soundbite. It just didn’t sound right from a biological perspective. So my simple response is this: Nature (in the sense of physical law) does indeed abhor a gradient, but life doesn’t. In fact, if you’ve got a minute, life is life because of gradients! Pressure gradients allow you to breathe in and out and allow you to pump blood; concentration (or, more accurately, partial pressure) gradients allow oxygen to diffuse into the cells and ultimately the mitochondria, where ATP is resynthesised using the proton motive force. That force is produced by (you guessed it) pumps that set up charge gradients across the inner mitochondrial membrane. These metabolic processes produce carbon dioxide, which is released to the atmosphere using similar gradients. The condition of life without gradients is generally known as “death”.
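To put just one of those gradients in textbook form, Fick’s law of diffusion (standard physiology, not anything quoted from the programme itself) says the rate at which a gas crosses a membrane is proportional to the partial pressure gradient across it:

$$\dot{V}_{\mathrm{gas}} = \frac{A \cdot D \cdot (P_1 - P_2)}{T}$$

where $A$ is the membrane’s surface area, $T$ its thickness, $D$ the diffusion constant of the gas, and $P_1 - P_2$ the partial pressure difference. Abolish the gradient (set $P_1 = P_2$) and gas exchange stops dead.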

So Wonders of Life is flawed and a failure? Absolutely not. I am one among many who lament the apparent dumbing down of science programmes on TV, most clearly demonstrated by Horizon’s tendency, until recently, to be 45 min of landscapes and classical music and 15 min of science content. Contrast this editorial policy with its earlier triumphs, most notably its Feynman interview. The printed media are even worse, of course, abusing genuinely interesting science stories left, right and centre.  But for television, I think times have changed. We have BBC4 (for the time being), with Jim Al-Khalili’s tremendous physics and chemistry short series, and we have Wonders. TV isn’t so dumb after all.

But there is one thing here I’ve missed out, and it is without doubt the most important thing: I’m not Wonders of Life’s target audience. I know what an action potential is.  I know about conservation of energy, I know about charge and mass balance. So apart from “doesn’t this show look nice” I’ve got nothing to gain from viewing it. But I forget that millions don’t know what I know, and Wonders is for them, not me. My “Wonders” happened the first time I saw a breath-by-breath profile of gas concentrations at the mouth, and I’m paid to keep seeing it. So consider this: 15 minutes after Top Gear, a modest but intelligent man patiently explained the physics and chemistry of an action potential to an audience of millions. For that reason, I will defend the BBC’s magnificent public service broadcasting until my dying breath, at which point I really will abhor a gradient.

The 11-year-long “damn I wish I’d said that”, and bravery in scientific discussions

January 1, 2013

I received the shocking news on New Year’s Eve that Professor Rev. Anthony J. Sargeant, formerly of Manchester Metropolitan University and a “big name” in exercise physiology, had been convicted of creating and storing over 3,000 indecent images of children. Such evil crimes hit even harder when you know of the perpetrator. In this case, I have the misfortune not only of knowing of him, but of having interacted with him on several occasions. Some held him in high regard. Others, including my PhD supervisor and many of his colleagues, detested him. Criminal activity aside, he had a reputation for being an “arsehole”.

Every field of science has its arseholes. Human (exercise) physiology has its fair share, though the names vary depending on who you talk to. Arseholes are those people who go beyond normal robust scientific scepticism. They make strenuous efforts to publicly discredit your work, and often you personally, usually as a method of promoting their own work and demonstrating their superior intellect. Strangely, this is not a bad thing overall; indeed, it could be argued that the arseholes help in producing good science. I’ve often heard people say “so-and-so is going to be at this meeting, I’d better make doubly sure this interpretation/analysis checks out”. The problem is that arseholes never stop being arseholes. And so to the anecdote this whole post is about.

In the spring of 2002 I was searching for a new job. I had a one-year teaching contract at the University of Brighton in Eastbourne, where I did my PhD, and my then boss had done me the huge favour of not promising me a contract extension. This is not a backhanded comment, or anything stated in rosy retrospect. I had spent seven great years in Eastbourne and needed a change. So I applied for two jobs that spring. I was successful in getting the second job, at Aberystwyth, and spent 10 blissful years there. But the first one was an interview at MMU, where my PhD supervisor, Andy Jones, was working. This was the obvious attraction, but it was, and still is, a very good place to do human physiology. This was a two-stage interview, as most are for lecturing posts. The first stage is usually some kind of presentation (the “prove you’re not mute in front of an audience” part), and the second is the formal interview. The confrontation took place in the presentation.

The brief for the presentation was to give a 15-minute overview of our research followed by 10 minutes of questions. At the time, I had just finished my PhD on the effect of prior heavy exercise on oxygen uptake kinetics (examined by the late, great Brian J. Whipp), which had also yielded four publications at that point. I was therefore confident about what I was presenting, even knowing an arsehole was in the audience. So I delivered the best presentation I could to about 40 staff and students, and then prepared to take questions. Sargeant softened me up with a number of technical questions which were, I think, designed to expose my lack of thought about my methods, before delivering the fatal blow. He simply asked “what is new about this work?”

This was easy to answer, as the same question had come up in my PhD viva. I gave the same answer: that I’d shown that prior heavy exercise increases the primary amplitude of the VO2 response, likely as a result of an increase in motor unit recruitment. His response to this was, and still is, mind-blowing. He said “yes, but we’ve known that for twenty years, haven’t we?” I was speechless. He’d achieved his goal at this point, and decided to rub it in. “It was shown by Krogh and Lindhard, 1975, wasn’t it?” Again, I had nothing in return. It didn’t sound right, given that the only reference with those authors I could remember was published in about 1920. But I said nothing, and the questions moved on. It bugged me at the time, and a few weeks later I decided to check. No such study exists. Krogh, for instance, died in 1949. But in the heat of the moment, and knowing that my future was in the balance, I did not challenge him. He knew that I wouldn’t challenge him, too, which is probably why he did it. But how much of an arsehole do you have to be to lie through your teeth and invent references to belittle somebody else, at an interview?!

If only I had my time again. Knowing what I know now, I’d have said something along these lines: “Krogh and Lindhard, 1975? I’m not familiar with that study. Let’s look for it…”.  The lesson here is that there is nothing to be gained from not standing up to an arsehole, and by standing up to them, you might just destroy their credibility.

Thankfully, the criminal justice system means academia has one less arsehole.  But, tragically, Anthony J Sargeant is several orders of magnitude more evil than even the scorned like me can imagine.  Many younger people have had their lives destroyed by him and others like him.  As a result, if this post seems bitter and overly personal, well, tough shit.

On hydration, research, media hype and contrarianism

August 26, 2012

It’s been a while since I blogged about anything, largely because Twitter has served that function perfectly well during my first year as a father. Alex’s birthday is on Tuesday, so I thought I’d celebrate by dusting off the blog and saying something about science.  So with a bottle of the (now) local brew, Spitfire, in hand, here goes…

In late July of this year, the BBC broadcast one of the most biased programmes I’ve ever seen. I love the Beeb, and the usual problem for them is not bias but false balance – but that’s a whole other blog post. The programme in question was Panorama, and the topic was sports products. The basic narrative was that most sports products lack any scientific basis, and those that are supposedly backed by science (sports drinks in particular) are based on poor quality evidence sponsored by industry, which leads to widespread publication bias. Watching the programme as an outsider, you’d be forgiven for thinking that sports science research was being conducted by either clueless muppets or industry shills. I hope he won’t mind me singling him out, but David Briggs commented on Twitter (I paraphrase) “[Panorama] is like a Skeptics in the Pub, evidence or GTFO”. I agree; as an exercise in dismantling marketing hype, the programme was an epic triumph. However, knowledge of the source material and the background of some of the “talking heads” strongly suggests something altogether less palatable was going on.

The source

The BMJ published a series of free articles to coincide with the Panorama broadcast, and they make universally depressing reading. They included a commentary by Tim Noakes, a feature article by Deborah Cohen, and several articles by a team of researchers led by Dr Carl Heneghan. The most important for the purposes of this blog was their “40 years of sports performance research…” article, which laid out why sports performance research was, in their analysis, of such poor quality that the results are largely uninterpretable, and certainly not generalisable. There have been a number of responses to these articles by those involved and those named in the articles as being in the pay of the sports drink industry, and I will not repeat their critiques; they are well worth reading on their own. I do not, therefore, intend what follows to be a systematic analysis of the source material; I simply wish to raise some issues that have not, in my view, been adequately or completely addressed.

With regard to the quality of sports performance research, the charge that it is poor stings me personally, as I have been involved in studying sports performance (not sports drinks but other stuff, see here). Many of the criticisms, such as low sample sizes and the use of poor surrogate measures, also apply to my research, so I’d like to briefly address them.

In Heneghan’s paper (above), they suggest that, from epidemiological research, “small” sample sizes are those with fewer than 100 subjects in each arm of an experimental trial. But sports performance research is primarily laboratory-based, although with GPS and power-measuring devices field studies are becoming more common. There really is no substitute for getting into the lab and measuring things in a carefully controlled environment if you want to learn anything about physiological mechanisms. If you disagree with this, it’s probably because you view the performance bit as being more important than the physiological mechanisms.

My bias is the other way, but both approaches have an abundant literature base. Either way, you are rarely going to find 100 or more subjects in a sports physiology paper. There are two principal reasons for this. First, there simply aren’t enough people willing to participate in physiology studies involving exhaustive exercise and (often) invasive procedures such as blood sampling and muscle biopsies. Secondly, sample size is determined by considerations of measurement variability and effect size. If the former is small and the latter large, it is quite possible to achieve a desired level of statistical power with fewer than 20 subjects. It would be unethical to test another 80 subjects if 20 would do. Anybody working in the field knows this, as we have to convince ethics committees that we need as many subjects as we think we do. My highest n so far has been 23!
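To make that concrete, here is a minimal power calculation in Python. The numbers are hypothetical, chosen only to illustrate the arithmetic (a crossover design in which the treatment effect is as large as the within-subject variability, i.e. Cohen’s d of 1.0), not figures from any particular study:

```python
# Hypothetical power calculation for a paired (crossover) design.
from statsmodels.stats.power import TTestPower

mean_difference = 10.0    # hypothesised treatment effect (e.g. watts)
sd_of_differences = 10.0  # SD of the within-subject differences (watts)
effect_size = mean_difference / sd_of_differences  # Cohen's d = 1.0

n = TTestPower().solve_power(effect_size=effect_size,
                             alpha=0.05,   # two-sided significance level
                             power=0.80)   # conventional 80% power
print(f"Subjects required: {n:.1f}")       # ~10, nowhere near 100 per arm
```

Shrink the effect or increase the measurement noise and the required n climbs rapidly, which is exactly the trade-off we have to justify to the ethics committee.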

A secondary reason for the low sample sizes in hydration research in particular is that the studies are incredibly time-consuming (by human performance research standards). A well-controlled fluid balance study will require the subject to compile a lead-in food diary (repeating that diet exactly if repeated visits to the lab are performed; they frequently are), at least 1 hour of preparation in the laboratory (to establish the baseline), pre-test measures, exposure to the environment before exercise commences, and then often exercise lasting 1-3 hours to exhaustion, followed by removal from the environment until core temperature and/or hydration status has reached pre-defined cut-offs that allow the subject to safely leave the lab. All the while, all fluids consumed and all urine produced need to be monitored, and the subject repeatedly weighed (nude). Suffice it to say that these studies can take all day and several lab staff to complete one subject’s test, and there may be several tests per subject. As an example, to perform one placebo-controlled study in Aber, I had to complete a pre-test, two familiarisation trials (to exhaustion), followed by two further trials with different drinks. Each visit after the pre-test started at 7 am and I left the lab at 2 pm. The scientists in question would then have needed to work into the evening to analyse the data and clear up. So, 29 hours for n=1!

Poor surrogate measures

A second suggestion of Heneghan et al. was that too many studies performed time to exhaustion trials rather than time trials, the latter being more like the “real world” and therefore better. I have a bit of a problem with this too, as it is not only bollocks, but clearly demonstrable bollocks. Several studies have shown that although time to exhaustion trials are more variable than time trials, the effect of a treatment on both is about the same relative to that variability. Amann et al. (2008) clearly demonstrated this, and my own research shows that priming exercise increases time to exhaustion by more than 10%, whereas the effect on time trial performance is about 2-3%. Both results are statistically significant and equally meaningful. There is, therefore, plenty of evidence that time to exhaustion trials and time trials are similarly sensitive. Time trials have the advantage of being a close simulation of actual performance, but have the very real disadvantage of producing large amounts of data that cannot be directly compared with other conditions because the work rate is, by definition, uncontrolled. On the other hand, time to exhaustion suffers from being unlike any event besides certain parts of “World’s Strongest Man”, but is the method of choice for those of us wanting to understand the physiological response to exercise. Heneghan et al. should have known this from even the most cursory glance at the literature. As explained below, however, they did not give it even that.
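A back-of-the-envelope sketch of why the two protocols end up similarly sensitive, using purely illustrative numbers loosely based on the percentages quoted above (the test-retest variabilities are my assumptions, not data from any specific study):

```python
# Treatment "signal" versus test-retest "noise" for the two protocols.
# All percentages are illustrative assumptions.
protocols = {
    #                     (treatment effect %, test-retest variability %)
    "time to exhaustion": (10.0, 10.0),
    "time trial":         (2.5, 2.5),
}

for name, (effect, cv) in protocols.items():
    print(f"{name}: {effect}% effect / {cv}% variability "
          f"= signal:noise ratio of {effect / cv:.1f}")

# Both ratios come out around 1.0: the larger effect seen in time to
# exhaustion tests is offset by their larger variability, so neither
# protocol is inherently more sensitive than the other.
```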

Poor methods of the BMJ articles

The most disappointing aspect of the analysis of Heneghan et al. is that they did exactly what you would do if you were basing a Skeptics in the Pub talk on sports drinks – examine the claims the manufacturers are making, and see if they stack up. However, as illustrated in a number of responses to the articles, they went far beyond this in their conclusions, and their methods do not justify them. What they did was ask manufacturers to supply the science underpinning hydration. Only GSK, manufacturers of Lucozade, got back to them. They then went about reviewing the articles they had been given, assuming that the quality of the articles would be the same for all other sports drinks, without actually reading any of those referring to other drinks. Nor did they read any other hydration-related papers, as you’d need to do if you were being systematic. Had they done this, they’d have found dozens of papers from the 1940s and 1950s on hydration, pre-dating the formation of the sports drink industry. This is because many second world war operational theatres involved significant heat stress and high fluid intake demands. Logistically, figuring out how to hydrate several hundred thousand soldiers properly was pretty important. The Journal of Applied Physiology was created, in part, to provide an outlet for this often previously classified research.

The bias and the talking heads

It should be obvious where the bias comes from in the above: ask somebody for evidence supporting a marketing claim, and you’ll get evidence supporting that claim. If you don’t bother to look for any other evidence, you’ll have a biased and incomplete view of the issue. This, incredibly, led the feature writer to present publication bias in hydration research as akin to that of the pharmaceutical industry. Publication of positive findings rather than negative findings is a problem throughout the scientific literature. In order to be published, negative results have got to be unexpected or unusual, and that doesn’t happen very often. I don’t like that any more than you do, but that’s the way it is for now. I would just like to point out the key difference between “big pharma” and the sports drink industry. For the former, there is tangible evidence that negative findings were actively suppressed – Ben Goldacre’s Bad Science book is an excellent primer for that sort of thing. In the case of the sports drink industry, it’s possible that they have attempted to prevent some negative studies being published, but there is no evidence for this. More importantly, the product in big pharma can only be got from big pharma. In the case of the sports drink industry, I urge you to consider the contents of your kitchen. If you have some salt, sugar, sweetener, fruit cordial, a water supply and some form of thickening agent – pectin, for example – congratulations: you have the necessary ingredients for a double-blind, placebo-controlled study of sports drinks.
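For what it’s worth, the kitchen arithmetic is trivial. A minimal sketch, assuming a target composition typical of commercial carbohydrate-electrolyte drinks (roughly 6% carbohydrate w/v and about 20 mmol/L of sodium – both figures are my assumptions, not anything taken from the BMJ articles):

```python
# Home-made "sports drink": kitchen ingredients per litre needed to hit
# an assumed typical commercial composition (assumptions, not doctrine).
volume_l = 1.0
cho_percent_wv = 6.0        # assumed: 6 g carbohydrate per 100 ml
sodium_mmol_per_l = 20.0    # assumed: ~20 mmol/L sodium
NACL_MOLAR_MASS = 58.44     # g/mol, i.e. table salt

sugar_g = cho_percent_wv * 10 * volume_l             # 60 g sugar
salt_g = sodium_mmol_per_l * NACL_MOLAR_MASS / 1000  # ~1.17 g salt

print(f"Per litre: {sugar_g:.0f} g sugar and {salt_g:.2f} g salt")
# The placebo swaps the sugar for a non-caloric sweetener, with cordial
# for flavour and a little pectin so the two drinks taste and feel alike.
```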

Simply stated, it would be impossible for the sports drink industry to bury negative findings, since anybody with a few quid and some accurate scales can repeat and test the evidence.  Moreover, the vast vast majority of hydration and carbohydrate studies in the literature do not use a commercial sports drink.  Instead, the scientists make their own drinks from clinical-grade ingredients.  If they are using protein or amino acid mixtures, and they have any sense, they’ll also send it for mass spectrometry or HPLC analysis to make damn sure what they think they have they really do have.  Commercially available protein mixtures are notorious for having all sorts of things in them they shouldn’t have.  Caffeine is the most obvious and common example of an unnamed ingredient.  So here is a further problem: if you’re only going to analyse those studies containing a particular commercially-available drink, you’ll miss 90% of hydration research anyway!

The Panorama programme was interesting because it took evidence from relatively few people, the most high-profile of whom was Prof. Tim Noakes. It has since transpired that those providing a counter to Noakes’ well-documented animosity towards the sports drink industry were either ignored, or interviewed and then cut from the programme altogether. It is also obvious from the responses posted on the BMJ site that nobody named in the feature article as conspiring to pervert the scientific interpretation of hydration research, or to suppress results counter to the industry message, was approached for comment before publication. It is to their credit that the scientists involved have used the right of reply rather than legal avenues to make their point.

It is unlikely that those reading the articles or watching the programme would know that Noakes’ views on hydration are considered contrarian. I pointed this out to Deborah Cohen on Twitter, during a sometimes heated debate running into the small hours, but we ended on good terms (I think). Interestingly, Noakes subsequently replied to me – despite not following me – cc’ing Cohen, to call me “unprofessional” for suggesting that he “quote mines” and “cherry picks” the scientific literature. I’m still not sure how he discovered my feed, and I wouldn’t want to guess here. I did make the claim that Noakes was paid by Powerade to do research. That was false and I am happy to correct it: his lab has received money from a company marketing Energade; he receives no personal financial income from the industry.

Quality of evidence?

Let us end on a lighter and broader note: how do we assess the quality of research?  In the UK, peer review is the way, and so it should be. The fact that this is wrapped up in the terrible mess that is the REF should not detract from that. But contrast this with the following, a written response from Noakes:

“In his rapid response Dr Michael Sawka implies that poor science not conflict of interest issues explains why papers are not accepted by leading journals like the Journal of Applied Physiology and Medicine and Science in Sports and Exercise, the Editorial Boards and review panels of which sometimes contain individuals who have close associations with the sports drink industry. If this is true, relevant papers rejected by these journals must subsequently fail to attract high citation rates when published in other journals.

In fact a number of our papers rejected by those journals were subsequently published elsewhere and have already been cited 50 times or more in the literature. Hence they could not legitimately have been rejected because they were of poor quality. Typical examples include the following four papers [1-4], already cited respectively 52, 74, 66 and 55 times according to the Web of Science.”

I’ll not insult your intelligence by pointing out the flaws in this argument.

So there we have it. A programme based on a biased narrative, drawn from the work of researchers who clearly don’t understand sports performance research, and who ignored the vast majority of the literature in any case. The sad truth is that sports science got a good stiff kicking in the process, and it’ll struggle to recover its already meagre credibility as a result.  Now if only sports science had a role to play in, let’s say, a Tour de France victory and 29 gold medals…

Update: 22 September 2012

Recently the BMJ published a surprisingly long list of corrections to the paper of Heneghan et al. mentioned above: http://www.bmj.com/content/345/bmj.e6085. These corrections follow correspondence from some authors of studies rated as at “high risk of bias”, who pointed out that their studies had, in fact, been misclassified. This was based on incorrect assertions of non-randomisation and a lack of blinding, when those studies were in fact randomised and blinded. In response, Heneghan et al. have downgraded the risk of bias in these studies to moderate, only because the studies do not exhaustively state the methods used to blind and randomise – otherwise the risk of bias would have been rated as “low”. There are likely to be other studies rated “high risk” that are not, whose authors have simply not contacted the BMJ to point this out. Whether this will, ultimately, alter the conclusions of the BMJ article remains to be seen, but I have never seen such a large number of corrections published for a single paper before.

Watching Parliament open-mouthed

July 24, 2011

This week, Parliament was recalled to hear evidence from senior police officers (in the Home Affairs select committee) and from Rebekah Brooks and Rupert and James Murdoch (in the Culture, Media and Sport select committee). Much has been written about both in all forms of media, but I was not interested in the “human” side of the story that many focused on, but rather the way in which both the MPs and the witnesses chose to use evidence. I was open-mouthed because it looked, for the most part, like an abuse of evidence.

I was viewing the sittings of both committees whilst trying to write an introduction to a paper. I always find this quite hard. I use a technique I imagine many scientists do, where you write a sentence and, because you don’t necessarily have the most appropriate supporting reference to hand, you write “(ref)”, with the intention of coming back to it later with the supporting evidence. The key thing is that you do come back to it later. This is because you live and die by the evidence you deploy here and elsewhere in the paper. For this reason, writing a paper takes bloody ages, whereas I’m writing this blog post while watching a grand prix, the Tour de France and the test match. At the end of the writing process, I naturally feel like I have a command of the evidence I’ve used and could defend it when pressed on what I’ve said. The select committees could not have been more different.

The first part of this evidence-abusing horror show was the Metropolitan Police Farce. These senior police officers had a week to prepare their evidence, in a profession in which evidence is, I would guess, quite important. I’m no lawyer, but I’d guess that when prosecuting a crime the timeline is important. How could you ever successfully convict anyone if you can’t establish the correct chronology of the crime? If you don’t, the defence are likely to emphasise the doubt. Yet what I saw amazed me: these police officers could only provide answers to the nearest calendar month, and sometimes this didn’t even make sense, like appointing someone in January to cover somebody who didn’t get ill until February. Overall it was astonishingly weak.

Then there was the Culture, Media and Sport committee. It has been suggested that the Murdochs used a deliberate “don’t know” or “it was someone else” strategy. If true, they shouldn’t be in charge of their own company. If false, then the judicial inquiry will find them out. But in this forum, it was the statements made by Louise Mensch which amazed me. Although she is now planning to correct the Parliamentary record, she clearly misquoted Piers Morgan to make it seem as if he was boasting about hacking, which he never did. Now, Morgan is no angel, but Mensch must have known that she would be challenged on the substance of what she said. Even if she was making the statement based upon evidence not yet in the public domain, why use as your evidence something that is public, and use it so badly?

Finally, there was Rebekah Brooks. She did two things that made me shake my head in disbelief. First, she claimed that the table presented in the “What price privacy now?” report, in which illegal transactions were tabulated by newspaper, had The Observer and The Guardian in the top 5. However, The Observer was ninth and The Guardian didn’t appear in the table at all. Presumably she thought she could get away with it, and she was right for the most part, being challenged only on the position of The Observer. The second jaw-dropping moment was when she was asked if she had read the CMS committee’s previous report, based in part on the evidence she had given. She replied, and I paraphrase, “I didn’t read all of it…” Hang on, I thought, here is a Parliamentary report, written about your business, and in places specifically about you, and you haven’t bothered to read it all? Perhaps it’s just me, but I’d have read it twice and back-to-front if it was about me. This, for me, summed the whole thing up: if you don’t think evidence is important, and you think you can argue your way out of trouble, you’ll probably behave like the characters in this sorry saga. Thankfully, I’m not that type of person because, as I’ve said before, evidence is sacred.