Abstract

If Dr Jenkins has made up his mind that his training methods are adequate [1] and need no discussion, then it may be pointless to say more, but it seems worth making one further attempt. The simple training methods we propose were debated by the Difficult Airway Society, after which some very experienced anaesthetists made a point of saying they agreed completely. Dr Jenkins does not have to get on board, but he would be welcome; as we said, this problem can only be solved by teamwork.

In my last 6 years at Northwick Park Hospital I asked every newcomer what they would do if they could not see the cords, and I never got a satisfactory answer. Consequently, it was no great surprise to read Meek and Bythell's report of about 1% failed intubations in 4801 obstetric cases [2]. That failure rate matches the incidence of grade 3 laryngoscopy, which suggests that every grade 3 case ended in failure; by contrast, there were 27 grade 3 cases in the Durban study, all but one of which was intubated with nothing more than a Macintosh laryngoscope and a bougie. Of course the incomplete response rate to Meek and Bythell's survey weakens the evidence, which is why we did not include it in our main table. But consider this: most of us are more inclined to report our good results than our bad ones, so one would expect any bias to be in that direction. Respected journals are willing to publish surveys with incomplete response rates, so Dr Jenkins is unwise to pour contempt on this one; the verdict of history may be that their survey is more relevant than his.

May I correct Dr Jenkins on a statistical point? He says that p-values should only be quoted for properly designed trials. But a significance test measures how likely it is that a particular difference could arise by chance, and it is therefore valid whatever caused the difference. If it shows that chance is an unlikely explanation, then we have to consider what did cause the difference. At that point mathematics is no help; common sense is needed. If the trial was double-blind, with a control group and a cross-over design, then no ambiguity arises, but unfortunately such trials are seldom possible. If only perfect trials were published, then few would see the light of day and none of Dr Jenkins's would, which would be a pity. In real life we have to use less conclusive evidence: not ideal, but a great deal better than guesswork. Usually there are several possible causes for the observed difference and we have to assess the balance of probabilities. Nevertheless, the first step is always a significance test, which with a modern computer takes only a few seconds.

Next, may I draw Dr Jenkins's attention to another paper, namely West et al. [3], who evaluated the Northwick Park training drill and showed a striking improvement in performance? As it happens, I did the statistics for them, some of which needed methods not in the textbooks. The main conclusion, however, was based on simple regression analysis, which showed that the likelihood of the improvement being due to chance was 4 × 10⁻⁵. We all make mistakes, so it would be nice if he were to check the calculation.

Lastly, a distinguished statistician and former colleague at Northwick Park, Douglas Altman, emphasised that common sense is at least as important as the p-value. We all know that practice is crucial for any motor skill; we also know that trainees are getting less and less practice in the most fundamental technique of our specialty.
Jonny Wilkinson is arguably the best place-kicker in the world, yet he finds it necessary to practise for 3 h every day right up to the morning of the big match; by contrast, some anaesthetists go 6 months without using a bougie. Does this make sense? As we have said before [4], anaesthetic deaths have less impact on the world stage than rugby, but sub specie aeternitatis, which is more important?
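
As an aside to the statistical point above, the claim that a significance test "with a modern computer takes only a few seconds" is easy to illustrate. The sketch below is not taken from any of the cited papers: it assumes Python with SciPy, and the intubation counts and performance scores are invented purely for illustration. It runs Fisher's exact test on two hypothetical failure counts, and a simple linear regression of the kind mentioned in connection with West et al. [3], each returning a p-value in a fraction of a second.

```python
# Illustrative sketch only: Python/SciPy are assumptions, and every number
# below is invented; none of these are figures from the cited studies.
from scipy import stats

# Hypothetical 2 x 2 table: failed vs. successful intubations in two cohorts.
table = [
    [10, 990],   # cohort A: 10 failures in 1000 cases (~1%)
    [1, 999],    # cohort B: 1 failure in 1000 cases (~0.1%)
]

# Fisher's exact test: how likely is a difference this large under chance alone?
odds_ratio, p_value = stats.fisher_exact(table)
print(f"Fisher's exact test: p = {p_value:.3g}")

# Simple linear regression, in the spirit of the West et al. analysis:
# hypothetical performance scores against number of practice sessions.
sessions = [0, 1, 2, 3, 4, 5, 6, 7]
scores = [40, 45, 52, 55, 61, 66, 70, 74]

result = stats.linregress(sessions, scores)
print(f"slope = {result.slope:.2f}, p = {result.pvalue:.1e}")
```

Fisher's exact test is used here simply because it copes with small counts; whether chance is an unlikely explanation is, as the letter argues, only the first step before common sense weighs the possible causes.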
