Summary

Doctors and needles seem to go together, but the hypodermic has only a relatively brief history. The curious tale of its enthusiastic adoption has much to teach us about contemporary practice. Our powerful medical culture seldom questions the role of injections, particularly of medicines that are often regarded as innocuous. Experience from developing countries of the multiple factors motivating injections will inevitably illuminate our own. The misleading role of personal clinical experience in this context makes rational, evidence-based practice difficult. The primacy of personal experience is a pitfall for doctors, but a potential benefit for patients. A greater understanding of the unjustly maligned placebo effect, and of the difference between controlling a clinical trial and healing the individual, will assist practitioners to avoid both hollow and sharp practice.

 

Introduction

Of all the symbols that distinguish and define doctors, the hypodermic must surely have the most potency. The needle simultaneously signifies the power to heal through hurting and condenses the notions of active practitioner and passive patient. Like the hollow fangs of the snake which curls around the staff of Aesculapius, the needle penetrates and perpetuates our power.

 

Optimistic origins

In 1853, in a paper almost presciently entitled `A new method of treating neuralgia by the direct application of opiates to the painful point', Alexander Wood introduced his hollow needle.1 Subsequent events demonstrated the cultural forces inherent in 19th-century medicine that are perhaps still relevant today. Within five years, injections of morphine had become enormously popular; thriving practices developed in response to what was seen as a potent, benign and beneficial treatment. Patients were treated with hundreds of injections. Their doctors seemed blissfully unaware of the systemic effects of the drug they were injecting and of the nature of the demand for the new treatment.

Charles Hunter was discouraged from using the new technique when his first two patients developed local abscesses. Obliged to choose new injection sites, he discovered in 1858 that patients gained just as much benefit from injections distant from the painful site. The stage was set for a turf war in which, from our elevated historical perspective, we, the acolytes of evidence-based medicine, can discern all the frailties of our profession. Hunter coined the term `hypodermic' and claimed his treatment superior; Wood accused Hunter of plagiarism and malpractice. Heated and acrimonious correspondence in the medical journals was eventually terminated by the appointment of a committee of the Medical and Chirurgical Society of London to investigate the merits of the two claims. The committee debated the alleged mechanisms of action of the two methods and, after two years, decided in Hunter's favour. Throughout the debate, physicians continued with both treatments, apparently blind to the addiction underlying the huge and increasingly lucrative demand.1

When trapped into dichotomous `either/or' thinking, debate becomes polarised and political, whilst the real issues often remain unexamined. In this 19th-century parable of our profession, the underlying assumptions of efficacy and safety were never questioned, the human cost of such medical hubris never even figured, and the doctrine of primum non nocere (first, do no harm) was forgotten.

 

Harmless?

All drugs, including placebo, have the potential to cause local and systemic adverse reactions.2 Unthinking practitioners may be reassured by reviews of safety which show that complication rates are low, particularly for more severe reactions like anaphylaxis. Even a high complication rate can be acceptable when the likely therapeutic benefit of a treatment is great and the condition is serious. However, how can we evaluate the risks of a drug whose benefits are negligible, perhaps nil? What is an acceptable risk when the illness is minor, even trivial?

In this circumstance the risk/benefit ratio tends towards infinity: as the expected benefit approaches zero, even a tiny risk looms disproportionately large. For example, the rate of local complications of intramuscular injection was less than 0.5% in a large study of more than 10 000 patients, but this still translates into 48 people affected by pain, distress, inconvenience, lost working time and medical costs.3

The risks of injecting drugs that are commonly regarded as innocuous also cannot be ignored. For example, thiamine is `relatively safe', but `the assumption that thiamine is a drug with a completely innocuous nature is not totally accurate'.4 Similarly, cyanocobalamin has been implicated in embolia cutis medicamentosa (circumscribed skin necrosis following intramuscular injection): `... that this severe complication may be associated with technically proper ventrogluteal injection of a wide array of therapeutic drugs shows that intramuscular injections require valid indications'.5

 

A mirror?

Developing countries have begun to recognise the burgeoning problem of inappropriate injections, perhaps because of their disproportionate impact upon small health budgets. `Injections are commonly overused in Indonesia ... which increases clinical risk and has adverse economic impact'.6 A successful behavioural intervention program aimed at reducing the inappropriate use of injections in Indonesia focused on `reality-testing prescribers' beliefs about patient assumptions'.

An Indian report of inappropriate injections exacerbating a polio epidemic found that adults approved of their children being given injections, despite having no knowledge of the substance injected or the reason.7

Experience in Ghana showed that motivating reasons for inappropriate injections were mainly socio-cultural and included patient demand and attitudes, prescriber self-interest and stereotypes, and the daily practical challenges of the community.8

It seems naive to assume that these factors are not operating in our society. Doctors and needles seem to go together, and the practitioner facing the `daily practical challenges of the community' may succumb. Most of us have changed our habits of practice as a result of bitter experience. Who can forget the devastation of the young woman with a stroke to whom we properly prescribed the oral contraceptive, or the horror of a full-blown cutaneous reaction to sulphonamides? However, a low complication rate makes any individual practitioner's likelihood of causing harm through injection small. Unless the complication is immediate, the patient may present elsewhere and we may remain unaware of it. Timely and appropriate feedback, a prerequisite for behavioural change, will seldom occur.


 

Pleasing?

There is every likelihood that practitioners will be assailed by volumes of inappropriate positive feedback. In 1875, Dr L. Lafitte accidentally injected a patient with water rather than morphine. He was astonished to be told later that day, `Doctor, I'm so grateful to you! You relieved my pain today without upsetting my stomach!' Thus discovered, the miraculous healing properties of injected water were quickly pressed into service for a plethora of conditions which were duly `cured or relieved ... in a miraculous and immediate manner'.1

A century later, I rediscovered this phenomenon as a novice locum in general practice. I reluctantly complied with the principal's schedule of water injections for a number of allegedly `neurotic' patients, whose enthusiasm for their regimen I found unshakeable. Emboldened by my recent grounding in the science of medicine, I explained the placebo mechanism on more than one occasion, only to be rebuffed by the primacy of patient experience: `Well, I suppose you know more about it than I do, Doctor, but it works just fine for me!'

Indeed, the placebo response, that bane of clinical trials to be controlled out in double-blind fashion, is arguably the foundation of our noble profession. For the first 2000 years or so of medicine, the vast majority of therapeutic success must be attributed to placebo. The placebo response commonly constitutes 30-60% of the total response,2,9 yet it is generally described in denigratory fashion. The word placebo is itself a symbol, value-laden with cultural meaning, often defined as `a medication designed to please the patient rather than benefit them'.2,9 However, innumerable studies, both human and animal, have demonstrated the potency and objective reality of placebo responses. Clearly, the medication given simply `to please the patient' has pleasing effects which go beyond the patient's merely pleasing the practitioner. Whilst expert debate now addresses whether placebo responses are best explained by conditioning, expectancy or cognitive dissonance, the reality of the responses themselves is evident.9

Practitioners encounter pleasing responses every day. Some of them are due to the beneficial effects of the powerful medicines available to us, but the majority are perhaps due to placebo effects. We are constantly beset by demands to `do something'. Indeed, all our training is to respond to need with action: diagnostic, therapeutic or both. Sadly, having taken an action, humans inevitably tend to attribute subsequent events to that action; sacrificing always seemed to work when it came to placating the gods. In a working environment in which there is infrequent feedback about the adverse effects of our actions and substantial, repetitive positive feedback as a result of the placebo effect, practitioners face real challenges in separating actions and outcomes which are not necessarily causally linked. Although we like to think of ourselves as rational human beings, the chastening paradox is that there is overwhelming rational evidence to the contrary. Bluntly put, practitioners are unlikely to behave rationally.

Worse, the most effective route of giving placebo seems to be by injection, and regularity and repetition are synergists.10 The stage is set for the establishment of unshakeable beliefs about efficacy and safety which are at variance with the evidence. The more experienced the doctor, the more irrational the treatment choices will become when experience alone dictates.

 

Sharp practice?

Prevention, it is said, is better than cure; therefore, choose treatments wisely, relying upon the available objective evidence rather than personal experience. A rational approach to the use of injections may prevent our medical successors, using the long lens of history, from judging us as harshly as we might judge the early history of the hypodermic. The immediacy of personal experience will inevitably lead us astray, as it did during the 19th-century cult of morphine injections.

How then should doctors respond to the placebo effect? The prevailing view derived from clinical trials is that the placebo effect is not real: it is an artefact, an impurity to be subtracted from the data to obtain an objective view. This is reasonable when taking an evidence-based view, but, to the patient, a placebo effect is as real as a drug effect. The biochemical changes which a placebo can produce are no less genuine for having been produced by a placebo. Giving all medicines by the safest and most appropriate route will ensure that doctors engage in neither hollow nor sharp practice.

 

Glyn Brokensha

Department of General Practice, University of Adelaide, Adelaide