Sensitivity and specificity - is your test reliable?
- Aust Prescr 2002;25:131-2
- 1 October 2002
- DOI: 10.18773/austprescr.2002.109
The reliability of a test depends on its sensitivity and specificity. You should ask 'How am I using this test, and how sensitive and specific is it?'
The sensitivity of a test is defined as the proportion of people with the disease who have a positive test result. A very sensitive test will rarely miss people with the disease. It is important to choose a sensitive test if there are serious consequences for missing the disease. Treatable malignancies (in situ cancers or Hodgkin's disease) should be found early - thus sensitive tests should be used in the diagnostic work-up.
The specificity of a test is defined as the proportion of people without the disease who have a negative test result. A specific test will have few false positive results - it will rarely misclassify people without the disease as being diseased. If a test is not specific, it may be necessary to order additional tests to confirm a diagnosis.
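The two definitions above are proportions calculated from the familiar 2 x 2 table of test result against disease status. A minimal sketch in Python (the counts below are hypothetical, chosen only to illustrate the arithmetic):

```python
def sensitivity(true_pos, false_neg):
    """Proportion of people WITH disease who test positive: TP / (TP + FN)."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Proportion of people WITHOUT disease who test negative: TN / (TN + FP)."""
    return true_neg / (true_neg + false_pos)

# Hypothetical study: of 100 diseased people, 90 test positive (10 missed);
# of 100 disease-free people, 95 test negative (5 false positives).
print(sensitivity(90, 10))   # 0.9  -> test misses 10% of true cases
print(specificity(95, 5))    # 0.95 -> 5% of healthy people are misclassified
```

A sensitive test keeps the false negatives (FN) low; a specific test keeps the false positives (FP) low.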
It is useful for clinicians to know the sensitivity and specificity of common tests to help in deciding which tests to use to 'rule in' or 'rule out' disease. However, predictive values1 are of more direct clinical usefulness, enabling the clinician to estimate the probability of disease given the test result. One problem is that predictive values are prevalence dependent, but the prevalence, or pre-test probability, of disease can be refined by clinical signs, other tests and even clinical 'intuition'.
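The prevalence dependence of predictive values can be made concrete with Bayes' theorem. The sketch below (illustrative figures only, not drawn from the article) shows the same test, 90% sensitive and 95% specific, giving very different positive predictive values at screening prevalence versus after clinical assessment has raised the pre-test probability:

```python
def ppv(sens, spec, prevalence):
    """Positive predictive value: P(disease | positive test), via Bayes' theorem."""
    true_pos = sens * prevalence
    false_pos = (1 - spec) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

def npv(sens, spec, prevalence):
    """Negative predictive value: P(no disease | negative test)."""
    true_neg = spec * (1 - prevalence)
    false_neg = (1 - sens) * prevalence
    return true_neg / (true_neg + false_neg)

# Same test characteristics, two clinical settings:
print(round(ppv(0.9, 0.95, 0.01), 2))  # 0.15 - at 1% prevalence most positives are false
print(round(ppv(0.9, 0.95, 0.30), 2))  # 0.89 - clinical findings raised pre-test probability
```

This is why a positive result from the same test can be near-meaningless in unselected screening yet highly informative in a patient whose history and examination already suggest the disease.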
Finally, clinical signs and judgement should never be ignored in the face of a technological test result. For example, if a suspicious breast lump remains palpable, a negative mammogram should be ignored.2 In such circumstances, clinical judgement should suggest biopsy, even though the test result was negative. Tests should assist clinicians, not dictate clinical decision-making.