Avoiding false positives: rapid HIV tests vary in their accuracy, so need to be used in combination
Roger Pebody, 2 May 2017
An evaluation by Médecins Sans Frontières (MSF) of eight widely used rapid diagnostic tests, performed on samples from a variety of African countries, shows that the tests vary in their performance, with false positive results being a particular concern. Samples from some geographical locations were more likely to give false positive results than others, suggesting that tests need to be validated locally, researchers report in the Journal of the International AIDS Society.
False negative results were rare.
The findings confirm that the diagnosis of HIV should not be based on results from a single HIV rapid diagnostic test. A combination of HIV tests, and more specifically an algorithm (sequence) of two or three different HIV rapid tests, is required to make an HIV-positive diagnosis. This is already recommended by the World Health Organization (WHO).
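As a rough illustration of what such an algorithm involves, the logic of a serial testing strategy can be sketched as follows. This is a simplified sketch only, not the exact WHO procedure, which also covers discordant results, tie-breaker tests and re-testing before treatment begins:

```python
# Simplified sketch of a serial rapid-test algorithm: each successive,
# different assay must be reactive before an HIV-positive diagnosis is
# given. Real national and WHO algorithms include additional rules for
# discordant results and re-testing, so treat this as illustrative only.

def serial_rdt_algorithm(results):
    """results: reactive (True) / non-reactive (False) outcomes from
    two or three different RDTs run in sequence on the same person."""
    if not results[0]:
        return "HIV-negative"                       # screening test non-reactive
    if len(results) >= 2 and all(results):
        return "HIV-positive"                       # every assay in the sequence reactive
    return "Inconclusive - further testing needed"  # discordant or incomplete results

print(serial_rdt_algorithm([False]))              # HIV-negative
print(serial_rdt_algorithm([True, True, True]))   # HIV-positive
print(serial_rdt_algorithm([True, False]))        # Inconclusive - further testing needed
```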
Misdiagnosing someone as having HIV can cause psychological trauma and profound social effects. Marriages may break down and people may take HIV treatment which they do not need.
Rapid diagnostic tests (RDTs) are the main diagnostic tool for HIV screening and diagnosis in resource-constrained settings. Despite their widespread use, there has been no systematic, head-to-head evaluation of their accuracy with specimens from diverse settings across sub-Saharan Africa. Experience from the field has suggested that, in some settings, some RDTs have sensitivities and specificities that are inferior to those recorded in WHO evaluations.
Venous blood samples were collected from 2785 people attending six clinics in Guinea, Cameroon, Democratic Republic of Congo, Uganda (two clinics) and Kenya. These samples were frozen and sent to a laboratory in Belgium where they were tested with eight different RDTs. Because all tests were done in one location, it is unlikely that any differences in performance could be attributed to user variation, storage conditions or other methodological factors. Tests were performed and interpreted according to the manufacturer’s instructions, with each test being read by two laboratory technicians separately.
A reference algorithm, using laboratory tests, showed that of the 2785 samples, 1474 were HIV negative and 1306 were HIV positive.
The sensitivity of a test is the percentage of people who actually have HIV who will correctly receive a positive result. In the evaluation, sensitivities were extremely high:
- 98.8% for the First Response HIV Card Test 1–2.0.
- 99.5% or above for the Uni-Gold HIV, HIV 1/2 STAT-PAK and Vikia HIV 1/2.
- 100% for the Determine HIV-1/2, Genie Fast HIV 1/2, INSTI HIV-1/HIV-2 Antibody Test and SD Bioline HIV 1/2 3.0.
The high sensitivities mean that it is rare for people to be given false negative results (people being told they are HIV-negative when they, in fact, have HIV) in these settings. Nonetheless, this may still occur when people have very recent (acute) HIV infection.
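To put these figures in context, a back-of-the-envelope calculation (not taken from the study itself) shows roughly how many of the 1306 HIV-positive reference samples would be expected to be missed at the lowest and highest reported sensitivities:

```python
# Back-of-the-envelope illustration, not figures from the study itself:
# expected missed (false negative) results among 1306 HIV-positive
# samples at the lowest and highest reported sensitivities.
hiv_positive_samples = 1306

for label, sensitivity in [("98.8% sensitivity", 0.988),
                           ("100% sensitivity", 1.00)]:
    missed = hiv_positive_samples * (1 - sensitivity)
    print(f"{label}: about {missed:.0f} false negative results expected")
```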
The other key measure of test accuracy is specificity: the percentage of people who do not have HIV who will correctly receive a negative result. Lower specificity produces more false positive results.
Here the results were mixed and sometimes sub-optimal:
- Between 90% and 95% for the First Response HIV Card Test 1–2.0, INSTI HIV-1/HIV-2 Antibody Test, Determine HIV-1/2 and Genie Fast HIV 1/2.
- Between 97% and 98% for the Vikia HIV 1/2, SD Bioline HIV 1/2 3.0 and Uni-Gold HIV.
- 99.7% for the HIV 1/2 STAT-PAK.
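A similar back-of-the-envelope calculation (again, not figures taken from the study itself) shows why even apparently small drops in specificity matter. Among the 1474 HIV-negative reference samples, the expected number of false positive results rises quickly as specificity falls:

```python
# Back-of-the-envelope illustration, not figures from the study itself:
# expected false positive results among 1474 HIV-negative samples at
# specificities in the reported range.
hiv_negative_samples = 1474

for label, specificity in [("99.7% specificity", 0.997),
                           ("97% specificity", 0.97),
                           ("90% specificity", 0.90)]:
    false_positives = hiv_negative_samples * (1 - specificity)
    print(f"{label}: about {false_positives:.0f} false positive results expected")
```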
As a result of the lower specificity, a total of 438 specimens had false-positive results on at least one test. False-positive results were associated with different factors for each of the tests – including being male (INSTI, Vikia and Genie Fast tests), being referred for testing by a clinician (probably as a result of having co-morbidities, relevant to the Determine test) and the geographical location (INSTI, SD Bioline and First Response tests). For some tests in some locations, the odds of a false positive result were ten times greater than elsewhere, but it remains unclear what factors are driving this effect.
The researchers note that these widely used rapid diagnostic tests performed more poorly than in the evaluations conducted for WHO – only one test met the recommended thresholds for RDTs of ≥99% sensitivity and ≥98% specificity. There appear to be geographical and population differences in test performance.
“This publication reports on the performance of individual HIV rapid tests, but not the accuracy of entire HIV testing algorithms,” commented lead author Cara Kosack. “The results underscore the challenges in designing accurate testing algorithms, and the need for local validation to be part of the design process.”
A separate analysis, in Clinical Infectious Diseases, highlights the dangers of HIV treatment programmes not routinely re-testing individuals before they begin treatment. Although re-testing is recommended by WHO, only 2 of 48 national testing policies that were surveyed recommended re-testing before initiating antiretroviral therapy.
The financial costs of not re-testing could be considerable. Researchers created a simple mathematical model to compare the cost of re-testing all people initially diagnosed HIV-positive with the expected cost of providing HIV treatment to misdiagnosed HIV-negative individuals.
In a setting with an HIV prevalence of 1%, initially testing 10,000 people with a series of three RDTs would cost $83,000, and the testing algorithm would have a specificity of 99.9%. Without re-testing, 9 HIV-negative people would be misdiagnosed as positive and initiated on antiretroviral therapy (ART) for life, costing $58,000 in unnecessary ART. However, re-testing all those initially diagnosed HIV-positive would cost only $2,000.
In a setting with an HIV prevalence of 10%, a two-test strategy is recommended; this would have a specificity of 99.6%. The cost of initially testing 10,000 people would be $87,000. Without re-testing, 39 HIV-negative people would be misdiagnosed as positive and initiated on ART, costing $243,000 in unnecessary ART. Because of the larger number of positive results, re-testing would cost more than in the first example, at $14,000, but the net savings would be $225,000.
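The structure of this kind of comparison can be sketched in a few lines of code. The unit costs below are hypothetical placeholders chosen only for illustration; they are not the figures used in the Clinical Infectious Diseases analysis, so the outputs will not match the numbers above exactly:

```python
# Illustrative sketch of the re-testing cost comparison described above.
# All unit costs are hypothetical placeholders, not the values used in
# the published model, so the outputs are indicative only.

def retesting_comparison(n_tested, prevalence, algorithm_specificity,
                         lifetime_art_cost, retest_cost_per_person):
    """Compare lifetime ART spent on misdiagnosed HIV-negative people
    with the cost of re-testing everyone who initially tests positive."""
    hiv_negative = n_tested * (1 - prevalence)
    true_positives = n_tested * prevalence                      # assumes ~100% sensitivity
    false_positives = hiv_negative * (1 - algorithm_specificity)

    unnecessary_art = false_positives * lifetime_art_cost
    retesting = (true_positives + false_positives) * retest_cost_per_person

    return {
        "expected_false_positives": round(false_positives, 1),
        "unnecessary_ART_cost": round(unnecessary_art),
        "retesting_cost": round(retesting),
        "net_saving_from_retesting": round(unnecessary_art - retesting),
    }

# 10,000 people tested at 1% and 10% prevalence (hypothetical unit costs).
print(retesting_comparison(10_000, 0.01, 0.999,
                           lifetime_art_cost=6_000, retest_cost_per_person=15))
print(retesting_comparison(10_000, 0.10, 0.996,
                           lifetime_art_cost=6_000, retest_cost_per_person=15))
```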
In both cases, re-testing would be cost-saving within a few months. But the researchers note that their narrow focus on financial and human resources does not capture a number of other important factors: “the potential ethical, personal, and social consequences of incorrect diagnosis and treatment for an HIV-negative person, the quality-of-life implications of unneeded regular treatment and potential associated toxicities, and the potential undermining effects of misdiagnosis for confidence in the health system more widely.”