Abstract
Our objective was to evaluate the performance of HIV testing algorithms based on WHO recommendations, using data from specimens collected at six HIV testing and counselling sites in sub-Saharan Africa (Conakry, Guinea; Kitgum and Arua, Uganda; Homa Bay, Kenya; Douala, Cameroon; Baraka, Democratic Republic of Congo). A total of 2780 samples, of which 1306 were HIV-positive, were included in the analysis. HIV testing algorithms were designed using Determine as a first test. Second and third rapid diagnostic tests (RDTs) were selected based on site-specific performance, adhering where possible to the WHO-recommended minimum requirements of ≥99% sensitivity and specificity. The specificity threshold was reduced to 98% or 96% where necessary. We also simulated algorithms consisting of one RDT followed by a simple confirmatory assay. The positive predictive values (PPV) of the simulated algorithms ranged from 75.8% to 100% using strategies recommended for high-prevalence settings; 98.7% to 100% using strategies recommended for low-prevalence settings; and 98.1% to 100% using a rapid test followed by a simple confirmatory assay. Although we were able to design algorithms that met the recommended PPV of ≥99% in five of six sites using the applicable high-prevalence strategy, options were often very limited due to sub-optimal performance of individual RDTs and to shared false-reactive results. These results underscore the impact of the sequence of HIV tests and of shared false-reactivity on algorithm performance. Where it is not possible to identify tests that meet WHO-recommended specifications, the low-prevalence strategy may be more suitable.
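To illustrate why shared false-reactivity matters for serial algorithms, the following minimal sketch (not taken from the paper; all prevalence, sensitivity, specificity, and correlation values are hypothetical) computes the PPV of a two-RDT serial algorithm and shows how correlated false-positive results erode PPV even when each test individually meets a ≥99% specificity requirement.

```python
# Illustrative sketch only: PPV of a two-test serial algorithm under
# hypothetical test characteristics. Not the authors' evaluation code.

def serial_ppv(prev, se1, sp1, se2, sp2, shared_fp=0.0):
    """PPV when the second test is run only on specimens reactive on the first.

    shared_fp: probability that a truly negative specimen falsely reacts on
    test 2 *given* it falsely reacted on test 1. With shared_fp=0.0 the
    false reactions are treated as independent (probability 1 - sp2);
    larger values model shared false-reactivity between the two RDTs.
    """
    fp2_given_fp1 = max(1.0 - sp2, shared_fp)
    true_pos = prev * se1 * se2                              # reactive on both, truly positive
    false_pos = (1.0 - prev) * (1.0 - sp1) * fp2_given_fp1   # reactive on both, truly negative
    return true_pos / (true_pos + false_pos)

# Hypothetical example: 5% prevalence, both RDTs 99% sensitive and 99% specific.
print(round(serial_ppv(0.05, 0.99, 0.99, 0.99, 0.99), 4))                 # ~0.998 with independent errors
print(round(serial_ppv(0.05, 0.99, 0.99, 0.99, 0.99, shared_fp=0.5), 4))  # ~0.91 when half of false reactions are shared
```

The sketch only demonstrates the general mechanism the abstract refers to: when two RDTs tend to falsely react on the same specimens, the second test confirms rather than filters out false positives, so the algorithm's PPV can fall well below what the individual tests' specificities would suggest.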