Journal Article > Research > Full Text
Bull World Health Organ. 2015 June 25; Volume 93 (Issue 9); 623-630.; DOI:10.2471/BLT.14.146480
Fajardo E, Metcalf CJ, Piriou E, Gueguen M, Maman D, et al.
OBJECTIVE
To estimate the proportion of invalid results generated by a CD4+ T-lymphocyte analyser used by Médecins Sans Frontières (MSF) in field projects and identify factors associated with invalid results.
METHODS
We collated 25,616 CD4+ T-lymphocyte test results from 39 sites in nine countries for the years 2011 to 2013. Information about the setting, user, training, sampling technique and device repair history were obtained by questionnaire. The analyser performs a series of checks to ensure that all steps of the analysis are completed successfully; if not, an invalid result is reported. We calculated the proportion of invalid results by device and by operator. Regression analyses were used to investigate factors associated with invalid results.
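A minimal sketch of how per-device and per-operator error proportions and an error-factor regression might be computed is shown below; the input file and column names (device_id, operator_id, sample_type, setting, operator_experience, is_invalid) are hypothetical and not taken from the study.

```python
# Hedged sketch: invalid-result proportions per device and per operator, plus a
# logistic regression of error status on candidate factors. Field names are assumed.
import pandas as pd
import statsmodels.formula.api as smf

tests = pd.read_csv("pima_tests.csv")  # hypothetical record-level export

# Proportion of invalid results by device and by operator
by_device = tests.groupby("device_id")["is_invalid"].mean()
by_operator = tests.groupby("operator_id")["is_invalid"].mean()
print(by_device.median(), by_operator.median())

# Logistic regression of error status on sample type, setting and experience
model = smf.logit(
    "is_invalid ~ C(sample_type) + C(setting) + operator_experience",
    data=tests,
).fit()
print(model.summary())
```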
FINDINGS
There were 3354 invalid test results (13.1%) across 39 sites, for 58 Alere Pima™ devices and 180 operators. The median proportion of errors per device and operator was 12.7% (interquartile range, IQR: 10.3-19.9) and 12.1% (IQR: 7.1-19.2), respectively. The proportion of invalid results varied widely by country, setting, user and device. Errors were not associated with settings, user experience or the number of users per device. Tests performed on capillary blood samples were significantly less likely to generate errors compared to venous whole blood.
CONCLUSION
The Alere Pima CD4+ analyser generated a high proportion of invalid test results, across different countries, settings and users. Most error codes could be attributed to the operator, but the exact causes proved difficult to identify. Invalid results need to be factored into the implementation and operational costs of routine CD4+ T-lymphocyte testing.
Journal Article > Research > Full Text
Malar J. 2011 December 13; Volume 10 (Issue 1); 362.; DOI:10.1186/1475-2875-10-362
Asito AS, Piriou E, Jura WGZO, Ouma C, Odada PS, et al.
BACKGROUND
Plasmodium falciparum infection leads to alterations in B cell subset distribution. During infancy, development of peripheral B cell subsets is also occurring. However, it is unknown whether infants living in a malaria-endemic region have alterations in B cell subsets that are independent of an age effect.
METHODS
To evaluate the impact of exposure to P. falciparum on B cell development in infants, flow cytometry was used to analyse the distribution and phenotypic characteristics of B cell subsets in infant cohorts prospectively followed at 12, 18 and 24 months from two geographically proximate regions in western Kenya with divergent malaria exposure, i.e. Kisumu (malaria-endemic, n = 24) and Nandi (unstable malaria transmission, n = 21).
RESULTS
There were significantly higher frequencies and absolute cell numbers of CD19+ B cells in Kisumu relative to Nandi at 12 (p = 0.0440), 18 (p = 0.0210) and 24 months (p = 0.0493). No differences were observed between the infants from the two sites in frequencies of naïve B cells (IgD+CD27-) or classical memory B cells (IgD-CD27+). However, immature transitional B cells (CD19+CD10+CD34-) were higher in Kisumu relative to Nandi at all three ages. In contrast, the levels of non-class-switched memory B cells (CD19+IgD+CD27+) were significantly lower overall in Kisumu relative to Nandi at 12 (p = 0.0144), 18 (p = 0.0013) and 24 months (p = 0.0129).
CONCLUSIONS
These data suggest that infants living in malaria-endemic regions have altered B cell subset distribution. Further studies are needed to understand the functional significance of these changes and their long-term impact on the ability of these infants to develop antibody responses to P. falciparum and heterologous infections.
Protocol > Research Study
de Wit MBK, Rao B, Lassovski M, Ouabo A, Badjo C, et al.
2018 July 1
Primary Objective: To measure the prevalence of molecular markers of SP-resistant malaria in North and South Kivu, DRC.
Sulfadoxine/pyrimethamine (SP) forms the backbone of most malaria chemoprevention programmes in high-endemicity settings, including intermittent preventive therapy in pregnancy and in infants (IPTp and IPTi, respectively) as well as seasonal malaria chemoprevention (SMC). P. falciparum resistance to SP threatens recent triumphs in preventing malaria infection in the most vulnerable risk groups. WHO guidance is that chemoprevention using SP should not be implemented when the prevalence of the dhps K540E mutation, which denotes SP resistance, is greater than 50%. Simple, robust polymerase chain reaction (PCR)-based methods for molecular surveillance of resistance to SP have the potential not only to indicate whether SP-based chemoprevention programmes would be effective in the areas where surveillance was conducted, but also to identify early stages of emerging resistance in order to advocate for alternative chemoprevention strategies.
A minimum of 750 samples will be collected per province. Each of three sites per province will provide 250 samples, assuming an estimated prevalence of 50% for the dhps K540E mutation, with 95% confidence and 5% precision. This is also sufficient for robust estimation of the prevalence of dhps 581, an alternative critical marker. This sample size is calculated to estimate regional prevalence, i.e. for both South Kivu and North Kivu; hence this study requires samples from multiple MSF sites (including from different MSF Operating Centre missions), e.g. Baraka, Kimbi and Lulingu among others in South Kivu, and Mweso, Rutsuru and Walikale in North Kivu, with a minimum total of 750 per province. If estimating prevalence in only one limited site, a larger sample size would be required.
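As a worked check of the stated precision target, the standard formula for estimating a proportion, n = z²·p·(1−p)/d², gives roughly 385 samples for p = 0.5, 95% confidence and 5% precision; the sketch below reproduces that arithmetic. The protocol's figure of 750 per province presumably includes further adjustments (for example a design effect for multi-site sampling); that interpretation is an assumption, as the adjustments are not detailed here.

```python
# Hedged sketch: sample size for estimating a proportion with a given precision.
from math import ceil

z = 1.96   # z-score for 95% confidence
p = 0.50   # assumed prevalence of the dhps K540E mutation
d = 0.05   # absolute precision

n = z**2 * p * (1 - p) / d**2
print(ceil(n))  # ~385, before any design effect or attrition adjustment
```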
Journal Article > Research > Full Text
J Infect Dis. 2016 August 28; Volume 214 (Issue 9); DOI:10.1093/infdis/jiw396
Reynaldi A, Schlub TE, Piriou E, Ogolla S, Sumba OP, et al.
The combination of Epstein-Barr virus (EBV) infection and high malaria exposure is a risk factor for endemic Burkitt lymphoma, and evidence suggests that infants in regions of high malaria exposure have earlier EBV infection and increased EBV reactivation. Here we analysed the longitudinal antibody response to EBV in Kenyan infants with different levels of malaria exposure. We found that high malaria exposure was associated with a faster decline of maternally derived IgG antibody to both the EBV viral capsid antigen (VCA) and Epstein-Barr virus nuclear antigen 1 (EBNA1), followed by a more rapid rise in antibody responses to EBV antigens in children from the high-malaria region. In addition, we observed long-term persistence of anti-VCA IgM responses in children from the high-malaria region. More rapid decay of maternal antibodies was a major predictor of EBV infection outcome, as decay predicted time to detection of EBV DNA, independent of high or low malaria exposure.
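One simple way to quantify the decline of maternally derived antibody described above is a log-linear fit of IgG titre against age; the sketch below assumes single-phase exponential decay and uses entirely hypothetical titres, and it does not reproduce the modelling actually used in the study.

```python
# Hedged sketch: estimating a maternal antibody decay rate and half-life from
# longitudinal titres via log-linear regression. All values are hypothetical.
import numpy as np

age_weeks = np.array([4, 8, 12, 16, 20, 24])         # hypothetical sampling ages
igg_titre = np.array([800, 520, 360, 230, 150, 95])   # hypothetical anti-VCA IgG titres

slope, intercept = np.polyfit(age_weeks, np.log(igg_titre), 1)
half_life = np.log(2) / -slope
print(f"decay rate: {-slope:.3f} per week, half-life: {half_life:.1f} weeks")
```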
Journal Article > Research > Full Text
Malar J. 2016 September 6; Volume 15 (Issue 1); 455.; DOI:10.1186/s12936-016-1444-x
de Wit MBK, Funk A, Moussally K, Nkuba DA, Siddiqui R, et al.
BACKGROUND
Between 2009 and 2012, malaria cases diagnosed in a Médecins Sans Frontières programme increased fivefold in Baraka, South Kivu, Democratic Republic of the Congo (DRC). The cause of this increase is not known. An in vivo drug efficacy trial was conducted to determine whether increased treatment failure rates may have contributed to the apparent increase in malaria diagnoses.
METHODS
In an open randomized non-inferiority trial, the efficacy of artesunate-amodiaquine (ASAQ) was compared to that of artemether-lumefantrine (AL) for the treatment of uncomplicated falciparum malaria in 288 children aged 6-59 months. Included children received directly supervised treatment and were then followed for 42 days with weekly clinical and parasitological evaluations. Blood samples from children found to have recurring parasitaemia within 42 days were checked by PCR to determine whether this was due to reinfection or to recrudescence (i.e. treatment failure).
RESULTS
Out of 873 children screened, 585 (67%) were excluded and 288 children were randomized to either ASAQ or AL. At day 42 of follow-up, the treatment efficacy of ASAQ was 78% before and 95% after PCR correction for re-infections. In the AL arm, treatment efficacy was 84% before and 99.0% after PCR correction. Treatment efficacy after PCR correction was within the margin of non-inferiority set for this study. Fewer children in the AL arm reported adverse reactions.
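For readers unfamiliar with how such efficacy figures feed into a non-inferiority conclusion, the sketch below computes the difference in PCR-corrected cure proportions with a Wald confidence interval and compares its lower bound against a margin; the arm counts and the 10% margin are hypothetical placeholders, not the trial's data.

```python
# Hedged sketch: non-inferiority check on a difference in cure proportions.
# Counts and the -10% margin are hypothetical, not taken from the trial.
from math import sqrt

cured_asaq, n_asaq = 114, 120   # hypothetical PCR-corrected outcomes, ASAQ arm
cured_al, n_al = 119, 120       # hypothetical PCR-corrected outcomes, AL arm
margin = -0.10                  # hypothetical non-inferiority margin

p1, p2 = cured_asaq / n_asaq, cured_al / n_al
diff = p1 - p2
se = sqrt(p1 * (1 - p1) / n_asaq + p2 * (1 - p2) / n_al)
lower = diff - 1.96 * se
print(f"difference {diff:.3f}, 95% CI lower bound {lower:.3f}")
print("non-inferior" if lower > margin else "non-inferiority not shown")
```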
CONCLUSIONS
ASAQ is still effective as a treatment for uncomplicated malaria in Baraka, South Kivu, DRC. In this region, AL may have higher efficacy, but additional trials are required to draw this conclusion with confidence. The high re-infection rate in South Kivu indicates intense malaria transmission.
Trial registration NCT02741024.
Journal Article > Research > Full Text
PLOS One. 2015 July 10; Volume 10 (Issue 7); e0132422.; DOI:10.1371/journal.pone.0132422
Shanks L, Ritmeijer KKD, Piriou E, Siddiqui MR, Kliescikova J, et al.
Co-infection with HIV and visceral leishmaniasis is an important consideration in the treatment of either disease in endemic areas. Diagnosis of HIV in resource-limited settings relies on rapid diagnostic tests used together in an algorithm. A limitation of the HIV diagnostic algorithm is that it is vulnerable to false-positive reactions due to cross-reactivity. It has been postulated that visceral leishmaniasis (VL) infection can increase this risk of false-positive HIV results. This cross-sectional study compared the risk of false-positive HIV results in VL patients with non-VL individuals.
Journal Article > Research > Full Text
PLOS One. 2022 July 29; Volume 17 (Issue 7); e0271910.; DOI:10.1371/journal.pone.0271910
Mesic A, Decroo T, Mar HT, Jacobs BKM, Thandar MP, et al.
INTRODUCTION
Despite HIV viral load (VL) monitoring being serial, most studies use a cross-sectional design to evaluate the virological status of a cohort. The objective of our study was to use a simplified approach to calculate viraemic-time: the proportion of follow-up time with unsuppressed VL above the limit of detection. We estimated risk factors for higher viraemic-time and whether viraemic-time predicted mortality in a second-line antiretroviral treatment (ART) cohort in Myanmar.
METHODS
We conducted a retrospective cohort analysis of people living with HIV (PLHIV) who received second-line ART for a period >6 months and who had at least two HIV VL test results between 01 January 2014 and 30 April 2018. Fractional logistic regression assessed risk factors for having higher viraemic-time and Cox proportional hazards regression assessed the association between viraemic-time and mortality. Kaplan-Meier curves were plotted to illustrate survival probability for different viraemic-time categories.
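A minimal sketch of how viraemic-time might be derived from serial viral-load results is shown below; it assumes each interval between consecutive tests counts as viraemic when the earlier result is above the limit of detection, and the attribution rule, detection limit and example values are assumptions rather than the study's definitions.

```python
# Hedged sketch: viraemic-time as the proportion of follow-up time spent with
# detectable viral load. Attribution rule and example values are assumed.
from datetime import date

LIMIT = 40  # hypothetical limit of detection, copies/mL

# (test date, viral load) for one hypothetical participant, in chronological order
vl_results = [
    (date(2015, 1, 10), 1500),
    (date(2015, 7, 12), 30),
    (date(2016, 1, 15), 2500),
    (date(2016, 7, 20), 25),
]

viraemic_days = total_days = 0
for (d0, vl0), (d1, _) in zip(vl_results, vl_results[1:]):
    interval = (d1 - d0).days
    total_days += interval
    if vl0 > LIMIT:  # interval counted as viraemic if it starts unsuppressed
        viraemic_days += interval

print(f"viraemic-time: {viraemic_days / total_days:.0%} of follow-up")
```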
RESULTS
Among 1,352 participants, 815 (60.3%) never experienced viraemia, and 172 (12.7%), 214 (15.8%), and 80 (5.9%) participants were viraemic for <20%, 20–49%, and 50–79% of their total follow-up time, respectively. Few participants (71; 5.3%) were viraemic for ≥80% of their total follow-up time. The odds of a higher viraemic-time were greater among people with a history of injecting drug use (aOR 2.01, 95% CI 1.30–3.10, p = 0.002), sex workers (aOR 2.10, 95% CI 1.11–4.00, p = 0.02) and patients treated with lopinavir/ritonavir (vs. atazanavir; aOR 1.53, 95% CI 1.12–2.10, p = 0.008). Viraemic-time was strongly associated with the mortality hazard among those with 50–79% and ≥80% viraemic-time (aHR 2.92, 95% CI 1.21–7.10, p = 0.02 and aHR 2.71, 95% CI 1.22–6.01, p = 0.01, respectively). This association was not observed in those with a viraemic-time <50%.
CONCLUSIONS
Key populations were at risk for having a higher viraemic-time on second-line ART. Viraemic-time predicts clinical outcomes. Differentiated services should target subgroups at risk for a higher viraemic-time to control both HIV transmission and mortality.
Journal Article > Research > Full Text
PLOS One. 2013 November 25; Volume 8 (Issue 11); e81656.; DOI:10.1371/journal.pone.0081656
Klarkowiski D, Glass K, O'Brien DP, Lokuge K, Piriou E, et al.
BACKGROUND
Recent trends towards earlier access to antiretroviral treatment underline the importance of accurate HIV diagnosis. The WHO HIV testing strategy recommends the use of two or three rapid diagnostic tests (RDTs) combined in an algorithm and assumes that a population is serologically stable over time. Yet RDTs are prone to cross-reactivity, which can lead to false-positive or discordant results. This paper uses discordancy data from Médecins Sans Frontières (MSF) programmes to test the hypothesis that the specificity of RDTs changes over place and time.
METHODS
Data were drawn from all MSF test centres in 2007-2008 using a parallel testing algorithm. A Bayesian approach was used to derive estimates of disease prevalence and of test sensitivity and specificity using the WinBUGS software. A comparison of models with different levels of complexity was performed to assess the evidence for changes in test characteristics by location and over time.
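The structure behind such an analysis is essentially a latent-class (Hui-Walter-type) model, in which prevalence and the two tests' sensitivities and specificities jointly determine the expected frequencies of the four concordant/discordant result patterns. The sketch below only computes those expected cell probabilities for illustrative parameter values; it is not a reproduction of the WinBUGS model or its priors.

```python
# Hedged sketch: expected probabilities of the four result patterns for two RDTs,
# given prevalence, sensitivity and specificity (conditional independence assumed).
# Parameter values are illustrative only.
def cell_probs(prev, se1, sp1, se2, sp2):
    """Return P(++), P(+-), P(-+), P(--) for two tests applied in parallel."""
    pp = prev * se1 * se2 + (1 - prev) * (1 - sp1) * (1 - sp2)
    pm = prev * se1 * (1 - se2) + (1 - prev) * (1 - sp1) * sp2
    mp = prev * (1 - se1) * se2 + (1 - prev) * sp1 * (1 - sp2)
    mm = prev * (1 - se1) * (1 - se2) + (1 - prev) * sp1 * sp2
    return pp, pm, mp, mm

# Illustrative values: 10% prevalence, two highly sensitive tests, differing specificity
print(cell_probs(prev=0.10, se1=0.99, sp1=0.985, se2=0.99, sp2=0.995))
```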
RESULTS
106,035 individuals were included from 51 centres in 10 countries, using 7 different RDTs. Discordancy patterns were found to vary by location and time. Model fit statistics confirmed this, with improved fit to the data when test specificity and sensitivity were allowed to vary by centre and over time. Two examples show evidence of variation in specificity between different testing locations within a single country. Finally, within a single test centre, variation in specificity was seen over time, with one test becoming more specific and the other less specific.
CONCLUSIONS
This analysis demonstrates the variable specificity of multiple HIV RDTs across geographic location and time. This variability suggests that cross-reactivity is occurring and indicates a higher than previously appreciated risk of false-positive HIV results under the current WHO testing guidelines. Given the significant consequences of a false HIV diagnosis, we suggest that current testing and evaluation strategies be reviewed.
Journal Article > Research > Full Text
Emerg Infect Dis. 2016 February 1; Volume 22 (Issue 2); DOI:10.3201/eid2202.151238
Van der Bergh R, Chaillet P, Sow MS, Amand M, van Vyve C, et al.
Rapid diagnostic methods are essential in control of Ebola outbreaks and lead to timely isolation of cases and improved epidemiologic surveillance. Diagnosis during Ebola outbreaks in West Africa has relied on PCR performed in laboratories outside this region. Because time between sampling and PCR results can be considerable, we assessed the feasibility and added value of using the Xpert Ebola Assay in an Ebola control program in Guinea. A total of 218 samples were collected during diagnosis, treatment, and convalescence of patients. Median time for obtaining results was reduced from 334 min to 165 min. Twenty-six samples were positive for Ebola virus. Xpert cycle thresholds were consistently lower, and 8 (31%) samples were negative by routine PCR. Several logistic and safety issues were identified. We suggest that implementation of the Xpert Ebola Assay under programmatic conditions is feasible and represents a major advance in diagnosis of Ebola virus disease without apparent loss of assay sensitivity.
Journal Article > Research > Full Text
AIDS Res Ther. 2021 April 21; Volume 18 (Issue 1); DOI:10.1186/s12981-021-00336-0
Mesic A, Spina A, Mar HT, Thit P, Decroo T, et al.