Journal Article > Research | Full Text
Appetite. 2016 January 2; Volume 99; DOI:10.1016/j.appet.2015.12.030
Iuel-Brockdorf AS, Draebel TA, Ritz C, Fabiansen C, Cichon B, et al.
The objective of this study was to evaluate, within the context of a randomized controlled trial of product effectiveness, the acceptability of new formulations of six corn-soy blended flours (CSB) and six lipid-based nutrient supplements (LNS) with different quantities of milk and qualities of soy for the treatment of children with moderate acute malnutrition (MAM). Our study included 1546 children aged 6-23 months and involved questionnaires after one month of supplementation, home visits and interviews with a sub-sample of 20 trial participants and their caretakers, and nine focus group discussions. All 12 products were well accepted in terms of organoleptic qualities and received good ratings. However, LNS were more appreciated by caretakers and children. Additionally, an effect of soy isolate on child appreciation was detected, and products with high milk content also received better ratings. CSB were not consumed as readily: 33.9% (n = 257) of children receiving CSB were reported to have leftovers, compared to 17.3% (n = 134) of children receiving LNS (p < 0.001). Both CSB and LNS were referred to as foods with medicinal properties and perceived as beneficial to child health. Both were reported to have high priority in the daily feeding of the child. In conclusion, there were minimal differences in acceptability among the various CSB and LNS formulations, although CSB were less readily consumed and required larger meal volumes. Since all products were well accepted, decisions regarding whether the more expensive products should be used for the treatment of MAM will need to be based on their effect on child nutrition, growth and health. Future supplementary feeding programs in similar contexts could furthermore consider introducing supplementary foods as a medical treatment, as this may increase adherence and decrease sharing.
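To make the reported leftover comparison concrete, here is a minimal sketch of the underlying two-proportion test in Python (scipy assumed available). The group denominators are not stated in the abstract; the totals below are rough back-calculations from the reported counts and percentages (257/0.339 and 134/0.173) and are used purely for illustration.

from scipy.stats import chi2_contingency

# Reported: 33.9% (n = 257) of CSB children vs 17.3% (n = 134) of LNS
# children had leftovers. Denominators are back-calculated approximations.
csb_leftovers, csb_total = 257, 758   # ~257 / 0.339 (assumption)
lns_leftovers, lns_total = 134, 775   # ~134 / 0.173 (assumption)

table = [
    [csb_leftovers, csb_total - csb_leftovers],
    [lns_leftovers, lns_total - lns_leftovers],
]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.1e}")  # p falls well below 0.001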
Journal Article > Research | Full Text
Trans R Soc Trop Med Hyg. 2009 September 23; Volume 104 (Issue 2); DOI:10.1016/j.trstmh.2009.08.012
Harries K, Zachariah R, Manzi M, Firmenich P, Mathela R, et al.
In an urban district hospital in Burkina Faso we investigated the relative proportions of HIV-1, HIV-2 and HIV-1/2 among those tested, the baseline sociodemographic and clinical characteristics, and the response to and outcome of antiretroviral therapy (ART). A total of 7368 individuals (male = 32%; median age = 34 years) were included in the analysis over a 6-year period (2002-2008). The proportions of HIV-1, HIV-2 and dual infection were 94%, 2.5% and 3.6%, respectively. HIV-1-infected individuals were younger, whereas HIV-2-infected individuals were more likely to be male, have higher CD4 counts and be asymptomatic on presentation. ART was started in 4255 adult patients who were followed up for a total of 8679 person-years, during which time 469 deaths occurred. Mortality differences by serotype were not statistically significant, but outcomes were generally worse for HIV-2 and HIV-1/2 after controlling for age, CD4 count and WHO stage. Among severely immune-deficient patients, mortality was higher for HIV-2 than HIV-1. CD4 count recovery was poorest for HIV-2. HIV-2 and dually infected patients appeared to do less well on ART than HIV-1 patients. Reasons may include differences in age at baseline, lower intrinsic immune recovery in HIV-2, use of ineffective ART regimens (inappropriate prescribing) by clinicians, and poor drug adherence.
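As a hedged illustration of the crude mortality implied by these figures (469 deaths over 8679 person-years), the sketch below computes the rate per 1000 person-years with an exact (Garwood) Poisson confidence interval. This is simple arithmetic on the abstract's totals, not the authors' adjusted analysis, which controlled for age, CD4 count and WHO stage.

from scipy.stats import chi2

deaths, person_years = 469, 8679          # totals reported in the abstract
rate = deaths / person_years * 1000       # crude rate per 1000 person-years
lo = chi2.ppf(0.025, 2 * deaths) / 2 / person_years * 1000
hi = chi2.ppf(0.975, 2 * (deaths + 1)) / 2 / person_years * 1000
print(f"crude mortality: {rate:.1f} per 1000 PY (95% CI {lo:.1f}-{hi:.1f})")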
Journal Article > Research | Full Text
PLOS One. 2010 June 11; Volume 5 (Issue 6); DOI:10.1371/journal.pone.0011086
Rose AMC, Mueller JE, Gerstl S, Njanpop-Lafourcade BM, Page AL, et al.
Meningococcal meningitis outbreaks occur every year during the dry season in the "meningitis belt" of sub-Saharan Africa. Identification of the causative strain is crucial before launching mass vaccination campaigns, to assure use of the correct vaccine. Rapid agglutination (latex) tests are most commonly available in district-level laboratories at the beginning of the epidemic season; limitations include a short shelf-life and the need for refrigeration and good technical skills. Recently, a new dipstick rapid diagnostic test (RDT) was developed to identify and differentiate disease caused by meningococcal serogroups A, W135, C and Y. We evaluated the diagnostic performance of this dipstick RDT during an urban outbreak of meningitis caused by N. meningitidis serogroup A in Ouagadougou, Burkina Faso; first against an in-country reference standard of culture and/or multiplex PCR; and second against culture and/or a highly sensitive nested PCR technique performed in Oslo, Norway. We included 267 patients with suspected acute bacterial meningitis. Using the in-country reference standard, 50 samples (19%) were positive. Dipstick RDT sensitivity (N = 265) was 70% (95%CI 55-82) and specificity 97% (95%CI 93-99). Using culture and/or nested PCR, 126/259 (49%) samples were positive; dipstick RDT sensitivity (N = 257) was 32% (95%CI 24-41), and specificity was 99% (95%CI 95-100). We found dipstick RDT sensitivity lower than values reported from (i) assessments under ideal laboratory conditions (>90%), and (ii) a prior field evaluation in Niger [89% (95%CI 80-95)]. Specificity, however, was similar to (i), and higher than (ii) [62% (95%CI 48-75)]. At this stage in development, therefore, other tests (e.g., latex) might be preferred for use in peripheral health centres. We highlight the value of field evaluations for new diagnostic tests, and note relatively low sensitivity of a reference standard using multiplex vs. nested PCR. Although the former is the current standard for bacterial meningitis surveillance in the meningitis belt, nested PCR performed in a certified laboratory should be used as an absolute reference when evaluating new diagnostic tests.
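For readers wanting to reproduce the arithmetic behind estimates such as "sensitivity 70% (95%CI 55-82)", the sketch below computes a proportion with a Wilson confidence interval (statsmodels assumed available). The cell counts are back-derived assumptions: 50 reference-positive samples at 70% sensitivity gives roughly 35 true positives, and 215 reference-negatives at 97% specificity gives roughly 209 true negatives. The paper's exact interval method may differ slightly.

from statsmodels.stats.proportion import proportion_confint

def summarize(hits, total, label):
    lo, hi = proportion_confint(hits, total, alpha=0.05, method="wilson")
    print(f"{label}: {hits / total:.0%} (95% CI {lo:.0%}-{hi:.0%})")

# Assumed counts, back-derived from the reported percentages:
summarize(35, 50, "sensitivity")    # ~70% (56%-81%)
summarize(209, 215, "specificity")  # ~97% (94%-99%)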
Journal Article > Research | Abstract Only
Appetite. 2015 August 1; Volume 91; 278-286; DOI:10.1016/j.appet.2015.04.058
Iuel-Brockdorf AS, Dræbel T, Fabiansen C, Cichon B, Christensen VB, et al.
The objective of this study was to evaluate the acceptability of new formulations of six corn-soy blended flours (CSB) and six lipid-based nutrient supplements (LNS) with different quantities of milk and qualities of soy to be used for the treatment of moderate acute malnutrition (MAM). Furthermore, we wanted to explore the acceptability of foods currently used for the prevention and treatment of malnutrition in Burkina Faso to identify possible barriers that could affect the acceptability of the new formulations of supplementary foods. The study was carried out prior to a randomized controlled trial evaluating the effectiveness of these new formulations. The study involved an observed test-meal and a three-day take-home ration of the experimental food supplements given to healthy children aged 6-30 months, followed by questionnaire-based interviews about the acceptability of these supplements. Interviews and focus group discussions were carried out to explore the acceptability of foods currently used for the prevention and treatment of malnutrition. The results suggest that both LNS and CSB products with different quantities of milk and qualities of soy are equally well accepted among healthy children in rural Burkina Faso based on general appreciation of the supplements and organoleptic properties. All experimental foods received good ratings and there was no significant difference between the foods. However, after the take-home ration, 58% of participants receiving CSB reported having leftovers at the end of the day, compared to 37% (n=33) of the participants receiving LNS (p=0.004), suggesting that CSB was not as readily consumed as LNS. Yet, both CSB and LNS products were perceived as easy to administer and the frequency of feeding was estimated to be adequate. The study also found that similar foods, used for the prevention and treatment of malnutrition, were well appreciated in the study location. LNS were to a higher degree associated with medicine or foods with medicinal properties, but both LNS and CSB were perceived as beneficial to child health.
Journal Article > Meta-Analysis | Full Text
PLOS One. 2013 July 22; Volume 8 (Issue 7); e68995; DOI:10.1371/journal.pone.0068995
Pillay P, Ford NP, Shubber Z, Ferrand RA
INTRODUCTION
There is conflicting evidence and practice regarding the use of the non-nucleoside reverse transcriptase inhibitors (NNRTI) efavirenz (EFV) and nevirapine (NVP) in first-line antiretroviral therapy (ART).
METHODS
We systematically reviewed virological outcomes in HIV-1 infected, treatment-naive patients on regimens containing EFV versus NVP from randomised trials and observational cohort studies. Data sources included PubMed, Embase, the Cochrane Central Register of Controlled Trials and conference proceedings of the International AIDS Society and the Conference on Retroviruses and Opportunistic Infections, from 1996 to May 2013. Relative risks (RR) and 95% confidence intervals were synthesized using random-effects meta-analysis. Heterogeneity was assessed using the I² statistic, and subgroup analyses were performed to assess the potential influence of study design, duration of follow-up, location, and tuberculosis treatment. Sensitivity analyses explored the potential influence of different dosages of NVP and different viral load thresholds.
RESULTS
Of 5011 citations retrieved, 38 reports of studies comprising 114 391 patients were included for review. EFV was significantly less likely than NVP to lead to virologic failure in both trials (RR 0.85 [0.73-0.99]; I² = 0%) and observational studies (RR 0.65 [0.59-0.71]; I² = 54%). EFV was more likely to achieve virologic success than NVP, though the difference was only marginally significant, in both randomised controlled trials (RR 1.04 [1.00-1.08]; I² = 0%) and observational studies (RR 1.06 [1.00-1.12]; I² = 68%).
CONCLUSION
EFV-based first-line ART is significantly less likely to lead to virologic failure compared to NVP-based ART. This finding supports the use of EFV as the preferred NNRTI in first-line treatment regimens for HIV, particularly in resource-limited settings.
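As a sketch of the pooling step described in the methods, the following implements random-effects meta-analysis of log relative risks together with the I² heterogeneity statistic. The paper specifies random-effects pooling but not the estimator; the classic DerSimonian-Laird estimator is assumed here, and the three input studies are hypothetical, not data from the review.

import numpy as np

def dersimonian_laird(rr, lo, hi):
    """Pool (RR, 95% CI) tuples on the log scale; return RR, CI limits, I^2."""
    y = np.log(rr)
    se = (np.log(hi) - np.log(lo)) / (2 * 1.96)   # SE from the CI width
    w = 1 / se**2                                 # fixed-effect weights
    ybar = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - ybar) ** 2)               # Cochran's Q
    df = len(y) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                 # between-study variance
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0 # I^2 statistic
    w_re = 1 / (se**2 + tau2)                     # random-effects weights
    pooled = np.sum(w_re * y) / np.sum(w_re)
    half = 1.96 * np.sqrt(1 / np.sum(w_re))
    return np.exp(pooled), np.exp(pooled - half), np.exp(pooled + half), i2

# Three hypothetical studies (RR with 95% CI limits):
rr, ci_lo, ci_hi, i2 = dersimonian_laird(
    np.array([0.80, 0.90, 0.85]),
    np.array([0.65, 0.75, 0.70]),
    np.array([0.98, 1.08, 1.03]),
)
print(f"pooled RR {rr:.2f} (95% CI {ci_lo:.2f}-{ci_hi:.2f}), I2 = {i2:.0%}")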
Journal Article > Meta-Analysis | Full Text
Malar J. 2009 August 23; Volume 8 (Issue 1); 203; DOI:10.1186/1475-2875-8-203
Zwang J, Olliaro PL, Barennes H, Bonnet MMB, Brasseur P, et al.
BACKGROUND: Artesunate and amodiaquine (AS&AQ) is at present the world's second most widely used artemisinin-based combination therapy (ACT). Evaluating the efficacy of ACTs recently adopted by the World Health Organization (WHO) and deployed in over 80 countries is necessary for evidence-based drug policy.
METHODS: An individual patient data (IPD) analysis was conducted on efficacy outcomes in 26 clinical studies in sub-Saharan Africa using the WHO protocol with similar primary and secondary endpoints.
RESULTS: A total of 11,700 patients (75% under 5 years old), from 33 different sites in 16 countries, were followed for 28 days. Loss to follow-up was 4.9% (575/11,700). AS&AQ was given to 5,897 patients. Of these, 82% (4,826/5,897) were included in randomized comparative trials with polymerase chain reaction (PCR) genotyping results and compared to 5,413 patients (half receiving an ACT). AS&AQ and other ACT comparators resulted in rapid clearance of fever and parasitaemia, superior to non-ACT. Using survival analysis on a modified intent-to-treat population, the Day 28 PCR-adjusted efficacy of AS&AQ was greater than 90% (the WHO cut-off) in 11/16 countries. In randomized comparative trials (n = 22), the crude efficacy of AS&AQ was 75.9% (95% CI 74.6-77.1) and the PCR-adjusted efficacy was 93.9% (95% CI 93.2-94.5). The site-weighted risk of PCR-adjusted failure with AS&AQ was significantly lower than with non-ACT, higher than with dihydroartemisinin-piperaquine (DP; in one Ugandan site), and not different from artesunate plus sulfadoxine-pyrimethamine (AS+SP) or artemether-lumefantrine (AL). The risk of gametocyte appearance and the carriage rate with AS&AQ were greater only in one Ugandan site compared with AL and DP, and lower compared with non-ACT (p = 0.001 for all comparisons). Anaemia recovery did not differ from comparator groups, except in one site in Rwanda where patients in the DP group had a slower recovery.
CONCLUSION: AS&AQ compares well to other treatments and meets the WHO efficacy criteria for use against falciparum malaria in many, but not all, of the sub-Saharan African countries where it was studied. Efficacy varies between and within countries. An IPD analysis can inform general and local treatment policies. Ongoing monitoring and evaluation is required.
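The Day 28 efficacy figures above come from survival analysis, with losses to follow-up censored rather than counted as failures. A minimal Kaplan-Meier sketch of that idea is shown below, on fabricated toy follow-up data; efficacy is one minus the cumulative failure probability at day 28.

import numpy as np

def km_failure_prob(times, failed, horizon=28):
    """Kaplan-Meier cumulative failure probability by `horizon` days."""
    surv = 1.0
    for t in np.unique(times[times <= horizon]):
        at_risk = np.sum(times >= t)
        events = np.sum((times == t) & failed)
        surv *= 1 - events / at_risk
    return 1 - surv

# Toy data: day of failure, or last day seen for censored/cured children.
times = np.array([7, 14, 14, 21, 28, 28, 28, 28, 28, 28])
failed = np.array([1, 1, 0, 1, 0, 0, 0, 0, 0, 0], dtype=bool)
print(f"Day 28 efficacy: {1 - km_failure_prob(times, failed):.0%}")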
Journal Article > Research | Full Text
PLOS One. 2019 July 25 (Issue 7)
Roddy P, Dalrymple U, Jensen TO, Dittrich S, Rao VB, et al.
Severe febrile illness (SFI) is a common cause of morbidity and mortality across sub-Saharan Africa (SSA). The burden of SFI in SSA is currently unknown and its estimation is fraught with challenges, owing to a lack of diagnostic capacity for SFI in SSA and thus a dearth of baseline data on the underlying etiology of SFI cases and scant SFI-specific causative-agent prevalence data. To highlight the public health significance of SFI in SSA, we developed a Bayesian model to quantify the incidence of SFI hospital admissions in SSA. Our estimates indicate a mean population-weighted SFI inpatient-admission incidence rate of 18.4 (6.8-31.1, 68% CrI) per 1000 people for the year 2014, across all ages within areas of SSA with stable Plasmodium falciparum transmission. We further estimated a total of 16,200,337 (5,993,249-27,321,779, 68% CrI) SFI hospital admissions. This analysis reveals the significant burden of SFI in hospitals in SSA, but also highlights the paucity of pathogen-specific prevalence and incidence data for SFI in SSA. Future improvements in pathogen-specific diagnostics for causative agents of SFI will increase the abundance of SFI-specific prevalence and incidence data, aid future estimations of SFI burden, and enable clinicians to identify SFI-specific pathogens, administer appropriate treatment and management, and facilitate appropriate antibiotic use.
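The paper's Bayesian model is considerably more elaborate, but the flavour of a posterior admission rate summarized with a 68% credible interval can be illustrated with a conjugate Gamma-Poisson sketch. The event count, person-years and prior below are invented for illustration only.

from scipy.stats import gamma

events, person_years = 1840, 100_000   # invented admission count and exposure
a0, b0 = 0.5, 1e-6                     # weakly informative Gamma prior (assumed)
posterior = gamma(a=a0 + events, scale=1 / (b0 + person_years))
lo, hi = posterior.ppf([0.16, 0.84])   # central 68% credible interval
print(f"rate: {posterior.mean() * 1000:.1f} per 1000 PY "
      f"(68% CrI {lo * 1000:.1f}-{hi * 1000:.1f})")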
Journal Article > Research | Abstract
Pediatr Res. 2020 July 20; Volume 89; DOI:10.1038/s41390-020-1057-5
Rytter MJH, Cichon B, Fabiansen C, Yaméogo CW, Windinmi SZ, et al.
Background: Moderate acute malnutrition (MAM) affects millions of children, increasing their risk of dying from infections. Thymus atrophy may be a marker of malnutrition-associated immunodeficiency, but factors associated with thymus size in children with MAM are unknown, as is the effect of nutritional interventions on thymus size.
Methods: Thymus size was measured by ultrasound in 279 children in Burkina Faso with MAM, diagnosed by low mid-upper arm circumference (MUAC) and/or low weight-for-length z-score (WLZ), who received 12 weeks of treatment with different food supplements as part of a randomized trial. Correlates of thymus size, and of changes in thymus size after treatment and after a further 12 weeks of follow-up, were identified.
Results: Thymus size correlated positively with age, anthropometry and blood haemoglobin, and was smaller in children with malaria. Children with malnutrition diagnosed using MUAC had a smaller thymus than children diagnosed based on WLZ. Thymus size increased during and after treatment, similarly across the different food supplement groups.
Conclusions: In children with MAM, the thymus is smaller in children with anaemia or malaria, and grows with recovery. Assuming that thymus size reflects vulnerability, low MUAC seems to identify more vulnerable children than low WLZ in children with MAM.
Impact: Thymus atrophy is known to be a marker of the immunodeficiency associated with malnutrition in children. In children with moderate malnutrition, we found the thymus to be smaller in children with anaemia or malaria. Assuming that thymus size reflects vulnerability, low MUAC seems to identify more vulnerable children than low weight-for-length. Thymus atrophy appears reversible with recovery from malnutrition, with similar growth seen in children randomized to treatment with different nutritional supplements.
Journal Article > Research | Full Text
Journal of the American Medical Association (JAMA). 2010 July 21; Volume 304 (Issue 3); DOI:10.1001/jama.2010.980
Pujades-Rodríguez M, Balkan S, Arnould L, Brinkhof MW, Calmy A
CONTEXT
Long-term antiretroviral therapy (ART) use in resource-limited countries leads to increasing numbers of patients with HIV taking second-line therapy. Limited access to further therapeutic options makes essential the evaluation of second-line regimen efficacy in these settings.
OBJECTIVES
To investigate failure rates in patients receiving second-line therapy and factors associated with failure and death.
DESIGN, SETTING, AND PARTICIPANTS
Multicohort study of 632 patients > 14 years old receiving second-line therapy for more than 6 months in 27 ART programs in Africa and Asia between January 2001 and October 2008.
MAIN OUTCOME MEASURES
Clinical, immunological, virological, and immunovirological failure (first diagnosed episode of immunological or virological failure) rates, and mortality after 6 months of second-line therapy use. Sensitivity analyses were performed using alternative CD4 cell count thresholds for immunological and immunovirological definitions of failure and for cohort attrition instead of death.
RESULTS
The 632 patients provided 740.7 person-years of follow-up; 119 (18.8%) met World Health Organization failure criteria after a median 11.9 months following the start of second-line therapy (interquartile range [IQR], 8.7-17.0 months), and 34 (5.4%) died after a median 15.1 months (IQR, 11.9-25.7 months). Failure rates were lower in those who changed 2 nucleoside reverse transcriptase inhibitors (NRTIs) instead of 1 (179.2 vs 251.6 per 1000 person-years; incidence rate ratio [IRR], 0.64; 95% confidence interval [CI], 0.42-0.96), and higher in those with the lowest adherence index (383.5 vs 176.0 per 1000 person-years; IRR, 3.14; 95% CI, 1.67-5.90 for <80% vs ≥95% [percentage adherent, as represented by percentage of appointments attended with no delay]). Failure rates increased with lower CD4 cell counts when second-line therapy was started, from 156.3 vs 96.2 per 1000 person-years (IRR, 1.59; 95% CI, 0.78-3.25) for 100-199/µL to 336.8 per 1000 person-years (IRR, 3.32; 95% CI, 1.81-6.08) for <50/µL, vs ≥200/µL; and decreased with time using second-line therapy, from 250.0 vs 123.2 per 1000 person-years (IRR, 1.90; 95% CI, 1.19-3.02) for 6 to 11 months to 212.0 per 1000 person-years (IRR, 1.71; 95% CI, 1.01-2.88) for 12 to 17 months, vs 18 or more months. Mortality for those taking second-line therapy was lower in women (32.4 vs 68.3 per 1000 person-years; hazard ratio [HR], 0.45; 95% CI, 0.23-0.91) and higher in patients with treatment failure of any type (91.9 vs 28.1 per 1000 person-years; HR, 2.83; 95% CI, 1.38-5.80). Sensitivity analyses showed similar results.
CONCLUSIONS
Among patients in Africa and Asia receiving second-line therapy for HIV, treatment failure was associated with low CD4 cell counts at second-line therapy start, use of suboptimal second-line regimens, and poor adherence. Mortality was associated with diagnosed treatment failure.
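Most figures in these results are event rates per 1000 person-years and ratios between them. As a hedged illustration, the sketch below reproduces the overall failure rate implied by the abstract's totals (119 failures over 740.7 person-years) and computes an incidence rate ratio with a log-normal 95% CI for a hypothetical two-group split; the split is invented, not taken from the study.

import numpy as np

def rate_per_1000(events, person_years):
    return events / person_years * 1000

# From the abstract's totals: 119 WHO-defined failures over 740.7 PY.
print(f"overall failure rate: {rate_per_1000(119, 740.7):.1f} per 1000 PY")

def irr_ci(e1, py1, e2, py2, z=1.96):
    """Incidence rate ratio with a log-normal CI (Poisson assumption)."""
    irr = (e1 / py1) / (e2 / py2)
    se = np.sqrt(1 / e1 + 1 / e2)      # SE of log(IRR)
    return irr, irr * np.exp(-z * se), irr * np.exp(z * se)

# Hypothetical split of the cohort into two exposure groups:
irr, lo, hi = irr_ci(40, 223.0, 79, 517.7)
print(f"IRR {irr:.2f} (95% CI {lo:.2f}-{hi:.2f})")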
Conference Material > Video (talk)
Sondo P
MSF Paediatric Days 2022. 2022 November 30