Conference Material > Abstract
Chandna A, PRIORITISE Study Group, Mahajan R, Gautam P, Mwandigha L, et al.
MSF Scientific Days International 2022. 2022 May 9; DOI:10.57740/hxy9-yk07
INTRODUCTION
In locations where few people have received Covid-19 vaccines, health systems remain vulnerable to spikes in SARS-CoV-2 infections. In the event of surges, triage tools to identify patients with moderate Covid-19 suitable for community-based management, potentially incorporating biomarkers, would be useful. In consultation with FIND (Geneva, Switzerland), we shortlisted seven biomarkers for evaluation, all measurable using point-of-care tests and either currently available or in late-stage development.
METHODS
We prospectively recruited unvaccinated adults with moderate symptoms of laboratory-confirmed Covid-19 presenting to two hospitals in India, in order to develop and validate a clinical prediction model to rule out progression to supplemental oxygen requirement. Moderate disease was defined as oxygen saturation (SpO2) ≥ 94% and respiratory rate < 30 breaths per minute (bpm), in the context of systemic symptoms (breathlessness or fever and chest pain, abdominal pain, diarrhoea, or severe myalgia). All patients had clinical observations and blood collected at presentation, and were followed up for 14 days for the primary outcome, defined as any of the following: SpO2 < 94%; respiratory rate > 30 bpm; SpO2/fraction of inspired oxygen (FiO2) ratio < 400; or death. We specified a priori that each model would contain three easily ascertained clinical parameters (age, sex, and SpO2) and one of the seven biomarkers (C-reactive protein (CRP), D-dimer, interleukin-6 (IL-6), neutrophil-to-lymphocyte ratio (NLR), procalcitonin (PCT), soluble triggering receptor expressed on myeloid cells-1 (sTREM-1), or soluble urokinase plasminogen activator receptor (suPAR)), to ensure the models would be implementable in high patient-throughput, low-resource settings. We evaluated the models' discrimination, calibration, and clinical utility in a held-out external temporal validation cohort.
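As a rough illustration of this pre-specified modelling approach, the sketch below fits one such logistic regression (the three fixed clinical parameters plus a single biomarker, here NLR) on simulated data. All variable names, values, and the outcome mechanism are hypothetical; the abstract does not state the authors' exact model specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated development cohort (hypothetical values).
rng = np.random.default_rng(0)
n = 300
dev_cohort = pd.DataFrame({
    "age": rng.integers(18, 80, n),
    "sex": rng.integers(0, 2, n),        # 1 = male
    "spo2": rng.integers(94, 100, n),    # moderate disease: SpO2 >= 94%
    "nlr": rng.lognormal(1.0, 0.5, n),   # neutrophil-to-lymphocyte ratio
})
# Hypothetical outcome: progression to supplemental oxygen within 14 days.
true_logit = -4.5 + 0.04 * dev_cohort["age"] + 0.5 * np.log(dev_cohort["nlr"])
dev_cohort["progressed"] = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))

# Pre-specified structure: three clinical parameters plus one biomarker.
X = sm.add_constant(dev_cohort[["age", "sex", "spo2", "nlr"]])
model = sm.Logit(dev_cohort["progressed"], X).fit(disp=0)
print(model.params)
```

Per the abstract, one such model would be fitted for each of the seven shortlisted biomarkers and then compared on the held-out validation cohort.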
ETHICS
Ethical approval was given by the ethics committees of AIIMS and CMC, India; the Oxford Tropical Research Ethics Committee, UK; and the MSF Ethics Review Board.
ClinicalTrials.gov number, NCT04441372.
RESULTS
426 participants were recruited, of whom 89 (21.0%) met the primary outcome. 257 participants comprised the development cohort and 166 the validation cohort. The three models containing NLR, suPAR, or IL-6 demonstrated promising discrimination (c-statistics: 0.72 to 0.74) and calibration (calibration slopes: 1.01 to 1.05) in the held-out validation cohort. Furthermore, they provided greater utility than a model containing the clinical parameters alone (c-statistic = 0.66; calibration slope = 0.68). Including either NLR or suPAR improved predictive performance such that the ratio of correctly to incorrectly discharged patients increased from 10:1 to 23:1 or 25:1, respectively. Including IL-6 resulted in a similar proportion (~21%) of correctly discharged patients as the clinical model, but without missing any patients requiring supplemental oxygen.
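The discharge-ratio metric quoted above can be illustrated as follows: patients whose predicted risk falls below a rule-out threshold are "discharged", and the ratio compares those who did not progress (correct) with those who did (missed). The threshold and data here are invented; the study's actual decision threshold is not given in the abstract.

```python
import numpy as np

def discharge_counts(y_true: np.ndarray, risk: np.ndarray, threshold: float):
    """Patients below the rule-out threshold are 'discharged'."""
    discharged = risk < threshold
    correct = int(np.sum(discharged & (y_true == 0)))   # did not progress
    missed = int(np.sum(discharged & (y_true == 1)))    # required oxygen
    return correct, missed

# Invented predictions and outcomes.
y_true = np.array([0, 0, 1, 0, 0, 0, 1, 0])
risk = np.array([0.05, 0.10, 0.40, 0.08, 0.30, 0.02, 0.15, 0.22])
correct, missed = discharge_counts(y_true, risk, threshold=0.20)
print(f"correctly:incorrectly discharged = {correct}:{missed}")
```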
CONCLUSION
We present three clinical prediction models that could help clinicians identify patients with moderate Covid-19 suitable for community-based management. These models are readily implementable and, if validated, could be of particular relevance for resource-limited settings.
CONFLICTS OF INTEREST
None declared.
Journal Article > Research > Full Text
Roddy P, Dalrymple U, Jensen TO, Dittrich S, Rao VB, et al.
PLOS One. 2019 July 25 (Issue 7)
Severe febrile illness (SFI) is a common cause of morbidity and mortality across sub-Saharan Africa (SSA). The burden of SFI in SSA is currently unknown, and its estimation is fraught with challenges: diagnostic capacity for SFI in SSA is lacking, so baseline data on the underlying etiology of SFI cases are scarce, as are prevalence data for SFI-specific causative agents. To highlight the public health significance of SFI in SSA, we developed a Bayesian model to quantify the incidence of SFI hospital admissions in SSA. Our estimates indicate a mean population-weighted SFI inpatient-admission incidence rate of 18.4 (6.8-31.1, 68% CrI) per 1000 people for the year 2014, across all ages within areas of SSA with stable Plasmodium falciparum transmission. We further estimated a total of 16,200,337 (5,993,249-27,321,779, 68% CrI) SFI hospital admissions. This analysis reveals the significant burden of SFI in hospitals in SSA, but also highlights the paucity of pathogen-specific prevalence and incidence data for SFI in SSA. Future improvements in pathogen-specific diagnostics for causative agents of SFI will increase the abundance of SFI-specific prevalence and incidence data, aid future estimations of SFI burden, and enable clinicians to identify SFI-specific pathogens, administer appropriate treatment and management, and facilitate appropriate antibiotic use.
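For illustration, the snippet below shows how a point estimate with a 68% credible interval, of the kind reported above, can be summarised from posterior draws. The draws here are simulated from an arbitrary distribution; the paper's actual Bayesian model and its inputs are substantially more elaborate and are not reproduced here.

```python
import numpy as np

# Hypothetical posterior draws of SFI inpatient-admission incidence
# per 1000 people (stand-in for the paper's real posterior).
rng = np.random.default_rng(1)
draws = rng.gamma(shape=4.0, scale=4.6, size=10_000)

mean = draws.mean()
lo, hi = np.percentile(draws, [16, 84])   # central 68% credible interval
print(f"incidence per 1000: {mean:.1f} (68% CrI {lo:.1f}-{hi:.1f})")
```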
Journal Article > ResearchFull Text
Chandna A, Mahajan R, Gautam P, Mwandigha L, Gunasekaran K, et al.
Clin Infect Dis. 2022 March 21; Volume ciac224; DOI:10.1093/cid/ciac224
BACKGROUND
In locations where few people have received COVID-19 vaccines, health systems remain vulnerable to surges in SARS-CoV-2 infections. Tools to identify patients suitable for community-based management are urgently needed.
METHODS
We prospectively recruited adults presenting to two hospitals in India with moderate symptoms of laboratory-confirmed COVID-19 in order to develop and validate a clinical prediction model to rule out progression to supplemental oxygen requirement. The primary outcome was defined as any of the following: SpO2 < 94%; respiratory rate > 30 bpm; SpO2/FiO2 < 400; or death. We specified a priori that each model would contain three clinical parameters (age, sex and SpO2) and one of seven shortlisted biochemical biomarkers measurable using commercially available rapid tests (CRP, D-dimer, IL-6, NLR, PCT, sTREM-1 or suPAR), to ensure the models would be suitable for resource-limited settings. We evaluated discrimination, calibration and clinical utility of the models in a held-out temporal external validation cohort.
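A minimal sketch of the two validation metrics named above, computed on simulated held-out data: the c-statistic is the area under the ROC curve, and the calibration slope is the coefficient obtained by regressing observed outcomes on the model's predicted log-odds (a slope near 1 indicates good calibration). All data and names below are hypothetical.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

# Simulated held-out validation data (hypothetical).
rng = np.random.default_rng(4)
n = 166
pred_risk = rng.uniform(0.02, 0.60, n)      # model's predicted risks
outcome = rng.binomial(1, pred_risk)        # simulated observed outcomes

# Discrimination: the c-statistic is the ROC AUC.
c_statistic = roc_auc_score(outcome, pred_risk)

# Calibration slope: regress outcomes on predicted log-odds.
logit_pred = np.log(pred_risk / (1 - pred_risk))
calib = sm.Logit(outcome, sm.add_constant(logit_pred)).fit(disp=0)
print(f"c-statistic={c_statistic:.2f}, calibration slope={calib.params[1]:.2f}")
```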
RESULTS
426 participants were recruited, of whom 89 (21.0%) met the primary outcome. 257 participants comprised the development cohort and 166 comprised the validation cohort. The three models containing NLR, suPAR or IL-6 demonstrated promising discrimination (c-statistics: 0.72 to 0.74) and calibration (calibration slopes: 1.01 to 1.05) in the validation cohort, and provided greater utility than a model containing the clinical parameters alone.
CONCLUSIONS
We present three clinical prediction models that could help clinicians identify patients with moderate COVID-19 suitable for community-based management. The models are readily implementable and of particular relevance for locations with limited resources.
Journal Article > Research > Full Text
Dittrich S, Tadesse BT, Moussy FG, Chua AC, Zorzet A, et al.
PLOS One. 2016 August 25; Volume 11 (Issue 8); DOI:10.1371/journal.pone.0161721
Acute fever is one of the most common presenting symptoms globally. In order to reduce the empiric use of antimicrobial drugs and improve outcomes, it is essential to improve diagnostic capabilities. In the absence of microbiology facilities in low-income settings, an assay to distinguish bacterial from non-bacterial causes would be a critical first step. To ensure that patient and market needs are met, the requirements of such a test should be specified in a target product profile (TPP). To identify minimal/optimal characteristics for a bacterial vs. non-bacterial fever test, experts from academia and international organizations with expertise in infectious diseases, diagnostic test development, laboratory medicine, global health, and health economics were convened. Proposed TPPs were reviewed by this working group, and consensus characteristics were defined. The working group defined non-severely ill children without malaria infection as the target population for the desired assay. To provide access to the most patients, the test should be deployable to community health centers and informal health settings, and staff should require <2 days of training to perform the assay. Further, given that the aim is to reduce inappropriate antimicrobial use as well as to deliver appropriate treatment for patients with bacterial infections, the group agreed on minimal diagnostic performance requirements of >90% sensitivity and >80% specificity. Other key characteristics, chosen to account for the challenging environments for which the test is intended, included: i) time-to-result <10 min (but maximally <2 hrs); ii) storage conditions of 0-40°C and ≤90% non-condensing humidity, with a minimal shelf life of 12 months; iii) operational conditions of 5-40°C and ≤90% non-condensing humidity; and iv) minimal sample collection needs (50-100 μL capillary blood). This expert approach to defining assay requirements for a bacterial vs. non-bacterial test should guide product development and enable targeted and timely efforts by industry partners and academic institutions.
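One way to operationalise such a TPP is as a machine-readable specification that candidate assays can be screened against; the sketch below encodes the minimal criteria listed above. The field names and the example candidate are illustrative, not part of the paper.

```python
from dataclasses import dataclass

@dataclass
class AssaySpec:
    sensitivity: float          # proportion, 0-1
    specificity: float          # proportion, 0-1
    time_to_result_min: float   # minutes
    shelf_life_months: int
    sample_volume_ul: float     # capillary blood, microlitres

def meets_minimal_tpp(c: AssaySpec) -> bool:
    """Check a candidate against the minimal TPP criteria above."""
    return (c.sensitivity > 0.90
            and c.specificity > 0.80
            and c.time_to_result_min < 120      # maximally < 2 hrs
            and c.shelf_life_months >= 12
            and c.sample_volume_ul <= 100)

candidate = AssaySpec(sensitivity=0.92, specificity=0.85,
                      time_to_result_min=15, shelf_life_months=18,
                      sample_volume_ul=75.0)
print(meets_minimal_tpp(candidate))   # True
```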
Journal Article > Research > Full Text
Chandna A, Mahajan R, Gautam P, Mwandigha L, Dittrich S, et al.
PLOS Glob Public Health. 2023 August 21; Volume 3 (Issue 8); e0001538.; DOI:10.1371/journal.pgph.0001538
The soluble urokinase plasminogen activator receptor (suPAR) has been proposed as a biomarker for risk stratification of patients presenting with acute infections. However, most studies evaluating suPAR have used platform-based assays, the accuracy of which may differ from point-of-care tests capable of informing timely triage in settings without established laboratory capacity. Using samples and data collected during a prospective cohort study of 425 patients presenting with moderate Covid-19 to two hospitals in India, we evaluated the analytical performance and prognostic accuracy of a commercially available rapid diagnostic test (RDT) for suPAR, using an enzyme-linked immunosorbent assay (ELISA) as the reference standard. Our hypothesis was that the suPAR RDT might be useful for triage of patients presenting with moderate Covid-19 irrespective of its analytical performance when compared with the reference test. Although agreement between the two tests was limited (bias = -2.46 ng/mL [95% CI = -2.65 to -2.27 ng/mL]), prognostic accuracy to predict supplemental oxygen requirement was comparable, whether suPAR was used alone (area under the receiver operating characteristic curve [AUC] of RDT = 0.73 [95% CI = 0.68 to 0.79] vs. AUC of ELISA = 0.70 [95% CI = 0.63 to 0.76]; p = 0.12) or as part of a published multivariable prediction model (AUC of RDT-based model = 0.74 [95% CI = 0.66 to 0.83] vs. AUC of ELISA-based model = 0.72 [95% CI = 0.64 to 0.81]; p = 0.78). Lack of agreement between the RDT and ELISA in our cohort warrants further investigation and highlights the importance of assessing candidate point-of-care tests to ensure management algorithms reflect the assay that will ultimately be used to inform patient care. Availability of a quantitative point-of-care test for suPAR opens the door to suPAR-guided risk stratification of patients with Covid-19 and other acute infections in settings with limited laboratory capacity.
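The two analyses described, agreement (bias as the mean of the paired RDT-ELISA differences, as in a Bland-Altman analysis) and comparison of prognostic AUCs, can be sketched as below on simulated data. The bias is built into the simulation for illustration; the paper's formal AUC comparison, which produced the quoted p-values, would use a dedicated test such as DeLong's rather than the simple point comparison shown here.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Simulated paired suPAR measurements (ng/mL) and outcomes (hypothetical).
rng = np.random.default_rng(2)
n = 200
supar_elisa = rng.lognormal(1.5, 0.4, n)
supar_rdt = supar_elisa - 2.46 + rng.normal(0, 0.5, n)  # bias built in
outcome = rng.binomial(1, 1 / (1 + np.exp(-(supar_elisa - 5.0))))

# Agreement: mean of the paired differences.
print(f"bias = {(supar_rdt - supar_elisa).mean():.2f} ng/mL")

# Prognostic discrimination of each assay.
print(f"AUC RDT   = {roc_auc_score(outcome, supar_rdt):.2f}")
print(f"AUC ELISA = {roc_auc_score(outcome, supar_elisa):.2f}")
```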
Journal Article > Research > Full Text
Pelle KG, Rambaud-Althaus C, d’Acremont V, Moran G, Sampath R, et al.
BMJ Glob Health. 2020 February 28; Volume 5 (Issue 2); e002067.; DOI:10.1136/bmjgh-2019-002067
Health workers in low-resource settings often lack the support and tools to follow evidence-based clinical recommendations for diagnosing, treating and managing sick patients. Digital technologies, by combining patient health information and point-of-care diagnostics with evidence-based clinical protocols, can help improve the quality of care and the rational use of resources, and save patient lives. A growing number of electronic clinical decision support algorithms (CDSAs) on mobile devices are being developed and piloted without evidence of safety or impact. Here, we present a target product profile (TPP) for CDSAs aimed at guiding preventive or curative consultations in low-resource settings. This document will help align developer and implementer processes and product specifications with the needs of end users, in terms of quality, safety, performance and operational functionality. To identify the characteristics of CDSAs, a multidisciplinary group of experts (academia, industry and policy makers) with expertise in diagnostic and CDSA development and implementation in low-income and middle-income countries was convened to discuss a draft TPP. The TPP was finalised through a Delphi process to facilitate consensus building. Agreement greater than 75% was reached for all 40 TPP characteristics. In general, experts were in overwhelming agreement that, given that CDSAs provide patient management recommendations, the underlying clinical algorithms should be human-interpretable and evidence-based. Whenever possible, the algorithm's patient management output should take into account pretest disease probabilities and likelihood ratios of clinical and diagnostic predictors. In addition, validation processes should at a minimum show that CDSAs faithfully implement the evidence they are based on, and ideally demonstrate impact on patient health outcomes. In terms of operational needs, CDSAs should be designed to fit within clinic workflows and function in connectivity-challenged and high-volume settings. Data collected through the tool should conform to local patient privacy regulations and international data standards.
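The recommendation that CDSA outputs account for pretest probabilities and likelihood ratios amounts to Bayesian updating on the odds scale: post-test odds = pretest odds × likelihood ratio. A worked example with invented numbers:

```python
def post_test_probability(pretest_prob: float, likelihood_ratio: float) -> float:
    """Bayesian update on the odds scale."""
    pretest_odds = pretest_prob / (1 - pretest_prob)
    post_odds = pretest_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Invented numbers: 10% pretest probability, positive finding with LR+ = 6.
print(f"{post_test_probability(0.10, 6.0):.2f}")   # 0.40
```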
Journal Article > Meta-Analysis > Full Text
Slater HC, Ding XC, Knudson S, Bridges DJ, Moonga H, et al.
BMC Infect Dis. 2022 February 4; Volume 22 (Issue 1); 121.; DOI:10.1186/s12879-021-07023-5
BACKGROUND
A new, more highly sensitive rapid diagnostic test (HS-RDT) for Plasmodium falciparum malaria (Alere™/Abbott Malaria Ag P.f RDT [05FK140], now called NxTek™ Eliminate Malaria Ag Pf) was launched in 2017. The test has already been used in many research studies across a wide range of geographies and use cases.
METHODS
In this study, we collate all published and available unpublished studies that use the HS-RDT and assess its performance in (i) prevalence surveys, (ii) clinical diagnosis, (iii) screening pregnant women, and (iv) active case detection. Two individual-level data sets from asymptomatic populations are used to fit logistic regression models to estimate the probability of HS-RDT positivity based on histidine-rich protein 2 (HRP2) concentration and parasite density. The performance of the HS-RDT in prevalence surveys is estimated by calculating the sensitivity and positive proportion in comparison to polymerase chain reaction (PCR) and conventional malaria RDTs.
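A minimal sketch of the stated regression step, fitting HS-RDT positivity against log HRP2 concentration on simulated data. The paper's models also include parasite density as a predictor, and all values below are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

# Simulated individual-level data (hypothetical values).
rng = np.random.default_rng(3)
n = 500
log10_hrp2 = rng.normal(0.0, 1.5, n)                  # log10 HRP2 concentration
p_pos = 1 / (1 + np.exp(-(0.5 + 2.0 * log10_hrp2)))  # assumed dose-response
hs_rdt_pos = rng.binomial(1, p_pos)

# Probability of HS-RDT positivity as a function of HRP2 concentration.
fit = sm.Logit(hs_rdt_pos, sm.add_constant(log10_hrp2)).fit(disp=0)
print(fit.params)   # intercept and slope on the log-odds scale
```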
RESULTS
Across 18 studies, in prevalence surveys, the mean sensitivity of the HS-RDT is estimated to be 56.1% (95% confidence interval [CI] 46.9-65.4%), compared to 44.3% (95% CI 32.6-56.0%) for a conventional RDT (co-RDT), when using nucleic acid amplification techniques as the reference standard. In studies where prevalence was estimated using both the HS-RDT and a co-RDT, prevalence was on average 46% higher using the HS-RDT. For use in clinical diagnosis and screening of pregnant women, the HS-RDT was not significantly more sensitive than a co-RDT.
CONCLUSIONS
Overall, the evidence presented here suggests that the HS-RDT is more sensitive in asymptomatic populations and could provide a marginal improvement in clinical diagnosis and screening of pregnant women. Although the HS-RDT has more limited temperature-stability and shelf-life claims than co-RDTs, there is no evidence to suggest that, given the test costs the same as current RDTs, its wide use in all four population groups explored here would have any negative impact in terms of malaria misdiagnosis.