Other > Pre-Print
bioRxiv. 2017 August 18; DOI:10.1101/177451
Funk S, Camacho A, Kucharski AJ, Lowe R, Eggo RM, et al.
Real-time forecasts based on mathematical models can inform critical decision-making during infectious disease outbreaks. Yet, epidemic forecasts are rarely evaluated during or after the event, and there is little guidance on the best metrics for assessment. Here, we propose an evaluation approach that disentangles different components of forecasting ability using metrics that separately assess the calibration, sharpness and unbiasedness of forecasts. This makes it possible to assess not just how close a forecast was to reality but also how well uncertainty has been quantified. We used this approach to analyse the performance of weekly forecasts we generated in real time in Western Area, Sierra Leone, during the 2013–16 Ebola epidemic in West Africa. We investigated a range of forecast model variants based on the model fits generated at the time with a semi-mechanistic model, and found that good probabilistic calibration was achievable at short time horizons of one or two weeks ahead but models were increasingly inaccurate at longer forecasting horizons. This suggests that forecasts may have been of good enough quality to inform decision making requiring predictions a few weeks ahead of time but not longer, reflecting the high level of uncertainty in the processes driving the trajectory of the epidemic. Comparing forecasts based on the semi-mechanistic model to simpler null models showed that the best semi-mechanistic model variant performed better than the null models with respect to probabilistic calibration, and that this would have been identified from the earliest stages of the outbreak. As forecasts become a routine part of the toolkit in public health, standards for evaluation of performance will be important for assessing quality and improving credibility of mathematical models, and for elucidating difficulties and trade-offs when aiming to make the most useful and reliable forecasts.
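The evaluation approach described in this abstract scores each forecast on three separable properties: probabilistic calibration (do observations behave like draws from the forecast distribution?), sharpness (how concentrated is the forecast?), and bias (does the forecast sit systematically above or below the data?). As a rough illustration only, the following Python sketch computes sample-based versions of these metrics on synthetic data; the randomised PIT, the median absolute deviation about the median, and the [-1, 1] bias score used here are simplified stand-ins, not the exact definitions from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def pit(samples, observed):
    """Randomised probability integral transform for count forecasts.
    samples: (n_weeks, n_draws) forecast draws; observed: (n_weeks,).
    If the forecasts are well calibrated, PIT values are ~U[0, 1]."""
    lower = (samples < observed[:, None]).mean(axis=1)    # F(x - 1)
    upper = (samples <= observed[:, None]).mean(axis=1)   # F(x)
    return rng.uniform(lower, upper)

def sharpness(samples):
    """Median absolute deviation about the median of each forecast;
    smaller values mean sharper (more concentrated) forecasts."""
    med = np.median(samples, axis=1, keepdims=True)
    return np.median(np.abs(samples - med), axis=1)

def bias(samples, observed):
    """Score in [-1, 1]: 0 is unbiased, positive values mean the
    forecast tends to sit above the observed counts."""
    return 1.0 - 2.0 * (samples <= observed[:, None]).mean(axis=1)

# Synthetic stand-in for ten weekly forecasts of case counts.
observed = rng.poisson(lam=60, size=10)
samples = rng.poisson(lam=60, size=(10, 1000))

print("calibration (KS p-value vs uniform):",
      stats.kstest(pit(samples, observed), "uniform").pvalue)
print("mean sharpness:", sharpness(samples).mean())
print("mean bias:", bias(samples, observed).mean())
```

A well-calibrated forecast yields approximately uniform PIT values, so a small KS p-value flags miscalibration; sharpness and bias then distinguish between forecasts that are equally well calibrated.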
Journal Article > Research
PLoS Comput Biol. 2019 February 11; Volume 15 (Issue 2); e1006785; DOI:10.1371/journal.pcbi.1006785
Funk S, Camacho A, Kucharski AJ, Lowe R, Eggo RM, et al.
Real-time forecasts based on mathematical models can inform critical decision-making during infectious disease outbreaks. Yet, epidemic forecasts are rarely evaluated during or after the event, and there is little guidance on the best metrics for assessment. Here, we propose an evaluation approach that disentangles different components of forecasting ability using metrics that separately assess the calibration, sharpness and bias of forecasts. This makes it possible to assess not just how close a forecast was to reality but also how well uncertainty has been quantified. We used this approach to analyse the performance of weekly forecasts we generated in real time for Western Area, Sierra Leone, during the 2013–16 Ebola epidemic in West Africa. We investigated a range of forecast model variants based on the model fits generated at the time with a semi-mechanistic model, and found that good probabilistic calibration was achievable at short time horizons of one or two weeks ahead but model predictions were increasingly unreliable at longer forecasting horizons. This suggests that forecasts may have been of good enough quality to inform decision making based on predictions a few weeks ahead of time but not longer, reflecting the high level of uncertainty in the processes driving the trajectory of the epidemic. Comparing forecasts based on the semi-mechanistic model to simpler null models showed that the best semi-mechanistic model variant performed better than the null models with respect to probabilistic calibration, and that this would have been identified from the earliest stages of the outbreak. As forecasts become a routine part of the toolkit in public health, standards for evaluation of performance will be important for assessing quality and improving credibility of mathematical models, and for elucidating difficulties and trade-offs when aiming to make the most useful and reliable forecasts.
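The published version also compares the semi-mechanistic model against simpler null models. For context only, here is a hedged sketch of one such baseline, assuming a deliberately naive "no-change" model in which future weekly counts are drawn around the last observed count; the paper's actual null models may be defined differently.

```python
import numpy as np

rng = np.random.default_rng(1)

def no_change_forecast(history, horizon, n_draws=1000):
    """Naive baseline: each future week is Poisson-distributed around the
    last observed count, so the forecast carries no trend information.
    A hypothetical stand-in for the simpler null models mentioned above."""
    return rng.poisson(lam=history[-1], size=(horizon, n_draws))

weekly_cases = np.array([12, 18, 25, 31, 44, 58, 71])  # synthetic series
samples = no_change_forecast(weekly_cases, horizon=3)

# Central 50% and 90% intervals per forecast week; a mechanistic model
# should beat this baseline under the calibration/sharpness/bias metrics.
print(np.percentile(samples, [5, 25, 50, 75, 95], axis=1))
```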
Journal Article > Review
One Earth. 2022 April 15; Volume 5 (Issue 4); 336–350; DOI:10.1016/j.oneear.2022.03.011
Alcayna T, Fletcher I, Gibb R, Tremblay LL, Funk S, et al.
Outbreaks of climate-sensitive infectious diseases (CSID) in the aftermath of extreme climatic events, such as floods, droughts, tropical cyclones, and heatwaves, are of high public health concern. Recent advances in the forecasting of extreme climatic events have prompted growing interest in the development of prediction models to anticipate CSID risk, yet the evidence base linking extreme climatic events to CSID outbreaks has not yet been collated and synthesized. This review identifies potential hydrometeorological triggers of outbreaks and highlights gaps in knowledge about the causal chain between extreme events and outbreaks. We found stronger evidence and higher agreement on the links between extreme climatic events and water-borne diseases than for vector-borne diseases. In addition, we found a substantial lack of evidence on the links between extreme climatic events and underlying vulnerability and exposure factors. This review helps inform trigger design for CSID prediction models for anticipatory public health action.