Jumpiness in ensemble forecasts of Atlantic tropical cyclone tracks

Bibliographic Details
Published in: Weather and Forecasting
Main Authors: Richardson, David S., Cloke, Hannah L., Methven, John A., Pappenberger, Florian
Format: Article in Journal/Newspaper
Language: English
Published: American Meteorological Society 2024
Online Access:https://centaur.reading.ac.uk/114359/
https://centaur.reading.ac.uk/114359/3/114359%20VoR.pdf
https://centaur.reading.ac.uk/114359/1/DRichardson_TC_jumpiness_WAF_revision_clean.docx
Description
Summary: We investigate the run-to-run consistency (jumpiness) of ensemble forecasts of tropical cyclone tracks from three global centers: ECMWF, the Met Office and NCEP. We use a divergence function to quantify the change in cross-track position between consecutive ensemble forecasts initialized at 12-hour intervals. Results for the 2019-2021 North Atlantic hurricane seasons show that the jumpiness varied substantially between cases and centers, with no common cause across the different ensemble systems. Recent upgrades to the Met Office and NCEP ensembles reduced their overall jumpiness to match that of the ECMWF ensemble. The average divergence over the set of cases provides an objective measure of the expected change in cross-track position from one forecast to the next. For example, a user should expect on average that the ensemble mean position will change by around 80-90 km in the cross-track direction between a forecast for 120 hours ahead and the updated forecast made 12 hours later for the same valid time. This quantitative information can support users’ decision-making, for example in deciding whether to act now or wait for the next forecast. We did not find any link between jumpiness and skill, indicating that users should not rely on the consistency between successive forecasts as a measure of confidence. Instead, we suggest that users should use ensemble spread and probabilistic information to assess forecast uncertainty, and consider multi-model combinations to reduce the effects of jumpiness.
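
The cross-track comparison described in the summary can be illustrated with a minimal sketch. This is a toy calculation, not the paper's actual divergence function: it assumes a flat tangent-plane projection of latitude/longitude positions and a hypothetical interface in which the earlier run's ensemble-mean track defines the local along-track direction, so that the displacement of the updated forecast position (same valid time, run issued 12 hours later) can be resolved into a cross-track component.

```python
import math

EARTH_RADIUS_KM = 6371.0

def local_xy(lat, lon, ref_lat, ref_lon):
    """Project (lat, lon) to approximate tangent-plane offsets (km) from a reference point."""
    x = math.radians(lon - ref_lon) * EARTH_RADIUS_KM * math.cos(math.radians(ref_lat))
    y = math.radians(lat - ref_lat) * EARTH_RADIUS_KM
    return x, y

def cross_track_change(prev_pos, prev_next_pos, new_pos):
    """Cross-track component (km) of the change in forecast position.

    prev_pos, prev_next_pos: consecutive (lat, lon) points along the earlier
        run's ensemble-mean track, defining the local track direction.
    new_pos: (lat, lon) from the run issued 12 h later, same valid time.

    Hypothetical interface for illustration; the divergence function used in
    the paper may differ in detail.
    """
    # Unit vector along the earlier run's track direction
    tx, ty = local_xy(*prev_next_pos, *prev_pos)
    norm = math.hypot(tx, ty)
    tx, ty = tx / norm, ty / norm
    # Displacement of the updated position relative to the earlier forecast
    dx, dy = local_xy(*new_pos, *prev_pos)
    # Component perpendicular to the track direction (signed)
    return dx * (-ty) + dy * tx
```

For a storm moving due north, a half-degree longitude shift in the updated forecast yields a cross-track change of roughly 50 km at 20°N, while a position shifted purely along the track gives zero, which is the sense in which the paper's average (around 80-90 km at 120 hours) summarizes forecast-to-forecast jumpiness.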