Multi-model Ensembling of Probabilistic Streamflow Forecasts: Role of Predictor State Space in Skill Evaluation
Institute of Statistics Mimeo Series 2595


Bibliographic Details
Main Authors: A. Sankarasubramanian, Naresh Devineni, Sujit Ghosh
Other Authors: The Pennsylvania State University CiteSeerX Archives
Format: Text
Language: English
Online Access:http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.67.9004
http://www.stat.ncsu.edu/library/papers/mimeo2595.pdf
Description
Summary: Seasonal streamflow forecasts contingent on climate information are essential for short-term planning and for setting up contingency measures during extreme years. Recent research shows that operational climate forecasts obtained by combining different General Circulation Models (GCMs) have improved predictability/skill in comparison to the predictability from single GCMs [Rajagopalan et al., 2002; Doblas-Reyes et al., 2005]. In this study, we present a new approach for developing multi-model ensembles that combines streamflow forecasts from various models by evaluating their performance in the predictor state space. On this basis, we show that systematic errors in model prediction under specific predictor conditions can be reduced by combining forecasts from multiple models and with climatology. The methodology is demonstrated by obtaining seasonal streamflow forecasts for the Neuse river basin, combining two low-dimensional probabilistic streamflow forecasting models that use SST conditions in the tropical Pacific, the North Atlantic, and off the North Carolina coast. Using the Rank Probability Score (RPS) to evaluate the probabilistic streamflow forecasts developed contingent on SSTs, the methodology gives higher weight, when drawing ensembles, to the model that has better predictability under similar predictor conditions. The performance of the multi-model forecasts is compared with the individual models' performance using various forecast verification measures such as anomaly correlation, root mean square error (RMSE), Rank Probability Skill Score (RPSS), and reliability diagrams. By developing multi-model ensembles for both leave-one-out cross-validated forecasts and adaptive forecasts using the proposed methodology, we show that evaluating model performance in the predictor state space is a better alternative for developing multi-model ensembles than combining models based on their predictability of the marginal distribution.
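The weighting idea in the summary — score each model with RPS, then favor the model that scored better under similar predictor (SST) conditions — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the Euclidean nearest-neighbor definition of "similar predictor conditions", and the inverse-RPS weighting are all assumptions made here for concreteness.

```python
import numpy as np

def rps(forecast_probs, obs_category):
    """Rank Probability Score for one probabilistic forecast over
    ordered categories (e.g., below/near/above-normal flow).
    Lower is better; a perfect categorical forecast scores 0."""
    cum_f = np.cumsum(forecast_probs)          # forecast CDF
    obs = np.zeros_like(forecast_probs)
    obs[obs_category] = 1.0
    cum_o = np.cumsum(obs)                     # observed (step) CDF
    return float(np.sum((cum_f - cum_o) ** 2))

def model_weights(predictors, rps_by_model, x_new, k=10):
    """Illustrative weights for each candidate model at a new
    predictor state x_new: average each model's historical RPS over
    the k nearest predictor states (Euclidean distance in SST space,
    an assumption), then invert and normalize so that lower
    conditional RPS yields a higher probability of drawing ensemble
    members from that model.
    predictors   : (n, d) historical predictor states
    rps_by_model : (n, m) RPS of each of the m models in each year"""
    dist = np.linalg.norm(predictors - x_new, axis=1)
    nearest = np.argsort(dist)[:k]
    avg_rps = rps_by_model[nearest].mean(axis=0)
    inv = 1.0 / np.maximum(avg_rps, 1e-12)     # guard against RPS = 0
    return inv / inv.sum()
```

Ensemble members would then be drawn from each model in proportion to these weights, so the multi-model forecast leans on whichever model has historically verified best under predictor conditions resembling the current one.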