Summary: While the accuracy of flood predictions is likely to improve with expanding gauging station networks and robust radar coverage, challenges arise when such sources are spatially limited [1]. For instance, severe rainfall events in the UK arrive mostly from the North Atlantic, where rain gauges are ineffective and radar instruments are limited to a 250 km range. In these cases, NASA's IMERG is an alternative source of precipitation estimates offering global coverage at 0.1-degree spatial resolution and 30-minute intervals. In the UK's case, IMERG estimates offer an opportunity to extend the zone of rainfall detection beyond the radar range and increase the lead time of flood risk predictions [2]. This study investigates the ability of machine learning (ML) models to capture the patterns relating rainfall to stream level, observed over 20 years on the River Crane in the UK. To compare performance, the models use two sources of rainfall data as input for stream-level prediction: IMERG final-run estimates and rain gauge readings. Among the three IMERG products (early, late, and final), the final run was selected for this study because of its higher accuracy in rainfall estimates. The rainfall data were retrieved from rain gauges and from the pixel in the IMERG grid closest to the point where stream-level readings were taken. These datasets were assessed for their correlation with stream level using cross-correlation analysis. The assessment revealed only a small difference between the lags and correlation coefficients linking stream level to the IMERG dataset and those linking stream level to the gauge datasets. To evaluate and compare the performance of each dataset as input to ML models for stream-level prediction, three models were selected: NARX, LSTM, and GRU. Both inputs performed well in the NARX model and produced stream-level predictions of high precision, with an MSE of 1.5×10⁻⁵ using gauge data and 1.9×10⁻⁵ using IMERG data. The LSTM model also ...
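
As a rough illustration of the cross-correlation step described above, the sketch below (Python) finds the lag at which rainfall correlates most strongly with stream level. It is a minimal sketch, assuming the rainfall and stream-level series are already aligned on a common time step; the file name and column names are hypothetical, not those of the study.

```python
import pandas as pd

def best_lag(rainfall: pd.Series, stage: pd.Series, max_lag: int = 96):
    """Return (lag, correlation) where `lag` is the number of time steps by
    which rainfall leads stream level at the maximum Pearson correlation."""
    results = []
    for lag in range(max_lag + 1):
        # Shift rainfall forward so that stream level at time t is compared
        # with rainfall observed `lag` steps earlier.
        r = stage.corr(rainfall.shift(lag))
        results.append((lag, r))
    return max(results, key=lambda t: t[1])

# Hypothetical usage: aligned CSV with "rain_mm" and "stage_m" columns.
df = pd.read_csv("crane_rainfall_stage.csv", parse_dates=["time"], index_col="time")
lag, r = best_lag(df["rain_mm"], df["stage_m"])
print(f"Best lag: {lag} steps, correlation: {r:.2f}")
```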
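Similarly, a minimal sketch of one of the recurrent models, here a GRU in Keras, trained on lagged rainfall windows to predict stream level and scored with MSE. The window length, layer sizes, train/test split, and the placeholder random data are illustrative assumptions, not the configuration or data used in the study.

```python
import numpy as np
from tensorflow import keras

def make_windows(rain: np.ndarray, stage: np.ndarray, window: int = 48):
    """Build (samples, window, 1) rainfall sequences and the stream level
    one step after the end of each window."""
    X = np.stack([rain[i:i + window] for i in range(len(rain) - window)])
    y = stage[window:]
    return X[..., None], y

# Assumed: `rain` and `stage` are aligned 1-D arrays scaled to [0, 1].
rain = np.random.rand(5000).astype("float32")   # placeholder data
stage = np.random.rand(5000).astype("float32")  # placeholder data
X, y = make_windows(rain, stage)
split = int(0.8 * len(X))

model = keras.Sequential([
    keras.layers.Input(shape=(X.shape[1], 1)),
    keras.layers.GRU(32),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:split], y[:split], epochs=5, batch_size=64, verbose=0)

# Mean squared error on the held-out portion of the series.
mse = model.evaluate(X[split:], y[split:], verbose=0)
print(f"Test MSE: {mse:.2e}")
```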