Deep reinforcement learning of model error corrections ...


Bibliographic Details
Main Authors: Finn, Tobias, Durand, Charlotte, Farchi, Alban, Bocquet, Marc
Format: Conference Object
Language: English
Published: GFZ German Research Centre for Geosciences 2023
Subjects:
Online Access:https://dx.doi.org/10.57757/iugg23-3336
https://gfzpublic.gfz-potsdam.de/pubman/item/item_5019665
Description
Summary: Deep reinforcement learning has empowered recent advances in games like chess and in language modelling with ChatGPT, and can even control nuclear fusion in a tokamak reactor. In the widely used actor-critic framework, one neural network is trained to control while a second neural network evaluates the actions of the first. In this talk, we cast model error correction into a remarkably similar framework in order to learn from temporally sparse observations. A first neural network corrects model errors, while a second, trained simultaneously, estimates the future costs that would arise if the correction were applied. This allows us to circumvent the need for the model adjoint, or for any linear approximation, when learning in a gradient-based optimization framework. We test this novel framework on low-order Lorenz and sea-ice models. Trained on pre-existing trajectories, the actor-critic framework not only corrects persistent model errors but also significantly surpasses linear and ensemble ...
Conference: The 28th IUGG General Assembly (IUGG2023) (Berlin 2023)
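To make the adjoint-free idea concrete, here is a minimal sketch, not the authors' implementation: a linear "actor" proposes a state-dependent correction to a biased Lorenz-63 forecast model, and a quadratic "critic" is fitted to the realized one-step costs; the actor then descends the critic's gradient, so no adjoint of the forecast model is ever needed. The linear/quadratic parameterisations, the rho-bias as the model error, and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.01

def lorenz_step(x, rho):
    # one explicit-Euler step of the Lorenz-63 system
    sigma, beta = 10.0, 8.0 / 3.0
    dx = np.array([sigma * (x[1] - x[0]),
                   x[0] * (rho - x[2]) - x[1],
                   x[0] * x[1] - beta * x[2]])
    return x + dt * dx

def true_step(x):   # "nature", generating the observations
    return lorenz_step(x, rho=28.0)

def model_step(x):  # biased forecast model (persistent model error in rho)
    return lorenz_step(x, rho=25.0)

# sample states on the attractor after a spin-up
x = np.array([1.0, 1.0, 1.0])
for _ in range(1000):
    x = true_step(x)
states = np.array([x := true_step(x) for _ in range(500)])

W = np.zeros((3, 3))  # linear actor: correction a = W @ x

def critic_features(x, a):
    # quadratic critic: cost ≈ w2·a² + a⊗x terms + x + x² + const
    return np.concatenate([a * a, np.outer(a, x).ravel(), x, x * x, [1.0]])

truth = np.array([true_step(s) for s in states])
base = np.array([model_step(s) for s in states])

for _ in range(20):
    # act with exploration noise and record the realized one-step costs
    A = states @ W.T + 0.05 * rng.standard_normal((len(states), 3))
    cost = np.sum((base + A - truth) ** 2, axis=1)
    # fit the critic by least squares on (state, action, cost) samples
    Phi = np.array([critic_features(s, a) for s, a in zip(states, A)])
    w, *_ = np.linalg.lstsq(Phi, cost, rcond=None)
    w2, U = w[:3], w[3:12].reshape(3, 3)
    # actor descends the critic's analytic gradient dc/da = 2*w2*a + U@x
    # -- the forecast model itself is never differentiated
    for _ in range(50):
        A0 = states @ W.T
        grad_a = 2 * w2 * A0 + states @ U.T
        W -= 5e-4 * (grad_a.T @ states) / len(states)

uncorrected = np.mean(np.sum((base - truth) ** 2, axis=1))
corrected = np.mean(np.sum((base + states @ W.T - truth) ** 2, axis=1))
print(f"one-step MSE without/with correction: {uncorrected:.5f} / {corrected:.5f}")
```

In this toy setting the optimal correction is linear in the state (the rho bias enters the second component as 3*dt*x[0]), so the linear actor and quadratic critic suffice; the abstract's neural-network versions would replace both parameterisations.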