Avalos Martinez de Escobar, Raphaël, Florent Delgrange, Ann Nowé, Guillermo A. Pérez, Diederik M. Roijers
Partially Observable Markov Decision Processes (POMDPs) model environments in which the agent cannot perceive the full state. The agent therefore needs to reason over its past observations and actions. However, simply remembering the full history is generally intractable due to the exponential growth of the history space. A probability distribution modelling the belief over the true state can serve as a sufficient statistic of the history, but computing it requires access to the model of the environment and is often intractable. While state-of-the-art algorithms use recurrent neural networks to compress the observation-action history in the hope of learning a sufficient statistic, they come with no guarantee of success and can lead to sub-optimal policies. To overcome this, we propose the Wasserstein Belief Updater, an RL algorithm that learns a latent model of the POMDP together with an approximation of the belief update. Our approach comes with theoretical guarantees on the quality of the approximation, ensuring that the beliefs it outputs allow for learning the optimal value function.
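For context, the exact belief update that the Wasserstein Belief Updater approximates is the standard Bayesian filter from POMDP theory. The sketch below uses conventional notation (T for the transition function, O for the observation function, S for the state space); it is standard background, not an excerpt from the paper:

% Standard POMDP belief update after taking action a_t and observing o_{t+1};
% conventional notation, assumed here rather than quoted from the paper.
\[
  b_{t+1}(s') \;=\; \frac{O(o_{t+1} \mid s', a_t)\,\sum_{s \in S} T(s' \mid s, a_t)\, b_t(s)}
                         {\sum_{s'' \in S} O(o_{t+1} \mid s'', a_t)\,\sum_{s \in S} T(s'' \mid s, a_t)\, b_t(s)}
\]

The normalizing denominator sums over all candidate next states, which is why the exact update requires access to T and O and scales with the size of the state space; this is the computation that the paper's learned latent approximation sidesteps.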
Avalos, R, Delgrange, F, Nowé, A, Pérez, GA & Roijers, DM 2024, The Wasserstein Believer: Learning Belief Updates for Partially Observable Environments through Reliable Latent Space Models. in The Twelfth International Conference on Learning Representations: ICLR 2024. OpenReview.net, The Twelfth International Conference on Learning Representations, Vienna, Austria, 7/05/24.
Avalos, R., Delgrange, F., Nowé, A., Pérez, G. A., & Roijers, D. M. (2024). The Wasserstein Believer: Learning Belief Updates for Partially Observable Environments through Reliable Latent Space Models. In The Twelfth International Conference on Learning Representations: ICLR 2024. OpenReview.net.
@inproceedings{cb7d09f4154042518aef3f961b2a1c53,
title = " The Wasserstein Believer: Learning Belief Updates for Partially Observable Environments through Reliable Latent Space Models " ,
abstract = " Partially Observable Markov Decision Processes (POMDPs) are used to model environments where the full state cannot be perceived by an agent. As such the agent needs to reason taking into account the past observations and actions. However, simply remembering the full history is generally intractable due to the exponential growth in the history space. Maintaining a probability distribution that models the belief over what the true state is can be used as a sufficient statistic of the history, but its computation requires access to the model of the environment and is often intractable. While SOTA algorithms use Recurrent Neural Networks to compress the observation-action history aiming to learn a sufficient statistic, they lack guarantees of success and can lead to sub-optimal policies. To overcome this, we propose the Wasserstein Belief Updater, an RL algorithm that learns a latent model of the POMDP and an approximation of the belief update. Our approach comes with theoretical guarantees on the quality of our approximation ensuring that our outputted beliefs allow for learning the optimal value function. " ,
keywords = " cs.LG, cs.AI, Reinforcement Learning, POMDP, Model-based, Representation Learning " ,
author = " Raphael Avalos and Florent Delgrange and Ann Now{'e} and P{'e}rez, {Guillermo A.} and Roijers, {Diederik M.} " ,
year = " 2024 " ,
month = may,
day = " 7 " ,
language = " English " ,
booktitle = " The Twelfth International Conference on Learning Representations " ,
publisher = " OpenReview.net " ,
note = " The Twelfth International Conference on Learning Representations, ICLR 2024 Conference date: 07-05-2024 Through 11-05-2024 " ,
url = " https://iclr.cc " ,
}