Avalos Martinez de Escobar, Raphaël, Florent Delgrange, Ann Nowé, Guillermo A. Pérez, Diederik M. Roijers
Proc. of the Adaptive and Learning Agents Workshop (ALA 2023)
Partially Observable Markov Decision Processes (POMDPs) are useful tools to model environments whose full state cannot be perceived by the agent. As such, the agent must reason over its past observations and actions. However, simply remembering the full history is generally intractable, due to the exponential growth of the history space. Maintaining a probability distribution that models the belief over the true state can serve as a sufficient statistic of the history, but computing it requires access to the model of the environment and is likewise intractable. Current state-of-the-art algorithms use Recurrent Neural Networks (RNNs) to compress the observation-action history with the aim of learning a sufficient statistic, but they lack guarantees of success and can lead to suboptimal policies. To overcome this, we propose the Wasserstein-Belief-Updater (WBU), an RL algorithm that learns a latent model of the POMDP and an approximation of the belief update. Our approach comes with theoretical guarantees on the quality of our approximation, ensuring that the beliefs we output allow for learning the optimal value function.
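For context, the exact belief update that WBU approximates is the standard Bayesian filter over the hidden state. Below is a minimal NumPy sketch for a finite POMDP; the function name belief_update and the tensors T and Z are illustrative assumptions, not part of the paper's implementation. It makes concrete why the exact update is problematic: it presupposes access to the transition and observation models and costs O(|S|^2) per step.

import numpy as np

def belief_update(b, a, o, T, Z):
    # b: current belief over states, shape (S,)
    # a: action index; o: observation index
    # T: transition model, T[a, s, s'] = P(s' | s, a), shape (A, S, S)
    # Z: observation model, Z[a, s', o] = P(o | s', a), shape (A, S, O)
    predicted = b @ T[a]             # predict: sum_s b(s) * P(s' | s, a)
    unnorm = predicted * Z[a, :, o]  # correct: weight by observation likelihood
    return unnorm / unnorm.sum()    # renormalise to a proper distribution

# Toy 2-state, 1-action, 2-observation POMDP (illustrative numbers).
T = np.array([[[0.9, 0.1], [0.2, 0.8]]])
Z = np.array([[[0.7, 0.3], [0.1, 0.9]]])
b = belief_update(np.array([0.5, 0.5]), a=0, o=1, T=T, Z=Z)  # -> approx [0.29, 0.71]

This is exactly the access the abstract notes is unavailable in practice; WBU sidesteps it by learning a latent model of the POMDP in which an approximate belief update can be carried out.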
Avalos, R, Delgrange, F, Nowé, A, Pérez, GA & Roijers, DM 2023, 'The Wasserstein Believer: Learning Belief Updates for Partially Observable Environments through Reliable Latent Space Models', Proc. of the Adaptive and Learning Agents Workshop (ALA 2023), pp. 1-21.
Avalos, R., Delgrange, F., Nowé, A., Pérez, G. A., & Roijers, D. M. (2023). The Wasserstein Believer: Learning Belief Updates for Partially Observable Environments through Reliable Latent Space Models. Proc. of the Adaptive and Learning Agents Workshop (ALA 2023), 1-21.
@article{27de2a46557d4dd78f889b9cd12ded7b,
title = " The Wasserstein Believer: Learning Belief Updates for Partially Observable Environments through Reliable Latent Space Models " ,
abstract = " Partially Observable Markov Decision Processes (POMDPs) are useful tools to model environments where the full state cannot be perceived by an agent. As such the agent needs to reason taking into account the past observations and actions. However, simply remembering the full history is generally intractable due to the exponential growth in the history space. Keeping a probability distribution that models the belief over what the true state is can be used as a sufficient statistic of the history, but its computation requires access to the model of the environment and is also intractable. Current state-of-the-art algorithms use Recurrent Neural Networks (RNNs) to compress the observation-action history aiming to learn a sufficient statistic, but they lack guarantees of success and can lead to suboptimal policies. To overcome this, we propose the Wasserstein-Belief-Updater (WBU), an RL algorithm that learns a latent model of the POMDP and an approximation of the belief update. Our approach comes with theoretical guarantees on the quality of our approximation ensuring that our outputted beliefs allow for learning the optimal value function. " ,
keywords = " Reinforcement Learning, Representation Learning, Partial Observability, Model Based " ,
author = " Raphael Avalos and Florent Delgrange and Ann Now{'e} and P{'e}rez, {Guillermo A.} and Roijers, {Diederik M.} " ,
year = " 2023 " ,
month = may,
day = " 29 " ,
language = " English " ,
pages = " 121 " ,
journal = " Proc. of the Adaptive and Learning Agents Workshop (ALA 2023) " ,
note = " 2023 Adaptive and Learning Agents Workshop at AAMAS, ALA 2023 Conference date: 29-05-2023 Through 30-05-2023 " ,
url = " https://alaworkshop2023.github.io " ,
}