Roijers, D., Zintgraf, L., & Nowé, A. (2017). Interactive Thompson Sampling for Multi-Objective Multi-Armed Bandits. In J. Rothe (Ed.), Algorithmic Decision Theory: 5th International Conference, ADT 2017, Luxembourg, Luxembourg, October 25–27, 2017, Proceedings (pp. 18–34). Lecture Notes in Computer Science, Vol. 10576. Springer. https://doi.org/10.1007/978-3-319-67504-6_2
@inproceedings{889f214fdf54434ab5f179d8a329f98a,
title = "Interactive Thompson Sampling for Multi-Objective Multi-Armed Bandits",
abstract = "In multi-objective reinforcement learning (MORL), much attention is paid to generating optimal solution sets for unknown utility functions of users, based on the stochastic reward vectors only. In online MORL on the other hand, the agent will often be able to elicit preferences from the user, enabling it to learn about the utility function of its user directly. In this paper, we study online MORL with user interaction employing the multi-objective multi-armed bandit (MOMAB) setting — perhaps the most fundamental MORL setting. We use Bayesian learning algorithms to learn about the environment and the user simultaneously. Specifically, we propose two algorithms: Utility-MAP UCB (umap-UCB) and Interactive Thompson Sampling (ITS), and show empirically that the performance of these algorithms in terms of regret closely approximates the regret of UCB and regular Thompson sampling provided with the ground truth utility function of the user from the start, and that ITS outperforms umap-UCB.",
author = "Diederik Roijers and Luisa Zintgraf and Ann Now{\'e}",
year = "2017",
month = oct,
day = "25",
doi = "10.1007/978-3-319-67504-6_2",
language = "English",
isbn = "978-3-319-67503-9",
series = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
publisher = "Springer",
volume = "10576",
pages = "18--34",
editor = "Rothe, J{\"o}rg",
booktitle = "Algorithmic Decision Theory - 5th International Conference, ADT 2017, Proceedings",
note = "5th International Conference on Algorithmic Decision Theory, ADT 2017; Conference date: 25-10-2017 through 27-10-2017",
url = "https://sma.uni.lu/adt2017/",
}
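As background to the abstract above: the paper's baseline is regular Thompson sampling given the ground-truth utility function from the start. The sketch below illustrates that baseline on a multi-objective Gaussian bandit whose vector rewards are scalarised by a known linear utility. It is a minimal illustrative sketch, not the paper's ITS or umap-UCB algorithms; all function names, priors, and parameters are assumptions chosen for simplicity (standard-normal prior, unit-variance Gaussian rewards).

```python
import numpy as np

def thompson_sampling_known_utility(true_means, utility_weights,
                                    horizon=2000, noise_sd=1.0, seed=0):
    """Thompson sampling on a multi-objective Gaussian bandit whose
    vector-valued rewards are scalarised by a KNOWN linear utility.
    Standard-normal prior and unit-variance Gaussian likelihood give a
    conjugate normal posterior per (arm, objective)."""
    rng = np.random.default_rng(seed)
    n_arms, n_obj = true_means.shape
    sums = np.zeros((n_arms, n_obj))   # running sum of observed reward vectors
    counts = np.zeros(n_arms)          # number of pulls per arm
    total_utility = 0.0
    for _ in range(horizon):
        # Sample a plausible mean reward vector for each arm from its posterior.
        post_mean = sums / (counts[:, None] + 1.0)
        post_sd = 1.0 / np.sqrt(counts[:, None] + 1.0)
        sampled = rng.normal(post_mean, post_sd)
        # Act greedily w.r.t. the sampled means under the known utility.
        arm = int(np.argmax(sampled @ utility_weights))
        reward = rng.normal(true_means[arm], noise_sd)  # stochastic reward vector
        sums[arm] += reward
        counts[arm] += 1
        total_utility += reward @ utility_weights
    return total_utility / horizon

# Two arms, two objectives; under weights (0.7, 0.3) arm 0 is optimal
# (utility 0.7 vs. 0.3), so the average per-step utility should approach 0.7.
means = np.array([[1.0, 0.0], [0.0, 1.0]])
avg_u = thompson_sampling_known_utility(means, np.array([0.7, 0.3]))
```

The interactive setting studied in the paper replaces the known `utility_weights` with a posterior over the user's utility function, learned from elicited preferences alongside the reward posteriors.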