Axel Abels, Diederik M. Roijers, Tom Lenaerts, Ann Nowé, Denis Steckelmacher
Many real-world decision problems are characterized by multiple conflicting objectives which must be balanced based on their relative importance. In the dynamic weights setting, the relative importance changes over time, and specialized algorithms that deal with such change, such as the tabular Reinforcement Learning (RL) algorithm by Natarajan & Tadepalli (2005), are required. However, this earlier work is not feasible for RL settings that necessitate the use of function approximators. We generalize across weight changes and high-dimensional inputs by proposing a multi-objective Q-network whose outputs are conditioned on the relative importance of objectives, and introduce Diverse Experience Replay (DER) to counter the inherent non-stationarity of the dynamic weights setting. We perform an extensive experimental evaluation, compare our methods to adapted algorithms from Deep Multi-Task/Multi-Objective RL, and show that our proposed network in combination with DER dominates these adapted algorithms across weight change scenarios and problem domains.
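The abstract's central mechanism, a Q-network whose vector-valued action values are conditioned on the current weight vector and scalarized by a linear combination, can be illustrated with a minimal sketch. This is not the authors' implementation; the class name WeightConditionedQNet, the layer sizes, and the simple concatenation of state and weights are assumptions made for the example.

import torch
import torch.nn as nn

class WeightConditionedQNet(nn.Module):
    def __init__(self, state_dim, n_actions, n_objectives, hidden=64):
        super().__init__()
        self.n_actions = n_actions
        self.n_objectives = n_objectives
        # Conditioning on the weight vector lets one network serve all
        # relative importances instead of retraining after each change.
        self.net = nn.Sequential(
            nn.Linear(state_dim + n_objectives, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_actions * n_objectives),
        )

    def forward(self, state, weights):
        x = torch.cat([state, weights], dim=-1)
        # Per-objective Q-values for every action: (batch, actions, objectives).
        q_vec = self.net(x).view(-1, self.n_actions, self.n_objectives)
        # Scalarize with the current weights to act greedily under them.
        q_scalar = (q_vec * weights.unsqueeze(1)).sum(dim=-1)
        return q_vec, q_scalar

net = WeightConditionedQNet(state_dim=4, n_actions=3, n_objectives=2)
state = torch.zeros(1, 4)
weights = torch.tensor([[0.7, 0.3]])  # current relative importance of objectives
_, q = net(state, weights)
greedy_action = q.argmax(dim=-1)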
Abels, A, Roijers, DM, Lenaerts, T, Nowé, A & Steckelmacher, D 2019, 'Dynamic weights in multi-objective deep reinforcement learning', in 36th International Conference on Machine Learning, ICML 2019, vol. 97, International Machine Learning Society (IMLS), pp. 11-20, International Conference on Machine Learning, 9/06/19. <https://proceedings.mlr.press/v97/abels19a.html>
Abels, A., Roijers, D. M., Lenaerts, T., Nowé, A., & Steckelmacher, D. (2019). Dynamic weights in multi-objective deep reinforcement learning. In 36th International Conference on Machine Learning, ICML 2019 (Vol. 97, pp. 11-20). International Machine Learning Society (IMLS). https://proceedings.mlr.press/v97/abels19a.html
@inproceedings{9b96d072d9cd4019bd0c97e296b80e19,
title = "Dynamic weights in multi-objective deep reinforcement learning",
abstract = "Many real-world decision problems are characterized by multiple conflicting objectives which must be balanced based on their relative importance. In the dynamic weights setting, the relative importance changes over time, and specialized algorithms that deal with such change, such as the tabular Reinforcement Learning (RL) algorithm by Natarajan \& Tadepalli (2005), are required. However, this earlier work is not feasible for RL settings that necessitate the use of function approximators. We generalize across weight changes and high-dimensional inputs by proposing a multi-objective Q-network whose outputs are conditioned on the relative importance of objectives, and introduce Diverse Experience Replay (DER) to counter the inherent non-stationarity of the dynamic weights setting. We perform an extensive experimental evaluation, compare our methods to adapted algorithms from Deep Multi-Task/Multi-Objective RL, and show that our proposed network in combination with DER dominates these adapted algorithms across weight change scenarios and problem domains.",
author = "Axel Abels and Roijers, {Diederik M.} and Tom Lenaerts and Ann Now{\'e} and Denis Steckelmacher",
year = "2019",
language = "English",
isbn = "9781510886988",
volume = "97",
series = "36th International Conference on Machine Learning, ICML 2019",
publisher = "International Machine Learning Society (IMLS)",
pages = "11--20",
booktitle = "36th International Conference on Machine Learning, ICML 2019",
url = "https://proceedings.mlr.press/v97/abels19a.html",
note = "International Conference on Machine Learning, ICML 2019; Conference date: 09-06-2019 through 15-06-2019",
}