Many recent successful off-policy multi-agent reinforcement learning (MARL) algorithms for cooperative partially observable environments focus on finding factorized value functions, leading to convoluted network structures. Building on the structure of independent Q-learners, our LAN algorithm takes a radically different approach, leveraging a dueling architecture to learn a decentralized best-response policy for each agent via individual advantage functions. The learning is stabilized by a centralized critic whose primary objective is to reduce the moving-target problem of the individual advantages. The critic, whose network size is independent of the number of agents, is cast aside after learning. Evaluation on the StarCraft II multi-agent challenge benchmark shows that LAN reaches state-of-the-art performance and is highly scalable with respect to the number of agents, opening up a promising alternative direction for MARL research.
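The per-agent decomposition described in the abstract can be sketched in a few lines: each agent acts greedily on Q-values formed from a shared centralized value plus its own local advantage, in the spirit of a dueling architecture. This is a hypothetical pure-Python illustration, not the paper's exact formulation; the function names and the mean-advantage normalization (the usual dueling-network identifiability trick) are assumptions.

```python
def local_q_values(central_value, local_advantages):
    """Combine a centralized state value V with each agent's local
    advantages A_i to form per-agent Q-values (illustrative sketch):
        Q_i(a) = V + A_i(a) - mean_a' A_i(a')
    Whether LAN normalizes advantages this way is an assumption here.
    """
    q = {}
    for agent, advs in local_advantages.items():
        baseline = sum(advs) / len(advs)  # mean advantage over actions
        q[agent] = [central_value + a - baseline for a in advs]
    return q


def greedy_actions(q_values):
    """Each agent selects its action greedily from its own Q-values,
    so execution is fully decentralized (no critic needed)."""
    return {agent: max(range(len(qs)), key=qs.__getitem__)
            for agent, qs in q_values.items()}
```

After training, only the per-agent advantage heads are needed to act, which matches the abstract's point that the critic is cast aside at execution time.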
Avalos, R, Reymond, M, Nowé, A & Roijers, DM 2023, 'Local Advantage Networks for Multi-Agent Reinforcement Learning in Dec-POMDPs', Transactions on Machine Learning Research (TMLR). <https://openreview.net/forum?id=adpKzWQunW>
Avalos, R., Reymond, M., Nowé, A., & Roijers, D. M. (2023). Local Advantage Networks for Multi-Agent Reinforcement Learning in Dec-POMDPs. Transactions on Machine Learning Research (TMLR). https://openreview.net/forum?id=adpKzWQunW
@article{29f8986d040040d8aa88963a6e2ebe40,
title = "Local Advantage Networks for Multi-Agent Reinforcement Learning in Dec-POMDPs",
abstract = "Many recent successful off-policy multi-agent reinforcement learning (MARL) algorithms for cooperative partially observable environments focus on finding factorized value functions, leading to convoluted network structures. Building on the structure of independent Q-learners, our LAN algorithm takes a radically different approach, leveraging a dueling architecture to learn a decentralized best-response policy for each agent via individual advantage functions. The learning is stabilized by a centralized critic whose primary objective is to reduce the moving-target problem of the individual advantages. The critic, whose network size is independent of the number of agents, is cast aside after learning. Evaluation on the StarCraft II multi-agent challenge benchmark shows that LAN reaches state-of-the-art performance and is highly scalable with respect to the number of agents, opening up a promising alternative direction for MARL research.",
author = "Rapha{\"e}l Avalos and Mathieu Reymond and Ann Now{\'e} and Roijers, {Diederik M}",
note = "Publisher Copyright: {\textcopyright} 2023, Transactions on Machine Learning Research. All rights reserved.",
year = "2023",
month = oct,
language = "English",
journal = "Transactions on Machine Learning Research (TMLR)",
issn = "2835-8856",
}