Deep Reinforcement Learning is one of the state-of-the-art methods for producing near-optimal system controllers. However, deep RL algorithms train a deep neural network, which lacks transparency, posing challenges when the controller has to meet regulations or foster trust. To alleviate this, one could transfer the learned behaviour into a model that is human-readable by design using knowledge distillation. Often this is done with a single model, which mimics the original model on average but can struggle in more dynamic situations. A key challenge is that this simpler model should strike the right balance between flexibility and complexity, or, equivalently, between bias and accuracy. We propose a new model-agnostic method to divide the state space into regions in which a simplified, human-understandable model can operate. In this paper, we use Voronoi partitioning to find regions where linear models can achieve performance similar to the original controller. We evaluate our approach on a gridworld environment and a classic control task. We observe that our proposed distillation to locally-specialized linear models produces explainable policies, and show that the distilled policies match or even slightly outperform the black-box policy they are distilled from.
Deproost, S, Steckelmacher, D & Nowe, A 2025, 'Explainable RL Policies by Distilling to Locally-Specialized Linear Policies with Voronoi State Partitioning', Proceedings of the Benelux Conference on Artificial Intelligence, pp. 1-21. <https://bnaic2025.unamur.be/accepted-submissions/accepted_poster/006%20-%20Explainable%20RL%20Policies%20by%20Distilling%20to%20Locally-Specialized%20Linear%20Policies%20with%20Voronoi%20State%20Partitioning.pdf>
@article{8751fb2b352d4ffcbe5d6817642aca8c,
title = "Explainable RL Policies by Distilling to Locally-Specialized Linear Policies with Voronoi State Partitioning",
abstract = "Deep Reinforcement Learning is one of the state-of-the-art methods for producing near-optimal system controllers. However, deep RL algorithms train a deep neural network, which lacks transparency, posing challenges when the controller has to meet regulations or foster trust. To alleviate this, one could transfer the learned behaviour into a model that is human-readable by design using knowledge distillation. Often this is done with a single model, which mimics the original model on average but can struggle in more dynamic situations. A key challenge is that this simpler model should strike the right balance between flexibility and complexity, or, equivalently, between bias and accuracy. We propose a new model-agnostic method to divide the state space into regions in which a simplified, human-understandable model can operate. In this paper, we use Voronoi partitioning to find regions where linear models can achieve performance similar to the original controller. We evaluate our approach on a gridworld environment and a classic control task. We observe that our proposed distillation to locally-specialized linear models produces explainable policies, and show that the distilled policies match or even slightly outperform the black-box policy they are distilled from.",
keywords = "Reinforcement Learning, Explainable AI",
author = "Senne Deproost and Denis Steckelmacher and Ann Nowe",
year = "2025",
month = nov,
day = "19",
language = "English",
pages = "1--21",
journal = "Proceedings of the Benelux Conference on Artificial Intelligence",
issn = "1568-7805",
}