Jeroen Willems, Denis Steckelmacher, Woulte Schoulte, Bruno Depraetere, Edward Kikken, Abdellatif Bey-Temsamani, Ann Nowe
Optimal control of complex systems often requires access to a high-fidelity model and information about the (future) external stimuli applied to the system (load, demand, …). An example of such a system is a cooling network, in which one or more chillers provide cooled liquid to a set of users with variable demand. In this paper, we propose a Reinforcement Learning (RL) method for such a system with three chillers. It does not assume any model, and it observes neither the future cooling demand nor approximations of it. Still, we show that, after a training phase in a simulator, the learned controller outperforms classical rule-based controllers and performs similarly to a model predictive controller that does rely on a model and demand predictions. We show that the RL algorithm has implicitly learned how to anticipate, without requiring explicit predictions. This demonstrates that RL can produce high-quality controllers in challenging industrial contexts.
Willems, J, Steckelmacher, D, Schoulte, W, Depraetere, B, Kikken, E, Bey-Temsamani, A & Nowe, A 2025, Reinforcement Learning for Model-Free Control of a Cooling Network with Uncertain Future Demands. in Proceedings of the 22nd International Conference on Informatics in Control, Automation and Robotics. ICINCO edn, vol. 1, Proceedings of the International Conference on Informatics in Control, Automation and Robotics, Scitepress, pp. 59-70. https://doi.org/10.5220/0013708300003982
Willems, J., Steckelmacher, D., Schoulte, W., Depraetere, B., Kikken, E., Bey-Temsamani, A., & Nowe, A. (2025). Reinforcement Learning for Model-Free Control of a Cooling Network with Uncertain Future Demands. In Proceedings of the 22nd International Conference on Informatics in Control, Automation and Robotics (ICINCO ed., Vol. 1, pp. 59-70). (Proceedings of the International Conference on Informatics in Control, Automation and Robotics). Scitepress. https://doi.org/10.5220/0013708300003982
@inproceedings{6168b93c4b66438a9ea1eb26c9c2dc67,
title = "Reinforcement Learning for Model-Free Control of a Cooling Network with Uncertain Future Demands",
abstract = "Optimal control of complex systems often requires access to a high-fidelity model and information about the (future) external stimuli applied to the system (load, demand, …). An example of such a system is a cooling network, in which one or more chillers provide cooled liquid to a set of users with variable demand. In this paper, we propose a Reinforcement Learning (RL) method for such a system with three chillers. It does not assume any model, and it observes neither the future cooling demand nor approximations of it. Still, we show that, after a training phase in a simulator, the learned controller outperforms classical rule-based controllers and performs similarly to a model predictive controller that does rely on a model and demand predictions. We show that the RL algorithm has implicitly learned how to anticipate, without requiring explicit predictions. This demonstrates that RL can produce high-quality controllers in challenging industrial contexts.",
author = "Jeroen Willems and Denis Steckelmacher and Woulte Schoulte and Bruno Depraetere and Edward Kikken and Abdellatif Bey-Temsamani and Ann Nowe",
year = "2025",
doi = "10.5220/0013708300003982",
language = "English",
volume = "1",
series = "Proceedings of the International Conference on Informatics in Control, Automation and Robotics",
publisher = "Scitepress",
pages = "59--70",
booktitle = "Proceedings of the 22nd International Conference on Informatics in Control, Automation and Robotics",
edition = "ICINCO",
}