Gaoyuan Liu, Joris De Winter, Denis Steckelmacher, Roshan Kumar Hota, Ann Nowe, Bram Vanderborght
Robotic manipulation in cluttered environments requires synergistic planning among prehensile and non-prehensile actions. Previous works on sampling-based Task and Motion Planning (TAMP) algorithms, e.g., PDDLStream, provide a fast and generalizable solution for multi-modal manipulation. However, they are likely to fail in cluttered scenarios where no collision-free grasping approach can be sampled without preliminary manipulations. To extend the ability of sampling-based algorithms, we integrate a vision-based Reinforcement Learning (RL) non-prehensile procedure, the pusher. The pushing actions generated by the pusher can resolve interlocked situations and make the grasping problem solvable. Moreover, the sampling-based algorithm evaluates the pushing actions by providing rewards during training, so the pusher learns to avoid situations that lead to irreversible failures. The proposed hybrid planning method is validated on a cluttered bin-picking problem and implemented in both simulation and the real world. Results show that the pusher can effectively improve the success ratio of the previous sampling-based algorithm, while the sampling-based algorithm can help the pusher learn pushing skills.
Liu, G, De Winter, J, Steckelmacher, D, Hota, RK, Nowe, A & Vanderborght, B 2023, 'Synergistic Task and Motion Planning With Reinforcement Learning-Based Non-Prehensile Actions', IEEE Robotics and Automation Letters, vol. 8, no. 5, pp. 2764-2771. https://doi.org/10.1109/LRA.2023.3261708
Liu, G., De Winter, J., Steckelmacher, D., Hota, R. K., Nowe, A., & Vanderborght, B. (2023). Synergistic Task and Motion Planning With Reinforcement Learning-Based Non-Prehensile Actions. IEEE Robotics and Automation Letters, 8(5), 2764-2771. https://doi.org/10.1109/LRA.2023.3261708
@article{5f77e0cacdaf4889a0777bbad12d6bac,
title = "Synergistic Task and Motion Planning With Reinforcement Learning-Based Non-Prehensile Actions",
abstract = "Robotic manipulation in cluttered environments requires synergistic planning among prehensile and non-prehensile actions. Previous works on sampling-based Task and Motion Planning (TAMP) algorithms, e.g., PDDLStream, provide a fast and generalizable solution for multi-modal manipulation. However, they are likely to fail in cluttered scenarios where no collision-free grasping approach can be sampled without preliminary manipulations. To extend the ability of sampling-based algorithms, we integrate a vision-based Reinforcement Learning (RL) non-prehensile procedure, the pusher. The pushing actions generated by the pusher can resolve interlocked situations and make the grasping problem solvable. Moreover, the sampling-based algorithm evaluates the pushing actions by providing rewards during training, so the pusher learns to avoid situations that lead to irreversible failures. The proposed hybrid planning method is validated on a cluttered bin-picking problem and implemented in both simulation and the real world. Results show that the pusher can effectively improve the success ratio of the previous sampling-based algorithm, while the sampling-based algorithm can help the pusher learn pushing skills.",
keywords = "Task and Motion Planning, Reinforcement Learning, Manipulation Planning",
author = "Gaoyuan Liu and {De Winter}, Joris and Denis Steckelmacher and Hota, {Roshan Kumar} and Ann Nowe and Bram Vanderborght",
note = "Publisher Copyright: {\textcopyright} 2023 IEEE.",
year = "2023",
month = may,
doi = "10.1109/LRA.2023.3261708",
language = "English",
volume = "8",
pages = "2764--2771",
journal = "IEEE Robotics and Automation Letters",
issn = "2377-3766",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
number = "5",
}