Development of a reinforcement learning-based agent for Simultaneous Localization and Mapping (SLAM) ■
In classical Simultaneous Localization and Mapping (SLAM) pipelines, a multi-sensor state estimation system typically assigns each sensor (for example, camera and lidar) a fixed covariance (or noise model) chosen a priori and manually tuned, which can be suboptimal or brittle when conditions change. The goal of this master thesis is to develop a reinforcement learning-based agent that instead learns these covariances from data, so that the system can adaptively and dynamically adjust the weighting of each sensor during fusion, improving robustness and accuracy across varying environments and motion regimes.
The student will design and implement an offline reinforcement learning agent whose policy is represented by a neural network outputting covariance matrices (or a suitable parameterization of them) as its actions.
This includes: extracting and structuring trajectories from existing rosbag recordings, defining the RL environment (state representation, action space, reward tied to estimation quality), training the agent in Python using standard deep RL libraries, and evaluating the learned covariances by benchmarking them using a multi-sensor fusion algorithm and analysing the resulting estimation performance.
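One practical concern in the setup above is that a raw network output is not automatically a valid covariance matrix. A common remedy, shown here as a minimal sketch (the function name and action layout are illustrative, not part of the thesis specification), is to let the agent emit the entries of a Cholesky factor, which guarantees a symmetric positive-definite result by construction:

```python
import numpy as np

def action_to_covariance(action, dim):
    """Map an unconstrained action vector to a valid covariance matrix.

    The agent outputs dim*(dim+1)/2 real numbers; we fill the lower
    triangle of a Cholesky factor L, exponentiate the diagonal so it
    stays strictly positive, and return Sigma = L @ L.T, which is
    symmetric positive definite by construction.
    """
    L = np.zeros((dim, dim))
    rows, cols = np.tril_indices(dim)
    L[rows, cols] = action
    L[np.diag_indices(dim)] = np.exp(np.diag(L))
    return L @ L.T

# Example: a 3x3 measurement covariance from a 6-dimensional action
action = np.array([0.1, -0.3, 0.2, 0.05, -0.1, 0.0])
Sigma = action_to_covariance(action, dim=3)
assert np.allclose(Sigma, Sigma.T)            # symmetric
assert np.all(np.linalg.eigvalsh(Sigma) > 0)  # positive definite
```

The same parameterization works as the output head of a PyTorch or TensorFlow policy network; only the mapping from network output to matrix matters here.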
Framework of the Thesis ■
The thesis will start with a literature review on multi-sensor fusion, uncertainty modelling (covariance design, covariance intersection, adaptive covariance estimation) and reinforcement learning methods applied to data fusion and control in robotics.
Next, the student will define the complete experimental framework: preparation of internal datasets (rosbag parsing and preprocessing), formal definition of the RL environment and reward, implementation of the training pipeline, and integration of the learned covariance model into an existing estimation or SLAM framework.

In the final phase, the student will conduct extensive experimental validation, comparing different network architectures and reward formulations, and will perform a detailed quantitative analysis of how the learned covariances impact the robustness and accuracy of the overall multi-sensor fusion system.
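A reward "tied to estimation quality" can take many forms; one simple baseline, sketched below under the assumption that ground-truth trajectories are available from the recordings (function names are illustrative), is the negative root-mean-square translational error of the fused trajectory:

```python
import numpy as np

def trajectory_rmse(estimated, ground_truth):
    """Root-mean-square translational error between two time-aligned
    trajectories, each of shape (N, 3)."""
    errors = np.linalg.norm(estimated - ground_truth, axis=1)
    return float(np.sqrt(np.mean(errors ** 2)))

def reward(estimated, ground_truth):
    """Higher reward for lower trajectory error."""
    return -trajectory_rmse(estimated, ground_truth)

# Toy example: a constant 1 cm offset yields an RMSE of 0.01 m
gt = np.zeros((100, 3))
est = gt + np.array([0.01, 0.0, 0.0])
assert abs(trajectory_rmse(est, gt) - 0.01) < 1e-12
assert reward(est, gt) < 0.0
```

Richer formulations (e.g. penalizing rotational error or estimator inconsistency) are natural variants to compare during the validation phase.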
Expected Student Profile ■
The ideal candidate has a solid background in robotics, control or signal processing, with good knowledge of probability and statistics (Gaussian models, covariance, Bayesian estimation), and strong foundations in computer vision (e.g., image processing, feature extraction, deep learning-based vision methods).
Strong Python programming skills are required; prior experience with PyTorch or TensorFlow and basic notions of reinforcement learning are highly desirable.