Publication Details
Quentin Bolsée



Capturing and preprocessing 3D data is an increasingly important field, with advances in 3D displays, virtual reality, and digital fabrication driving the need for high-quality 3D content. Systems capable of acquiring 3D content can be separated into two classes: active and passive.

Time-of-Flight (ToF) cameras are popular active depth sensors, but the geometry they reconstruct suffers from significant noise. In the first half of this thesis, we present a novel two-stage noise removal strategy for multiview ToF systems: the depth maps are first processed by a Convolutional Neural Network (CNN) to remove most of the depth map noise, and then further processed by a dedicated 3D neural network to resolve inter-camera inconsistencies. Results on real data show a systematic improvement over noise removal tools from the literature. We then present a calibration method for accurately estimating the poses of multiple ToF cameras using a 3D calibration object, ensuring proper alignment of the reprojected point clouds over multiple positions within the scene's 3D volume.

In the second half of this thesis, Light Field (LF) systems are presented as passive depth sensors. A new geometric model and calibration method for plenoptic 1.0 devices is introduced, describing distortion effects at the microlens level for the first time; improved 3D reconstruction quality demonstrates the validity of the model. A 3D CNN is then presented for depth estimation from an LF image array, allowing for faster and lighter training than regular CNNs from the literature. Finally, a novel spherical LF acquisition system is presented, capable of placing a camera at arbitrary positions around an object, with promising results in photogrammetry and view synthesis.
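The two-stage structure of the ToF pipeline can be illustrated with a minimal classical stand-in: a per-view filter in place of the denoising CNN, followed by a cross-view consistency fusion in place of the 3D network. This is a hedged sketch, not the thesis's method; all function names, the median filter, and the tolerance parameter are illustrative assumptions.

```python
# Sketch of a two-stage multiview depth cleanup (illustrative stand-in only).
# Stage 1 (per view): 3x3 median filter suppresses per-pixel outliers,
#                     standing in for the denoising CNN.
# Stage 2 (cross view): fuse views by averaging only depths that agree
#                       with the per-pixel median, standing in for the
#                       3D consistency network.

def median_filter(depth, k=1):
    """Median filter on a 2D list of depths; k is the window radius."""
    h, w = len(depth), len(depth[0])
    out = [row[:] for row in depth]
    for y in range(h):
        for x in range(w):
            window = [depth[j][i]
                      for j in range(max(0, y - k), min(h, y + k + 1))
                      for i in range(max(0, x - k), min(w, x + k + 1))]
            window.sort()
            out[y][x] = window[len(window) // 2]
    return out

def fuse_consistent(depth_maps, tol=0.05):
    """Per pixel, average only the views within `tol` of the median view."""
    h, w = len(depth_maps[0]), len(depth_maps[0][0])
    fused = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = sorted(d[y][x] for d in depth_maps)
            med = vals[len(vals) // 2]
            ok = [v for v in vals if abs(v - med) <= tol]
            fused[y][x] = sum(ok) / len(ok)
    return fused

# Toy example: two views of a flat surface at depth 1.0, one with an outlier.
view_a = [[1.0, 1.0, 1.0], [1.0, 5.0, 1.0], [1.0, 1.0, 1.0]]
view_b = [[1.0, 1.0, 1.0], [1.0, 1.0, 1.0], [1.0, 1.0, 1.0]]
stage1 = [median_filter(v) for v in (view_a, view_b)]
fused = fuse_consistent(stage1)
```

In the toy example the outlier is removed in stage 1 and the fused map is uniformly 1.0; the thesis replaces both stages with learned networks, which handle the far more structured noise of real ToF data.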
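For the passive LF side, depth estimation reduces to estimating disparity between sub-aperture views and converting it to depth via the pinhole relation. The following is a minimal sketch using 1D nearest-match block matching between two views; the `focal_px` and `baseline` values are illustrative assumptions, and the thesis instead learns disparity with a 3D CNN over the full image array.

```python
# Sketch: disparity between two 1D scanlines, then depth = f * B / d.
# A classical stand-in for learned LF depth estimation (illustrative only).

def disparity_1d(left, right, max_d=3):
    """Per-pixel integer disparity minimizing absolute intensity difference."""
    disp = []
    for x in range(len(left)):
        best_d, best_cost = 0, float("inf")
        for d in range(max_d + 1):
            if x - d < 0:
                continue  # match would fall outside the right view
            cost = abs(left[x] - right[x - d])
            if cost < best_cost:
                best_d, best_cost = d, cost
        disp.append(best_d)
    return disp

def depth_from_disparity(disp, focal_px=500.0, baseline=0.01):
    """Pinhole stereo relation; zero disparity maps to infinite depth."""
    return [focal_px * baseline / d if d > 0 else float("inf") for d in disp]

# Toy scanlines: `right` is `left` shifted by one pixel (disparity 1).
left = [10, 20, 30, 40]
right = [20, 30, 40, 50]
disp = disparity_1d(left, right)
depth = depth_from_disparity(disp)
```

Here the matcher recovers disparity 1 wherever a valid match exists, giving a depth of 5.0 m with the assumed 500 px focal length and 1 cm baseline.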