A Time-of-Flight (ToF) camera acquires 3D information about a scene, which can be represented as a depth map or re-projected into a point cloud. Any ToF camera suffers from a significant amount of noise, which is non-stationary and intensifies when an insufficient amount of infrared light returns to the sensor. One of the main goals of this project is to investigate novel methodologies for ToF denoising in single- and multi-camera ToF systems. The first part of the project focuses on deep-learning methods for depth-image denoising, addressing the different types of noise encountered in ToF imaging. The second part focuses on multiview ToF camera systems, addressing the calibration problem and proposing novel methodologies for the fusion and denoising of 3D point clouds in multi-camera ToF systems. Such systems will enable the capture of dynamic colored point clouds and will require novel denoising methods for this type of data. The last part of the project focuses on the design of novel deep-learning-based methods for 3D object recognition. The proposed multiview ToF camera paradigm and the methods devised during the project are key to enabling the acquisition of high-quality geometry of dynamic 3D scenes. The application domains are numerous, including depth-camera manufacturing, 3D scanning for multimedia and medical applications, object tracking and surveillance, and 3D printing, to name a few.
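The re-projection from a depth map to a point cloud mentioned above can be sketched with a standard pinhole camera model. This is a minimal illustration, not the project's own pipeline; the intrinsic parameters `fx`, `fy`, `cx`, `cy` are hypothetical and would in practice come from camera calibration.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Re-project a depth map (in metres) into an Nx3 point cloud
    using a pinhole model; pixels with zero depth are dropped."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx   # back-project along the camera x axis
    y = (v - cy) * z / fy   # back-project along the camera y axis
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]

# Toy example: a flat wall 2 m away seen by a 4x4 sensor
depth = np.full((4, 4), 2.0)
cloud = depth_to_point_cloud(depth, fx=2.0, fy=2.0, cx=1.5, cy=1.5)
print(cloud.shape)  # (16, 3)
```

Each valid pixel becomes one 3D point, so denoising can be applied either in the 2D depth-map domain or on the resulting point cloud.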
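The amplitude-dependent character of ToF noise can be illustrated with a simple first-order model in which the standard deviation of the range error scales inversely with the returned infrared amplitude. This is a hedged sketch of that qualitative behaviour, not the noise model used in the project; the scale `sigma0` is an assumed parameter.

```python
import numpy as np

def simulate_tof_noise(depth, amplitude, sigma0=0.01, seed=None):
    """Add zero-mean Gaussian noise to a depth map, with per-pixel
    standard deviation sigma0 / amplitude -- a common first-order
    approximation of ToF range noise (sigma0 is an assumed scale)."""
    rng = np.random.default_rng(seed)
    sigma = sigma0 / np.clip(amplitude, 1e-6, None)  # weak return -> strong noise
    return depth + rng.normal(0.0, 1.0, depth.shape) * sigma

# A weakly reflecting surface yields far noisier depth than a bright one
depth = np.full((100, 100), 2.0)
bright = simulate_tof_noise(depth, amplitude=np.full_like(depth, 1.0), seed=0)
dim = simulate_tof_noise(depth, amplitude=np.full_like(depth, 0.1), seed=0)
print(np.std(bright - depth) < np.std(dim - depth))  # True
```

Under this model the noise is non-stationary across the image, which is precisely what makes a single global filter insufficient and motivates learned, noise-aware denoising.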