Project Details
Project description 

A major goal of this project is to devise sparse representations that enable efficient storage and processing of point cloud data. To cope with the noise introduced during acquisition, a first major problem to be solved is accurate multimodal depth fusion, in which depth maps acquired with time-of-flight sensors are combined with depth maps computed by multi-view stereo techniques using light field camera arrays. The aim is to obtain, at each camera location, a fused depth map that is considerably more accurate and robust than single-modality depth estimates or measurements.
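As an illustration only, per-pixel fusion of two depth maps can be sketched as a confidence-weighted average; the function name, the confidence maps, and the weighting scheme below are assumptions for the sketch, not the project's actual fusion method:

```python
import numpy as np

def fuse_depth_maps(d_tof, c_tof, d_stereo, c_stereo, eps=1e-8):
    """Confidence-weighted per-pixel fusion of two depth maps.

    d_tof, d_stereo : depth maps (same shape), e.g. ToF and multi-view stereo.
    c_tof, c_stereo : non-negative per-pixel confidence maps.
    """
    w = c_tof + c_stereo
    fused = (c_tof * d_tof + c_stereo * d_stereo) / np.maximum(w, eps)
    # Pixels where neither modality is confident are marked invalid (NaN).
    fused[w < eps] = np.nan
    return fused
```

A more elaborate scheme would derive the confidences from sensor noise models, but the weighted-average structure stays the same.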
The second problem concerns the quantity and quality of points in the captured point clouds. In this project, we will devise new techniques to represent 3D objects using an accurate point cloud and an efficient surface reconstruction based on pre-computed atoms. This research aims to find surface reconstruction primitives beyond the simple circles used in splat-based representations, and to use them to define an efficient representation for dynamic point clouds. In summary, another major goal is thus to find efficient representations for dynamic point clouds and to generate highly accurate 3D reconstructions from them.
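For context, the circular primitive of a classical splat representation is an oriented disk fitted to a local point neighborhood; the sketch below (function name and PCA-based plane fitting are illustrative assumptions, not the project's method) shows how such a splat can be computed:

```python
import numpy as np

def fit_splat(points):
    """Fit an oriented circular splat (center, normal, radius) to a
    local neighborhood of 3D points, given as an (n, 3) array."""
    center = points.mean(axis=0)
    centered = points - center
    # The splat normal is the direction of least variance: the eigenvector
    # of the covariance matrix with the smallest eigenvalue (PCA).
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    normal = eigvecs[:, 0]
    # Radius chosen so the disk covers the whole neighborhood.
    radius = np.linalg.norm(centered, axis=1).max()
    return center, normal, radius
```

Richer primitives would replace the flat disk with a more expressive local surface patch while keeping the same fit-per-neighborhood structure.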
The project is expected to impact a wide range of applications, such as interactive free-viewpoint television, 3D display systems, 3D content management, compression, 3D video production, and 3D graphics.

Runtime: 2018 - 2019