Publication Details
Overview
 
 
Ségolène Rogge
 

Thesis

Abstract 

In this research, we went through the main stages required to render a 3D scene in 6 Degrees of Freedom: data acquisition, point cloud or depth map estimation, surface reconstruction, and view synthesis rendering.

We generated datasets, some computer-generated using Blender and some captured from real scenery, for which we built two acquisition rigs. On the first one we could mount any type of camera (RGB or Time-of-Flight), which could then be moved in space in the X, Y, or Z direction, thus sampling the scene anywhere within a cubic meter; the second one, more rigid, held a 3-by-3 array of Light Field cameras. The acquired data was used to devise depth estimation algorithms based on multi-view stereo matching, improved with deep learning techniques. We also worked on the triangulation of a fast laser dot moving through the scene. We proved that the depth map resulting from one light field camera can be improved by using neighboring cameras while increasing the parallax. As each of the generated depth maps can be reprojected in space to form a point cloud, we implemented an improved registration algorithm to merge multiple point clouds while enforcing the 3D structure of the scene.

Finally, we rendered scenes on two different devices: a head-mounted display and a holographic display. To render a point cloud in real time within an Oculus headset, we stored it in an efficient data structure and optimized the rendering time by drawing subsamples of the point cloud, selected using level-of-detail based on the distance between the user and the various parts of the scene, together with frustum culling. Visual comfort was provided to the user by means of splatting techniques that simulate the surfaces of the scene. To render a scene on a holographic screen, we first captured it from different viewpoints with light field cameras, then estimated the depth in order to synthesize intermediate views, and used them as input for the display.

Our work achieved state-of-the-art results, in particular in depth estimation for light field images and in point cloud registration, and was published in various journals and conferences. It also led to multiple contributions to the Moving Picture Experts Group.
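As a concrete illustration of the depth-map reprojection mentioned above, here is a minimal back-projection sketch assuming an ideal pinhole camera with hypothetical intrinsics (fx, fy, cx, cy) and no lens distortion; the thesis's actual camera models are not detailed in this abstract:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a metric depth map (H x W) into an N x 3 point
    cloud expressed in the camera coordinate frame."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx  # inverse pinhole projection
    y = (v - cy) * depth / fy
    points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # keep only pixels with valid depth
```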
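The improved registration algorithm itself is not described in this abstract; for orientation only, the sketch below shows one iteration of classical point-to-point ICP (nearest-neighbor matching plus a Kabsch/SVD rigid fit), a common baseline that such registration methods build on:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(source, target):
    """One point-to-point ICP iteration: pair each source point with
    its nearest target point, then fit the rigid transform (R, t)
    minimizing squared pair distances (Kabsch algorithm via SVD)."""
    matches = target[cKDTree(target).query(source)[1]]
    src_c, tgt_c = source.mean(axis=0), matches.mean(axis=0)
    H = (source - src_c).T @ (matches - tgt_c)  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = tgt_c - R @ src_c
    return source @ R.T + t, R, t  # aligned source, rotation, translation
```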

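Likewise, the level-of-detail and frustum-culling selection described for the head-mounted display could look roughly like the following sketch, assuming an octree-like set of nodes with precomputed subsample levels (node layout, thresholds, and names here are hypothetical, not the thesis's actual data structure):

```python
import numpy as np

def select_points(nodes, cam_pos, frustum_planes, lod_step=2.0):
    """Choose which points to draw this frame: discard nodes outside
    the view frustum, then pick a coarser subsample for distant nodes.

    nodes: dicts with 'center' (3,), 'radius' (float), and 'lods',
           a list of point arrays ordered from fine to coarse.
    frustum_planes: (normal, offset) pairs; a point p is inside the
           half-space when normal @ p + offset >= 0.
    """
    drawn = []
    for node in nodes:
        c, r = node["center"], node["radius"]
        # Frustum culling: skip the node if its bounding sphere lies
        # entirely behind any of the frustum planes.
        if any(n @ c + d < -r for n, d in frustum_planes):
            continue
        # Level-of-detail: the farther the node, the coarser the level.
        dist = np.linalg.norm(c - cam_pos)
        level = min(int(dist // lod_step), len(node["lods"]) - 1)
        drawn.append(node["lods"][level])
    return np.concatenate(drawn) if drawn else np.empty((0, 3))
```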