On November 14, 2022 at 17:00, Ségolène Rogge will defend her PhD entitled “Depth estimation in multiview light field camera system”.
Everybody is invited to attend the presentation in room D.2.01.
In this research, we went through the main stages of rendering a 3D scene with 6 Degrees of Freedom (6DoF): data acquisition, point cloud or depth map estimation, surface reconstruction, and view synthesis rendering.
We generated several datasets: some computer-generated using Blender, and some captured from real scenery, for which we built two acquisition rigs. The first rig accepts any type of camera – RGB or Time-of-Flight – and can be moved along the X, Y, and Z axes, so the scene can be sampled anywhere within a cubic meter; the second, more rigid, holds a 3-by-3 array of light field cameras.

The acquired data was used to devise depth estimation algorithms based on multi-view stereo matching, improved with deep learning techniques. We also worked on the triangulation of a fast laser dot moving through the scene. We showed that the depth map obtained from a single light field camera can be improved by using neighboring cameras, which increase the available parallax. Since each of the generated depth maps can be reprojected in space to form a point cloud (see the back-projection sketch below), we implemented an improved registration algorithm that merges multiple point clouds while enforcing the 3D structure of the scene.

Finally, we rendered scenes on two different devices: a head-mounted display and a holographic display. To render a point cloud in real time on an Oculus headset, we stored it in an efficient data structure and reduced rendering time by drawing subsamples of the point cloud, selected through level-of-detail based on the distance between the user and the various parts of the scene, combined with frustum culling (see the selection sketch below). Visual comfort was provided by means of splatting techniques that simulate the surfaces of the scene. To render a scene on a holographic screen, we first captured it from different viewpoints with light field cameras, then estimated the depth so that intermediate views could be synthesized and used as input to the display.
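To illustrate the reprojection step, the following minimal sketch back-projects a depth map into a point cloud under a standard pinhole camera model. The function name and the intrinsics `fx, fy, cx, cy` are illustrative assumptions, not code from the thesis.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map into a 3D point cloud (pinhole model).

    depth : (H, W) array of metric depth values; 0 marks invalid pixels.
    fx, fy : focal lengths in pixels; cx, cy : principal point.
    Returns an (N, 3) array of points in the camera frame.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0
    z = depth[valid]
    x = (u[valid] - cx) * z / fx   # invert the pinhole projection
    y = (v[valid] - cy) * z / fy
    return np.stack([x, y, z], axis=-1)
```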
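The registration algorithm itself is not spelled out in this announcement. As a hedged sketch, the snippet below shows only the classical closed-form core that iterative registration methods such as ICP build on: the Kabsch/SVD solution of the orthogonal Procrustes problem, given a set of point correspondences. The thesis's improved, structure-enforcing algorithm goes beyond this.

```python
import numpy as np

def rigid_align(source, target):
    """Least-squares rigid transform (R, t) mapping source onto target.

    source, target : (N, 3) arrays of corresponding points.
    Solves the orthogonal Procrustes problem via SVD (Kabsch algorithm).
    """
    src_c = source.mean(axis=0)
    tgt_c = target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t
```

In a full ICP loop, one would alternate between finding nearest-neighbor correspondences and re-solving this alignment until convergence.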
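For the real-time rendering path, the sketch below shows how a render subset of a point cloud might be selected by combining a crude frustum test with distance-based level-of-detail decimation. It is a simplification under stated assumptions: the cone-shaped frustum test, the band thresholds, and the flat strided decimation stand in for the efficient data structure (e.g. an octree traversal) mentioned above, and all names are hypothetical.

```python
import numpy as np

def select_points(points, cam_pos, view_dir, fov_cos, lod_bands=(2.0, 6.0)):
    """Pick a render subset by frustum culling plus distance-based LOD.

    points   : (N, 3) point positions.
    cam_pos  : (3,) camera position; view_dir : (3,) unit view direction.
    fov_cos  : cosine of the half field-of-view (cone approximation).
    lod_bands: distance thresholds splitting near / mid / far points.
    Keeps all near points, every 4th mid point, every 16th far point.
    """
    rel = points - cam_pos
    dist = np.linalg.norm(rel, axis=1)
    # Crude frustum cull: keep points inside a viewing cone.
    inside = (rel @ view_dir) / np.maximum(dist, 1e-9) > fov_cos
    idx = np.flatnonzero(inside)
    d = dist[idx]
    near, far = lod_bands
    stride = np.where(d <= near, 1, np.where(d <= far, 4, 16))
    keep = (np.arange(idx.size) % stride) == 0
    return idx[keep]   # indices of points to draw this frame
```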
Our work achieved state-of-the-art results, in particular in depth estimation for light field images and in point cloud registration, and was published in various journals and conference proceedings. It also led to multiple contributions to the Moving Picture Experts Group (MPEG).