There is significant and growing interest in adopting advanced content representations that allow for higher-resolution, richer portrayals of the real world. Currently deployed modalities aim to offer richer experiences: point clouds provide a representation of the scene space, light fields provide a projection of the 3D scene onto the observer plane, and holograms provide a projection using a wave-based light propagation model that requires coherent illumination.

All of these are based on a discrete representation of the 3D scene or its projection, meaning that the representation is incomplete. Consequently, intermediate viewpoints require additional calculations, which is often the case in interactive experiences with photo-realistic content. Unfortunately, this leads to suboptimal image quality because nonlinearities, such as occlusions and edges, are not correctly represented.

In this research project, we aim to design a versatile, compact yet complete plenoptic representation of a unified image modality that facilitates, among other things, efficient extraction of light-field or holographic modalities in the static photographic scenario. Such a representation requires storing a more continuous 5D function, accounting for the observation position and the gaze angle. Research will be centred on investigating suitable representation models, associated coding algorithms, and visual quality evaluation.
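To make the 5D function concrete: it maps an observer position (x, y, z) and a gaze direction (theta, phi) to a radiance value. The sketch below, a minimal assumption-laden illustration and not the project's actual representation, discretizes this function on a regular grid and performs a nearest-neighbour lookup; the grid sizes and sampling scheme are hypothetical, and a practical codec would interpolate between samples and model occlusions explicitly.

```python
import numpy as np

# Hypothetical discretization of the 5D plenoptic function
# L(x, y, z, theta, phi): radiance seen from observer position
# (x, y, z) along gaze direction (theta, phi). Grid resolutions
# are illustrative assumptions only.
NX, NY, NZ, NT, NP = 8, 8, 8, 16, 32
rng = np.random.default_rng(0)
L = rng.random((NX, NY, NZ, NT, NP))  # placeholder radiance samples


def sample_plenoptic(L, x, y, z, theta, phi):
    """Nearest-neighbour lookup of a discretized plenoptic function.

    Positions (x, y, z) are normalized to [0, 1]^3; theta is the
    polar angle in [0, pi]; phi is the azimuth in [0, 2*pi), which
    wraps around. Intermediate viewpoints fall between grid samples,
    which is exactly where a discrete representation loses fidelity.
    """
    nx, ny, nz, nt, np_ = L.shape
    ix = min(int(round(x * (nx - 1))), nx - 1)
    iy = min(int(round(y * (ny - 1))), ny - 1)
    iz = min(int(round(z * (nz - 1))), nz - 1)
    it = min(int(round(theta / np.pi * (nt - 1))), nt - 1)
    ip = int(round(phi / (2 * np.pi) * np_)) % np_  # azimuth wraps
    return L[ix, iy, iz, it, ip]


radiance = sample_plenoptic(L, 0.5, 0.5, 0.5, np.pi / 2, np.pi)
```

Even this toy lookup shows why the project targets a more continuous representation: queries between grid points must be synthesized, and simple interpolation blurs exactly the occlusions and edges mentioned above.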
Runtime: 2021–2024