Synthesis of Shaking Video Using Motion Capture Data and Dynamic 3D Scene Modeling
Host Publication: 25th IEEE International Conference on Image Processing (ICIP)
Authors: S. Lu, B. Ceulemans, M. Wang and A. Munteanu
Publication Date: May 2018
Number of Pages: 5
Important video processing methods such as video stabilization and deblurring often lack ground-truth data, which poses a great challenge for their development and parameter tuning. Synthetically shaken video is therefore highly useful for generating well-defined ground-truth datasets. Existing shaking video synthesis methods simulate shaky camera motion by performing 2D view warping on a single 2D video, which does not always correspond to realistic 3D motion. In this paper, we introduce a novel shaking video synthesis approach. The proposed framework constructs the camera motion trajectory from human motion information captured in the real world. Moreover, we render the shaken video from man-made dynamic 3D scenes with detailed camera pose information. Our approach provides both accurate 2D visual content and the camera motion trajectory in the 3D scene, which allows for evaluating both the visual distortion and the offsets of the recovered camera trajectory. The proposed shaking video synthesis method will benefit and ease future research on 3D-aware video stabilization.
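The abstract contrasts 2D view-warping baselines with the proposed mocap-driven pipeline, and notes that a known camera trajectory enables measuring the offsets of a recovered trajectory. As an illustration only (the paper's actual method is not reproduced here), the sketch below generates a hypothetical shaky 2D camera trajectory as a smooth pan plus low-pass-filtered jitter, and scores a recovered trajectory against that ground truth; all function names and parameters are assumptions for the example.

```python
import math
import random

def synth_shake_trajectory(n_frames, jitter_px=4.0, smooth_window=5, seed=0):
    """Hypothetical shaky 2D camera trajectory for view warping.

    Each frame's offset is a smooth intended pan plus Gaussian
    jitter smoothed by a moving average, mimicking the kind of
    2D shake baselines the abstract refers to. Returns a list of
    (dx, dy) pixel offsets that serve as the ground truth.
    """
    rng = random.Random(seed)
    raw = [(rng.gauss(0.0, jitter_px), rng.gauss(0.0, jitter_px))
           for _ in range(n_frames + smooth_window)]
    traj = []
    for i in range(n_frames):
        window = raw[i:i + smooth_window]
        jx = sum(p[0] for p in window) / smooth_window
        jy = sum(p[1] for p in window) / smooth_window
        pan_x = 0.5 * i  # smooth intended camera motion (horizontal pan)
        traj.append((pan_x + jx, jy))
    return traj

def trajectory_offset_rmse(recovered, ground_truth):
    """Root-mean-square offset between a recovered trajectory and
    the known ground truth -- one simple way to score a stabilizer
    when synthetic data provides the true camera path."""
    assert len(recovered) == len(ground_truth)
    sq = [(rx - gx) ** 2 + (ry - gy) ** 2
          for (rx, ry), (gx, gy) in zip(recovered, ground_truth)]
    return math.sqrt(sum(sq) / len(sq))
```

In such a setup, a stabilization method would estimate the camera path from the warped frames, and `trajectory_offset_rmse` would quantify how far the estimate deviates from the synthetic ground truth.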