Automatic extrinsic calibration of camera networks based on pedestrians
Host Publication: International Conference on Distributed Smart Cameras
Authors: A. Minh Truong, W. Philips, J. Guan, N. Deligiannis and L. Abrahamyan
Publication Year: 2019
Number of Pages: 6
Extrinsic camera calibration is essential for any computer vision task in a camera network. Usually, researchers place calibration objects in the scene to calibrate the cameras. However, when installing cameras in the field, this approach can be costly and impractical, especially when recalibration is needed. This paper proposes a novel, accurate, and fully automatic extrinsic calibration framework for camera networks with partially overlapping views. It is based on the analysis of pedestrian tracks and does not require calibration objects. Compared to the state of the art, the new method is fully automatic and robust. Our method detects human poses in the camera images and then models walking persons as vertical sticks. We propose a brute-force method to determine the correspondence between persons in multiple camera images. This information, along with the estimated 3D locations of the head and feet of the pedestrians, is then used to compute the extrinsic calibration matrices. We verified the robustness of the method in different camera setups and for both single and multiple walking people. The results show that a triangulation error of a few centimeters can be obtained. Typically, the method requires 40 seconds of viewing walking people to reach this accuracy in controlled environments and a few minutes in uncontrolled environments. It also automatically computes the relative extrinsic parameters connecting the coordinate systems of the cameras in a pairwise fashion. The proposed method performs well in various situations, such as multi-person scenes, occlusions, and even real intersections on the street.
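As a minimal sketch of the brute-force cross-view correspondence idea described above (not the authors' implementation): assuming the projection matrices of two cameras are known, every candidate assignment of persons between the two views can be scored by triangulating the matched detections and summing the reprojection errors, keeping the assignment with the lowest total cost. The helper names (`triangulate`, `brute_force_match`) and the linear DLT triangulation are illustrative assumptions.

```python
import itertools
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two views."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

def reproj_error(P, X, x):
    """Pixel distance between the projection of X and the observation x."""
    proj = P @ np.append(X, 1.0)
    return np.linalg.norm(proj[:2] / proj[2] - x)

def brute_force_match(P1, P2, pts1, pts2):
    """Try all assignments of persons in view 2 to persons in view 1;
    return the permutation with the smallest total reprojection error."""
    n = len(pts1)
    best_cost, best_perm = np.inf, None
    for perm in itertools.permutations(range(n)):
        cost = 0.0
        for i, j in enumerate(perm):
            X = triangulate(P1, P2, pts1[i], pts2[j])
            cost += reproj_error(P1, X, pts1[i]) + reproj_error(P2, X, pts2[j])
        if cost < best_cost:
            best_cost, best_perm = cost, perm
    return best_perm, best_cost
```

In the paper's setting the matched points would be the head and feet of each vertical stick; the exhaustive search is feasible because the number of simultaneously visible pedestrians is small.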