Kyu-Yul Lee
UNIST
Jae-Young Sim
UNIST
Abstract
Conventional stitching techniques for images and videos are based on smooth warping models and therefore often fail on multi-view images and videos with large parallax captured by cameras with wide baselines. In this paper, we propose a novel video stitching algorithm for such challenging multi-view videos. We reliably estimate the parameters of the ground-plane homography, the fundamental matrix, and the vertical vanishing points, using both appearance-based and activity-based feature matches validated by geometric constraints. We alleviate parallax artifacts in stitching by adaptively warping off-plane pixels to geometrically accurate matching positions through their ground-plane pixels based on the epipolar geometry. We also jointly exploit inter-view and inter-frame correspondence matching information to estimate the ground-plane pixels reliably, which are then refined by energy minimization. Experimental results show that the proposed algorithm provides geometrically accurate stitching results for multi-view videos with large parallax and outperforms state-of-the-art stitching methods both qualitatively and quantitatively.
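As a rough illustration of the geometric primitives mentioned above, the following Python sketch estimates a ground-plane homography and a fundamental matrix from matched feature points with RANSAC using OpenCV. This is not the authors' implementation; the function and variable names are assumptions for illustration only, and the abstract's activity-based matching, vanishing-point estimation, and energy minimization steps are omitted.

# Hypothetical sketch: estimating two geometric primitives used in
# parallax-aware stitching (ground-plane homography and fundamental matrix)
# from matched feature points with RANSAC. NOT the authors' implementation.
import numpy as np
import cv2

def estimate_geometry(pts_src, pts_dst):
    """pts_src, pts_dst: (N, 2) float32 arrays of matched pixel coordinates."""
    # Ground-plane homography; RANSAC rejects matches that lie off the plane.
    H, h_mask = cv2.findHomography(pts_src, pts_dst, cv2.RANSAC, 3.0)
    # Fundamental matrix over all matches; it encodes the epipolar geometry
    # that constrains where an off-plane pixel can map in the other view.
    F, f_mask = cv2.findFundamentalMat(pts_src, pts_dst, cv2.FM_RANSAC, 1.0, 0.999)
    return H, F

def epipolar_line(F, pt):
    """Epipolar line l' = F x in the destination view for a source pixel pt."""
    x = np.array([pt[0], pt[1], 1.0])
    l = F @ x
    return l / np.linalg.norm(l[:2])  # normalize so (a, b) is a unit normal

In such a sketch, an off-plane pixel would be warped not by H directly but to a position consistent with its epipolar line, which is the kind of constraint the proposed adaptive pixel warping exploits.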
Qualitative Comparison of Stitching Performance
Figure 1: Comparison of video stitching results of the proposed algorithm and the four existing methods: Homography, CPW [34], SPHP [15], and APAP [13]. From top to bottom, “Fountain,” “Tennis,” “Lawn,” “Badminton,” “Square,” “Office,” “Trail,” “Stadium,” “Soccer,” “Street,” “School,” and “Garden” sequences.
Quantitative Comparison of Stitching Performance
Figure 2: Quantitative comparison of the stitching performance of the proposed algorithm with that of the conventional methods. The matching error is measured as the average root mean square error (RMSE) between the warped pixels and their ground-truth corresponding pixels.
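For concreteness, the matching error above can be computed as in the following minimal Python sketch, assuming the warped positions and their ground-truth correspondences are available as (N, 2) coordinate arrays; the array and function names are illustrative, not from the paper.

# Minimal sketch of the matching-error metric: RMSE between warped pixel
# positions and their ground-truth corresponding positions. Array names are
# illustrative; both inputs are assumed to be (N, 2) coordinate arrays.
import numpy as np

def matching_error(warped_pts, gt_pts):
    sq_dists = np.sum((warped_pts - gt_pts) ** 2, axis=1)  # squared error per point
    return float(np.sqrt(np.mean(sq_dists)))               # RMSE over all points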
Supplementary Material
Publication
Kyu-Yul Lee and Jae-Young Sim, “Stitching for Multi-View Videos With Large Parallax Based on Adaptive Pixel Warping,” IEEE Access, May 2018.