Multi-view Video Stitching


Conventional stitching techniques for images and videos are based on planar warping models and therefore often fail on multi-view images and videos that exhibit large parallax and contain diverse non-planar structures at different scene depths. In this paper, we propose a novel video stitching algorithm for such challenging multi-view videos. We first separate multiple foreground objects from the background and warp them adaptively based on the epipolar geometry. We reliably estimate the ground plane homography, the fundamental matrix, and the vertical vanishing points, using both appearance-based and activity-based feature matches validated by geometric constraints. While the ground plane pixels are warped by the homography, we adaptively warp the off-plane pixels, via their corresponding ground plane pixels, to geometrically accurate matching positions to alleviate parallax artifacts. We also jointly exploit inter-view and inter-frame correspondence information to estimate the ground plane pixels, which are then refined by energy minimization. Experimental results show that the proposed algorithm provides geometrically accurate stitching results on multi-view videos with large parallax and outperforms state-of-the-art image stitching methods both qualitatively and quantitatively.
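The off-plane warping described above is consistent with the standard plane-plus-parallax relation from two-view geometry. The sketch below uses generic notation (H for the ground plane homography, F for the fundamental matrix, e' for the epipole in the second view, and a parallax magnitude gamma); it illustrates the underlying geometry rather than reproducing the paper's exact formulation.

\[
\mathbf{x}' \simeq H\,\mathbf{x} + \gamma\,\mathbf{e}', \qquad \mathbf{x}'^{\top} F\,\mathbf{x} = 0,
\]

where \(\mathbf{x}\) and \(\mathbf{x}'\) are homogeneous coordinates of matching pixels in the two views. Points on the ground plane satisfy \(\mathbf{x}' \simeq H\,\mathbf{x}\) (i.e., \(\gamma = 0\)), while an off-plane point is displaced from its plane-induced position \(H\,\mathbf{x}\) along the epipolar line toward the epipole \(\mathbf{e}'\), with \(\gamma\) growing with the point's height above the plane and shrinking with its scene depth. Warping off-plane pixels through their ground plane pixels thus keeps the warped positions on the correct epipolar lines, which is what allows the parallax artifacts to be alleviated.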