Single image reflection removal using non-linearly synthesized glass images and semantic context

Byeong-Ju Han
UNIST

Jae-Young Sim
UNIST

Abstract

An image captured through a glass plane usually contains both a target scene transmitted from behind the glass and a scene reflected from in front of it. We propose a semantic-context-based network to remove reflection artifacts from a single glass image. We first investigate a non-linear intensity mapping relationship for glass images to synthesize more realistic training sets. Then we devise an efficient reflection removal network using multi-scale generators and an interpreter, where the semantic context of the transmission image is adopted as a high-level cue for the interpreter to guide the generators. We also provide a new test data set of real glass images that includes the ground truth transmission and reflection images. Experiments are performed on four test data sets, and we show that the proposed algorithm decomposes an input glass image into a transmission image and a reflection image more faithfully than four existing state-of-the-art methods.
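The core idea of synthesizing training data is to compose a glass image from a transmission image and a reflection image. A naive approach blends the two linearly; the abstract notes that a non-linear intensity mapping produces more realistic glass images. The sketch below is an illustrative assumption, not the paper's actual model: it blends in approximately linear light by undoing display gamma first, so the composite differs from a plain pixel-wise linear mix. The function name, parameters, and the specific gamma mapping are all hypothetical.

```python
import numpy as np

def synthesize_glass_image(transmission, reflection, alpha=0.7, gamma=2.2):
    """Illustrative sketch (not the paper's model): compose a glass image
    from a transmission image T and a reflection image R, both in [0, 1].

    A purely linear model would compute I = alpha*T + (1 - alpha)*R.
    Here we instead blend after undoing display gamma, one simple way a
    non-linear intensity mapping can enter the synthesis; alpha and gamma
    are assumed parameters chosen for illustration.
    """
    # Undo gamma encoding so the blend happens in (approximately) linear light.
    t_lin = np.power(transmission.astype(np.float64), gamma)
    r_lin = np.power(reflection.astype(np.float64), gamma)
    blended = alpha * t_lin + (1.0 - alpha) * r_lin
    # Re-apply gamma encoding and clip to the valid intensity range.
    return np.clip(np.power(blended, 1.0 / gamma), 0.0, 1.0)

# Tiny synthetic example: uniform transmission and reflection patches.
T = np.full((2, 2), 0.8)
R = np.full((2, 2), 0.3)
glass = synthesize_glass_image(T, R)
```

Because the blend is performed in linearized intensity, the resulting composite is brighter than the corresponding linear pixel-space mix, loosely mimicking how reflections appear on real glass.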

Experimental Results

The effect of non-linearly synthesized training images. (a) Glass images. (b) Results of Network I. (c) Results of the proposed algorithm. (d) Ground truth images. In (b)∼(d), the left and right images are the restored transmission and reflection images, respectively.

Qualitative comparison on our test set. (a) Input glass images. (b) Ground truth transmission (left) and reflection (right) images. The pairs of a transmission image (left) and a reflection image (right) restored by (c) DFR [14], (d) CEILNet [16], (e) PRRNet [18], (f) BDNet [19], and (g) the proposed network.

Publication

Byeong-Ju Han and Jae-Young Sim, “Single image reflection removal using non-linearly synthesized glass images and semantic context,” IEEE Access, vol. 7, no. 1, pp. 170796-170806, Nov. 2019.