Under unfavorable conditions, fused images of infrared and visible images often lack edge contrast and detail. To address this issue, we propose an edge-oriented unrolling network comprising a feature extraction network and a feature fusion network. In our approach, the original infrared/visible image pair is combined with its separately enhanced versions as the network input to provide richer prior information. First, the feature extraction network consists of four independent iterative edge-oriented unrolling feature extraction networks built on the edge-oriented deep unrolling residual module (EURM), in which the convolutions are replaced with edge-oriented convolution blocks to enhance edge features. Then, a convolutional feature fusion network with a differential structure is proposed to obtain the final fusion result, using concatenation to map the multidimensional features. In addition, the loss function of the fusion network is optimized to balance multiple features with significant differences and achieve a better visual effect. Experimental results on multiple datasets demonstrate that the proposed method produces competitive fusion images under both subjective and objective evaluation, with balanced luminance, sharper edges, and better detail.
Keywords: Image fusion, image enhancement, infrared imaging, infrared radiation, visible radiation, feature extraction, feature fusion
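The abstract does not give the exact layer settings, so the following PyTorch-style sketch is only illustrative of the described pipeline (enhanced and original IR/visible inputs, edge-oriented convolution blocks inside unrolled residual modules, and concatenation-based fusion). The class names (EdgeOrientedConv, EURM, FusionNet), the Sobel-based edge branch, the channel widths, and the number of unrolled steps are assumptions, not the authors' implementation.

```python
# Illustrative sketch only: module names, Sobel edge branch, channel widths, and
# step counts are assumptions; they are not the paper's actual implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgeOrientedConv(nn.Module):
    """Learnable 3x3 conv combined with a fixed Sobel branch to emphasize edges."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        kernel = torch.stack([sobel_x, sobel_x.t()]).unsqueeze(1)  # (2, 1, 3, 3)
        self.register_buffer("sobel", kernel)
        self.mix = nn.Conv2d(3 * channels, channels, 1)  # fuse learned + edge responses

    def forward(self, x):
        c = x.shape[1]
        # Depthwise Sobel filtering per channel (2 edge maps per channel).
        edges = F.conv2d(x, self.sobel.repeat(c, 1, 1, 1), padding=1, groups=c)
        return self.mix(torch.cat([self.conv(x), edges], dim=1))

class EURM(nn.Module):
    """Edge-oriented unrolling residual module: a few unrolled residual steps."""
    def __init__(self, channels, steps=4):
        super().__init__()
        self.steps = nn.ModuleList(
            nn.Sequential(EdgeOrientedConv(channels), nn.ReLU(inplace=True),
                          EdgeOrientedConv(channels))
            for _ in range(steps)
        )

    def forward(self, x):
        for step in self.steps:
            x = x + step(x)  # residual update at each unrolled iteration
        return x

class FusionNet(nn.Module):
    """Four independent extractors (IR, enhanced IR, VIS, enhanced VIS) + concat fusion."""
    def __init__(self, channels=32):
        super().__init__()
        self.heads = nn.ModuleList(nn.Conv2d(1, channels, 3, padding=1) for _ in range(4))
        self.extractors = nn.ModuleList(EURM(channels) for _ in range(4))
        self.fuse = nn.Sequential(
            nn.Conv2d(4 * channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, ir, ir_enh, vis, vis_enh):
        feats = [ext(head(x)) for head, ext, x in
                 zip(self.heads, self.extractors, (ir, ir_enh, vis, vis_enh))]
        return self.fuse(torch.cat(feats, dim=1))

if __name__ == "__main__":
    ir = vis = torch.rand(1, 1, 64, 64)
    fused = FusionNet()(ir, ir, vis, vis)
    print(fused.shape)  # torch.Size([1, 1, 64, 64])
```

In this sketch the fixed Sobel branch stands in for the abstract's "edge-oriented convolution blocks" and the loss-function balancing described in the abstract is omitted; both would need the paper's details to reproduce faithfully.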