Paper
2 November 2022
Deep visible and thermal image fusion for enhancement visibility for surveillance application
Abstract
Additional sources of information (such as depth and thermal sensors) make it possible to obtain more informative features and thus increase the reliability and stability of recognition. In this research, we focus on multi-level deep fusion of visible and thermal information. We present an algorithm that combines information from visible cameras and thermal sensors based on deep learning and a parameterized model of logarithmic image processing (PLIP). The proposed neural network is based on the principle of an autoencoder: an encoder extracts image features, and the fused image is obtained by a decoding network. The encoder consists of a convolutional layer and a dense block, which itself consists of convolutional layers. Images are fused before decoding by a fusion layer operating on the principle of PLIP, which is close to the perception of the human visual system. The fusion approach is applied to a surveillance application. Experimental results show the effectiveness of the proposed algorithm.
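A minimal sketch of the pipeline the abstract describes is given below: a shared encoder (one convolutional layer followed by a dense block) extracts features from the visible and thermal frames, a PLIP-style operation fuses the two feature maps, and a decoding network reconstructs the fused image. The layer sizes, dense-block depth, and the exact PLIP parameterization are illustrative assumptions, not the authors' published model.

# Sketch (PyTorch) of an autoencoder-based visible/thermal fusion network
# with a PLIP-style fusion layer. All hyperparameters are assumptions.
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Dense block: each conv layer receives all previous feature maps."""
    def __init__(self, in_ch, growth=16, layers=3):
        super().__init__()
        self.convs = nn.ModuleList()
        ch = in_ch
        for _ in range(layers):
            self.convs.append(nn.Sequential(
                nn.Conv2d(ch, growth, 3, padding=1), nn.ReLU(inplace=True)))
            ch += growth
        self.out_ch = ch

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(conv(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)

class Encoder(nn.Module):
    """One convolutional layer followed by a dense block, as in the abstract."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True))
        self.dense = DenseBlock(16)

    def forward(self, x):
        return self.dense(self.conv(x))

def plip_add(a, b, gamma=1.0):
    """PLIP-style addition a ⊕ b = a + b - a*b/gamma (assumed parameterization)."""
    return a + b - a * b / gamma

class FusionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = Encoder()           # shared encoder for both modalities
        feat_ch = self.encoder.dense.out_ch
        self.decoder = nn.Sequential(      # decoding network reconstructs the fused image
            nn.Conv2d(feat_ch, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, visible, thermal):
        fused_feats = plip_add(self.encoder(visible), self.encoder(thermal))
        return self.decoder(fused_feats)

# Usage: single-channel visible and thermal frames of the same size.
net = FusionNet()
vis = torch.rand(1, 1, 256, 256)
thm = torch.rand(1, 1, 256, 256)
fused = net(vis, thm)                      # -> (1, 1, 256, 256) fused image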
© (2022) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
V. Voronin, M. Zhdanova, N. Gapon, A. Alepko, A. Zelensky, and E. Semenishchev "Deep visible and thermal image fusion for enhancement visibility for surveillance application", Proc. SPIE 12271, Electro-optical and Infrared Systems: Technology and Applications XIX, 122710P (2 November 2022); https://doi.org/10.1117/12.2641857
KEYWORDS: Image fusion, Image processing, Infrared imaging, Image enhancement, Surveillance, Thermal modeling