Infrared technology plays a crucial role in various fields. However, improper focusing introduces defocus blur into infrared images, noticeably degrading image quality. In recent years, deep learning-based methods have achieved remarkable success in image restoration, yet the spatial variability of defocus blur, combined with the low resolution and lack of textural detail in infrared images, still presents significant challenges for deblurring. In this paper, we propose a CNN-based network that employs dynamic large separable kernel convolutions to adapt to real-world defocus blur patterns and effectively extract blur features. Furthermore, we introduce an encoder-decoder feature fusion module that incorporates edge, spatial, and channel attention to enhance the network's focus on edges while selectively processing relevant information, thereby improving deblurring performance. Experimental results demonstrate that our method outperforms recent advanced methods in handling defocus blur in infrared images, and ablation studies confirm the effectiveness of the proposed modules.
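The abstract does not spell out the block design; as a rough illustration of what a dynamic large separable kernel convolution could look like, the PyTorch sketch below decomposes several large depthwise kernels into strip convolutions and mixes the branches with input-dependent weights. The module name, kernel sizes, and gating scheme are all assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class DynamicLargeSeparableConv(nn.Module):
    """Hypothetical sketch: factor each large k x k depthwise kernel into
    k x 1 and 1 x k strip convolutions, then weight several kernel sizes
    with input-dependent gates (a common 'dynamic convolution' pattern)."""
    def __init__(self, channels, kernel_sizes=(7, 15, 23)):
        super().__init__()
        self.branches = nn.ModuleList()
        for k in kernel_sizes:
            self.branches.append(nn.Sequential(
                # vertical then horizontal depthwise strip convolutions
                nn.Conv2d(channels, channels, (k, 1), padding=(k // 2, 0), groups=channels),
                nn.Conv2d(channels, channels, (1, k), padding=(0, k // 2), groups=channels),
            ))
        # input-dependent branch weights from globally pooled features
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, len(kernel_sizes), 1),
            nn.Softmax(dim=1),
        )
        self.fuse = nn.Conv2d(channels, channels, 1)  # pointwise channel mixing

    def forward(self, x):
        w = self.gate(x)  # (B, num_branches, 1, 1)
        out = 0
        for i, branch in enumerate(self.branches):
            out = out + w[:, i:i + 1] * branch(x)
        return self.fuse(out)
```

The strip decomposition keeps the receptive field of a large kernel while reducing its cost from O(k²) to O(k) per channel, which is why it is a common choice for spatially varying blur.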
Grayscale mapping of infrared images is an important research direction in infrared image visualization. The mapping method directly determines key visualization qualities of the original infrared image, such as detail preservation and overall perception, and can be considered the foundation of detail enhancement. Although current mainstream grayscale mapping methods achieve good results, there is still room for improvement in preserving image detail and enhancing image contrast. In this paper, we propose a grayscale mapping method for infrared images based on generative adversarial networks. First, our discriminator adopts a global-local structure, which lets the network account for both global and local losses and effectively improves image quality in local regions of the mapped image. Second, we introduce a perceptual loss into the loss function, which encourages the generated image to stay as close as possible to the target image in feature space. We conducted subjective and objective evaluations of the mapping results of our method and eight mainstream methods; the results show that our method is superior in preserving image detail and enhancing image contrast. A further comparison with a parameter-free tone mapping operator using a generative adversarial network (TMO-Net) indicates that our method avoids problems such as target-edge blur and artifacts, yielding mapped images of higher visual quality.
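The abstract leaves the perceptual loss unspecified; a minimal sketch, assuming the common VGG-16 feature-matching formulation, is given below. The layer choice, the L1 distance, and the single-channel handling are all assumptions, not details from the paper.

```python
import torch.nn as nn
from torchvision.models import vgg16

class PerceptualLoss(nn.Module):
    """Hypothetical sketch of a VGG-based perceptual loss: compare mapped
    and target images in the feature space of a frozen VGG-16 trunk."""
    def __init__(self, layer_index=16):  # up to relu3_3 in torchvision's VGG-16
        super().__init__()
        features = vgg16(weights="IMAGENET1K_V1").features[:layer_index]
        for p in features.parameters():
            p.requires_grad = False  # the feature extractor is not trained
        self.features = features.eval()
        self.criterion = nn.L1Loss()

    def forward(self, mapped, target):
        # single-channel infrared images are replicated to three channels
        if mapped.shape[1] == 1:
            mapped = mapped.repeat(1, 3, 1, 1)
            target = target.repeat(1, 3, 1, 1)
        return self.criterion(self.features(mapped), self.features(target))
```

In a GAN training loop this term would typically be added to the adversarial loss with a weighting coefficient tuned on a validation set.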
Leakage of volatile organic compound (VOC) gas is one of the main sources of air pollution and poses a serious threat to health and safety. The optical gas imaging (OGI) technique uses a mid-wave infrared camera to visualize VOC gas and helps people observe leaks. In this paper, we propose a novel method that uses deep learning and convolutional neural networks to detect VOC gas leakage from a single-frame mid-wave infrared image. The proposed method consists of three components: a color-transformation pre-processing unit, feature extraction networks, and single-stage object detection sub-networks. Location-aware deformable convolution, which adjusts its sampling grid to fit the ever-changing shape of VOC gas plumes, is employed for better feature extraction. In addition, a new loss function called the leakage center loss is introduced to estimate where the leakage originates; it forces the network to pay more attention to the leakage center, where the density of VOC gas is higher than in the dissipated parts. The proposed method is evaluated on a self-collected dataset in which thousands of gas images were captured and annotated. Experimental results show that location-aware deformable convolution contributes around 7% mAP improvement, while the leakage center loss contributes around 4% mAP improvement. Overall, our method achieves 81% mAP, outperforming existing general-purpose object detection methods. Thanks to its simplified network architecture, the proposed method can also be implemented on embedded systems for handheld VOC leakage detection devices.
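The exact form of the leakage center loss is not given in the abstract; as one plausible reading, the sketch below supervises a predicted heatmap against Gaussians placed at annotated leak centers, so the penalty concentrates on the dense core of the plume. The function name, Gaussian target, and binary cross-entropy objective are all assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def leakage_center_loss(pred_heatmap, centers, sigma=4.0):
    """Hypothetical sketch of a 'leakage center' objective.

    pred_heatmap: (B, 1, H, W) logits predicting where the leak originates.
    centers: one annotated (x, y) leak-center coordinate per image.
    """
    b, _, h, w = pred_heatmap.shape
    ys = torch.arange(h, device=pred_heatmap.device).view(h, 1)
    xs = torch.arange(w, device=pred_heatmap.device).view(1, w)
    target = torch.zeros(b, 1, h, w, device=pred_heatmap.device)
    for i, (cx, cy) in enumerate(centers):
        # soft Gaussian target peaked at the annotated leakage center
        target[i, 0] = torch.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
    return F.binary_cross_entropy_with_logits(pred_heatmap, target)
```

Such a term would be added to the detection losses with its own weight, biasing the shared features toward the high-density region near the leak source.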