Point clutter and strip-like wave patterns are common in infrared images of the sea surface and cause severe interference in ship target detection. In most cases, noise in an infrared image can be suppressed by image denoising; however, sea clutter differs from typical infrared image noise and cannot be removed by traditional denoising methods. We study the background characteristics of infrared sea clutter and propose a sea clutter suppression method based on gradient filtering. Our method preserves the details of the ship target while smoothing the clutter background, and can serve as an effective preprocessing step for infrared ship target detection. Experimental results show that our method outperforms four comparison methods in both sea clutter suppression and quantitative metrics. It effectively suppresses the infrared clutter background while preserving the structural characteristics and details of the ship target, which greatly enhances the separability of the target. In future work, the method's runtime performance needs to be improved and its algorithmic complexity reduced.
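The abstract does not specify the exact gradient filter, so the following is only a minimal sketch of the general idea: smooth low-gradient clutter regions while leaving strong target edges largely untouched. The window size, the weighting rule, and the constant `k` are all illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def gradient_filter(img, smooth_size=5, k=0.1):
    """Gradient-controlled smoothing sketch: pixels with weak gradients
    (clutter background) are pulled toward a local mean, while pixels on
    strong gradients (target edges) keep their original values."""
    img = img.astype(np.float64)
    # central-difference gradient magnitude
    gy, gx = np.gradient(img)
    g = np.hypot(gx, gy)
    # local mean over a smooth_size x smooth_size window (edge-padded)
    p = smooth_size // 2
    padded = np.pad(img, p, mode='edge')
    mean = np.zeros_like(img)
    for dy in range(smooth_size):
        for dx in range(smooth_size):
            mean += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    mean /= smooth_size ** 2
    # weight near 1 keeps the original pixel where the gradient is strong
    w = g / (g + k * (g.max() + 1e-12))
    return w * img + (1 - w) * mean
```

On a flat region the weight collapses to zero and the output equals the local mean, which is what suppresses the clutter background.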
The original bit width of images from a cooled thermal imaging camera is 14 bits. Compared with 8-bit data, 14-bit data offers a wider grayscale range, higher accuracy, and more image detail, but a typical display can only render an 8-bit grayscale range, so the 14-bit raw image data must be compressed effectively. In this paper, we propose an image compression and display algorithm based on guided image filtering (GIF). First, the image is decomposed by guided image filtering into a base layer and a detail layer. Adaptive platform histogram equalization (APHE) is then applied to the base layer to improve image contrast, while adaptive detail enhancement is applied to the detail layer to enhance details and reduce noise. Finally, the separately processed layers are linearly fused to achieve dynamic range compression and detail enhancement of high-bit-depth images. Simulations and comparison experiments against commonly used algorithms, evaluating both visual quality and quantitative metrics, show that the proposed algorithm achieves significant improvements in performance and effectiveness.
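The layering step of this pipeline can be sketched with a self-guided filter (in the sense of He et al.'s guided image filter). Note the simplifications: a plain linear stretch stands in for APHE, a constant gain stands in for adaptive detail enhancement, and the radius `r`, regularizer `eps`, and `gain` are illustrative values, not the paper's.

```python
import numpy as np

def box_mean(x, r):
    """Mean over a (2r+1)x(2r+1) window, edges padded by replication."""
    k = 2 * r + 1
    p = np.pad(x, r, mode='edge')
    out = np.zeros_like(x, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + x.shape[0], dx:dx + x.shape[1]]
    return out / (k * k)

def guided_filter_layers(img, r=8, eps=1e4):
    """Split an image into a base layer (edge-preserving smooth) and a
    detail layer using a self-guided filter. eps should scale with the
    squared intensity range (here assumed ~14-bit data)."""
    I = img.astype(np.float64)
    mean_I = box_mean(I, r)
    var_I = box_mean(I * I, r) - mean_I ** 2
    a = var_I / (var_I + eps)
    b = (1 - a) * mean_I
    base = box_mean(a, r) * I + box_mean(b, r)
    return base, I - base

def compress_display(img14, r=8, eps=1e4, gain=2.0):
    """Sketch of the pipeline: compress the base layer to 8 bits
    (linear stretch in place of APHE), amplify the detail layer,
    then linearly fuse the two."""
    base, detail = guided_filter_layers(img14, r, eps)
    lo, hi = base.min(), base.max()
    base8 = (base - lo) / (hi - lo + 1e-12) * 255.0
    return np.clip(base8 + gain * detail, 0, 255).astype(np.uint8)
```

By construction the two layers sum exactly back to the input, so all tone-mapping decisions happen in the fusion step.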
To address the problems of large recognition error and low detection accuracy for small targets in object detection, this paper proposes a multi-target detection algorithm based on deep learning, built on YOLOv3. First, the Darknet backbone is modified by adding a set of residual blocks at the end of the network to obtain feature maps at four scales. Second, IoU is replaced with DIoU to increase network robustness. Finally, the K-means++ algorithm is used to generate 12 anchor boxes from the training set. The results show that with a 576×576 input on an Nvidia 980Ti graphics card, the algorithm reaches 80.1% mean average precision on the VOC 2007 dataset at 52.64 fps, improving multi-target detection accuracy while maintaining real-time performance.
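The IoU-to-DIoU swap mentioned above is a small, well-defined change: DIoU penalizes the normalized distance between box centers in addition to the overlap. A minimal standalone computation (box format `(x1, y1, x2, y2)` assumed):

```python
def diou(box1, box2):
    """Distance-IoU between two axis-aligned boxes (x1, y1, x2, y2):
    DIoU = IoU - d^2 / c^2, where d is the distance between the box
    centres and c is the diagonal of the smallest enclosing box."""
    # intersection area
    x1 = max(box1[0], box2[0]); y1 = max(box1[1], box2[1])
    x2 = min(box1[2], box2[2]); y2 = min(box1[3], box2[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    a1 = (box1[2] - box1[0]) * (box1[3] - box1[1])
    a2 = (box2[2] - box2[0]) * (box2[3] - box2[1])
    iou = inter / (a1 + a2 - inter + 1e-12)
    # squared distance between centres
    cx1, cy1 = (box1[0] + box1[2]) / 2, (box1[1] + box1[3]) / 2
    cx2, cy2 = (box2[0] + box2[2]) / 2, (box2[1] + box2[3]) / 2
    d2 = (cx1 - cx2) ** 2 + (cy1 - cy2) ** 2
    # squared diagonal of the enclosing box
    ex1 = min(box1[0], box2[0]); ey1 = min(box1[1], box2[1])
    ex2 = max(box1[2], box2[2]); ey2 = max(box1[3], box2[3])
    c2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2 + 1e-12
    return iou - d2 / c2
```

Unlike plain IoU, DIoU stays informative (negative) for non-overlapping boxes, which is why it improves regression robustness when used as a loss term, 1 − DIoU.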
The redundancy of an overcomplete dictionary can capture the structural features of an image effectively, achieving an effective representation of the image. However, the commonly used atomic sparse representation ignores the structure of the dictionary and produces unrelated non-zero terms during computation, while structured sparse representation, although it considers the structure of the dictionary, may leave most coefficients of the blocks non-zero, which can reduce recognition efficiency. To address the disadvantages of these two sparse representations, a weighted parallel combination of atomic sparse and structured sparse representation is proposed, and recognition efficiency is improved by adaptive computation of the optimal weights. The atomic sparse representation and the structured sparse representation are computed separately, and the optimal weights are calculated by the adaptive method.
The method is as follows: a small portion of the identification samples is used for training, and the recognition rate is computed as the weights are varied with a fixed step size under a constraint between them. With the recognition rate as the Z axis and the two weight values as the X and Y axes, the resulting points trace a curve in the three-dimensional coordinate system; by locating the highest recognition rate on it, the optimal weights are obtained. Simulation experiments show that the adaptively computed optimal weights yield a better recognition rate; because the weights are obtained adaptively from only a few samples and the scheme is suited to parallel recognition computation, it can effectively improve the recognition rate for infrared images.
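The weight search described above can be sketched as a one-dimensional grid search, assuming (the abstract does not state it) that the constraint between the two weights is `w_atomic + w_struct = 1` and that each branch produces per-class similarity scores:

```python
import numpy as np

def search_optimal_weights(score_atomic, score_struct, labels, step=0.05):
    """Grid-search the fusion weight between the atomic-sparse and
    structured-sparse classifier scores on a small training subset.
    score_*: (n_samples, n_classes) similarity scores (hypothetical
    interface); labels: true class indices."""
    best_w, best_rate = 0.0, -1.0
    for w in np.arange(0.0, 1.0 + 1e-9, step):
        fused = w * score_atomic + (1 - w) * score_struct
        rate = np.mean(np.argmax(fused, axis=1) == labels)
        if rate > best_rate:                 # keep the first best point
            best_w, best_rate = w, rate
    return best_w, 1 - best_w, best_rate
```

Each grid point is independent of the others, which is what makes the computation naturally parallel.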
KEYWORDS: X-ray imaging, Imaging systems, X-rays, Digital imaging, Digital x-ray imaging, Image processing, X-ray technology, CCD image sensors, Digital signal processing, Analog electronics
According to the main characteristics of X-ray imaging, an X-ray display card was designed and debugged using the basic principle of correlated double sampling (CDS) combined with embedded computer technology. The CCD sensor drive circuit and its corresponding firmware were designed, along with the filtering and sample-and-hold circuits, and data exchange over the PC104 bus was implemented. A complex programmable logic device (CPLD) provides the gating and timing logic, completing the functions of counting, reading CPU control instructions, coordinating exposure, and controlling the sample-and-hold stage. Based on image quality and noise analysis, the circuit components were adjusted, and high-quality images were obtained.
The central problem of image compression is reducing the image data to a minimum while keeping the reconstructed image satisfactory. The characteristics of infrared images are analyzed. Based on an analysis of infrared image wavelet coefficients, a vector quantization algorithm built on the wavelet transform, together with an improved version, is proposed. Both algorithms are implemented in software, and the experimental results are analyzed and compared, showing that the proposed algorithms are feasible for infrared image compression.
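The two building blocks of such a scheme, a wavelet decomposition and a vector-quantization codebook, can be sketched as follows. This is a generic illustration, not the paper's algorithm: a one-level Haar transform stands in for the wavelet stage, and plain Lloyd/k-means stands in for the VQ design.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar transform: returns LL and (LH, HL, HH)."""
    a = img.astype(float)
    h = (a[:, ::2] + a[:, 1::2]) / 2   # column lowpass
    g = (a[:, ::2] - a[:, 1::2]) / 2   # column highpass
    LL = (h[::2] + h[1::2]) / 2; LH = (h[::2] - h[1::2]) / 2
    HL = (g[::2] + g[1::2]) / 2; HH = (g[::2] - g[1::2]) / 2
    return LL, (LH, HL, HH)

def lloyd_vq(vectors, k=4, iters=20, seed=0):
    """Plain Lloyd/k-means vector quantization: learn a k-entry codebook
    and return (codebook, indices). The paper would apply this to blocks
    of wavelet subband coefficients."""
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), k, replace=False)].astype(float)
    for _ in range(iters):
        d = np.linalg.norm(vectors[:, None, :] - codebook[None], axis=2)
        idx = d.argmin(axis=1)
        for j in range(k):
            sel = vectors[idx == j]
            if len(sel):
                codebook[j] = sel.mean(axis=0)
    return codebook, idx
```

The compressed representation is the codebook plus the per-block indices; reconstruction is simply `codebook[idx]`.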
Low light level (LLL) image communication has received increasing attention in the night vision field as image communication grows in importance, and LLL image compression is the key to wireless transmission of LLL images. An LLL image differs from a common visible-light image and has its own special characteristics. For still image compression, we propose in this paper a wavelet-based compression algorithm suited to LLL images. Because the information in an LLL image is significant, near-lossless compression is required. The LLL image is compressed with an improved EZW (Embedded Zerotree Wavelet) algorithm: the lowest-frequency subband is encoded with DPCM (Differential Pulse Code Modulation) so that all information in that subband is kept. Considering the characteristics of both the HVS (Human Visual System) and LLL images, we first detect edge contours in the high-frequency subband images using a template, and then encode the high-frequency subband data with the EZW algorithm. Two guiding matrices are used to avoid redundant scanning and duplicate encoding of significant wavelet coefficients. Experimental results show that the decoded image quality is good and the encoding time is shorter than that of the original EZW algorithm.
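The DPCM step used for the lowest-frequency subband is the simple, exactly invertible part of the scheme: store the first sample and then only successive differences. A minimal 1-D sketch (the paper applies this to the 2-D subband, e.g. in raster order):

```python
def dpcm_encode(seq):
    """Lossless DPCM: keep the first sample, then store successive
    differences, which are typically small and cheap to entropy-code."""
    out = [seq[0]]
    for i in range(1, len(seq)):
        out.append(seq[i] - seq[i - 1])
    return out

def dpcm_decode(code):
    """Invert DPCM by cumulative summation."""
    seq = [code[0]]
    for d in code[1:]:
        seq.append(seq[-1] + d)
    return seq
```

Because the round trip is exact, no information in the lowest-frequency subband is lost, matching the near-lossless requirement stated above.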
In the field of low light level (LLL) night vision, improving LLL image quality has received increasing attention. In this paper, we use instantaneous laser (near-infrared) assistant vision technology to obtain a laser assistant vision (LAV) image and fuse the LLL and LAV images using the wavelet transform. The information content of the two kinds of images differs because they respond to different spectral bands. In the fusion process, the images are first decomposed by the wavelet transform; we then construct the multiresolution analysis of the fused image by considering the multiresolution contrast of the source images, so that image features of interest can be enhanced. Finally, the fused image is reconstructed by the inverse wavelet transform. Experimental results show that the fused image is better than either source image alone, and that fusing the LAV and LLL images improves the image quality of LLL TV. This technology is meaningful for night vision.
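The decompose-fuse-reconstruct loop above can be sketched with a one-level Haar transform. The fusion rule here, averaging the approximation band and taking the larger-magnitude detail coefficient, is a common generic choice standing in for the paper's contrast-based rule, which the abstract does not fully specify.

```python
import numpy as np

def haar2(a):
    """One-level 2-D Haar analysis: (LL, LH, HL, HH)."""
    a = a.astype(float)
    h = (a[:, ::2] + a[:, 1::2]) / 2; g = (a[:, ::2] - a[:, 1::2]) / 2
    return ((h[::2] + h[1::2]) / 2, (h[::2] - h[1::2]) / 2,
            (g[::2] + g[1::2]) / 2, (g[::2] - g[1::2]) / 2)

def ihaar2(LL, LH, HL, HH):
    """Exact inverse of haar2."""
    h = np.empty((LL.shape[0] * 2, LL.shape[1])); g = np.empty_like(h)
    h[::2], h[1::2] = LL + LH, LL - LH
    g[::2], g[1::2] = HL + HH, HL - HH
    a = np.empty((h.shape[0], h.shape[1] * 2))
    a[:, ::2], a[:, 1::2] = h + g, h - g
    return a

def fuse(img_a, img_b):
    """Wavelet fusion sketch: average the approximation coefficients,
    keep the larger-magnitude detail coefficient from either source."""
    ca, cb = haar2(img_a), haar2(img_b)
    LL = (ca[0] + cb[0]) / 2
    details = [np.where(np.abs(x) >= np.abs(y), x, y)
               for x, y in zip(ca[1:], cb[1:])]
    return ihaar2(LL, *details)
```

Selecting detail coefficients by magnitude is what lets edges visible in only one spectral band survive into the fused result.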
In the field of low light level (LLL) night vision, improving LLL image quality by means of infrared laser assistant vision technology has been proposed as an important subject. In this paper, we fuse the instantaneous laser assistant vision image and the LLL image in the frequency domain. The features of the two kinds of images differ because they respond to different spectral bands. In the frequency domain, we assign different thresholds to the high- and low-frequency components to carry out the fusion, enhancing the details and the contours of the scene respectively. Experimental results show that fusing the instantaneous laser assistant vision infrared image with the LLL image improves image quality effectively. The fused images and the source images are presented in this paper.
Multispectral image fusion is the process of synthesizing and processing multispectral image data. The wavelet transform is a multiresolution method that decomposes images into detail and average channels. In this paper, on the basis of optoelectronic imaging techniques, the imaging process and characteristics of a low light level (LLL) TV system are analyzed, and the principle and processing structure of multispectral LLL TV image fusion are discussed. An algorithm and an experimental study of LLL dual-spectrum image fusion in the wavelet coefficient space are carried out with an LLL CCD TV system and a computer. A suitable combination of an LLL camera and optical filters is used to conduct the dual-spectrum fusion experiment at night on a static outdoor scene. The experimental results indicate that multispectral LLL image fusion improves the capability for recognizing targets.
The electrically heated thin-film resistor array shows promise for providing dynamic IR scenes for hardware-in-the-loop simulation. Its radiant output is determined principally by the time-dependent temperature distribution on the emitting surface, and its performance is limited mainly by thermal constraints. In this paper, a numerical analysis of heat transfer in the thin-film resistor array is conducted using a finite difference technique. The investigation concentrates on the thermal crosstalk between neighboring elements and on the relation between the temperature distribution and the applied power.
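The kind of finite-difference analysis described above can be sketched as an explicit time-stepping of the 2-D heat equation with a source term. All material parameters, grid spacing, and boundary treatment here are illustrative choices for a toy crosstalk demonstration, not the paper's actual film properties (note the explicit scheme requires `alpha * dt <= 0.25` for unit grid spacing).

```python
import numpy as np

def heat_step(T, power, dt=1e-4, alpha=1.0, cap=1.0):
    """One explicit finite-difference step of the 2-D heat equation with
    a source: dT/dt = alpha * laplacian(T) + power / cap.
    The boundary ring is left untouched (fixed-temperature edge)."""
    lap = np.zeros_like(T)
    lap[1:-1, 1:-1] = (T[2:, 1:-1] + T[:-2, 1:-1]
                       + T[1:-1, 2:] + T[1:-1, :-2]
                       - 4 * T[1:-1, 1:-1])
    return T + dt * (alpha * lap + power / cap)

def simulate(n=21, steps=2000):
    """Drive a single central 'resistor element' and watch the thermal
    crosstalk spread to its neighbours."""
    T = np.zeros((n, n))
    P = np.zeros((n, n)); P[n // 2, n // 2] = 1e3   # powered element
    for _ in range(steps):
        T = heat_step(T, P)
    return T
```

The temperature rise of the unpowered neighboring cells relative to the driven cell is precisely the crosstalk figure such an analysis quantifies.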