Existing target tracking methods are susceptible to interference from complex backgrounds and show poor robustness. The maturation of real-time polarization imaging technology has extended the measurable dimensions of a target from light intensity and spectrum to polarization state, which can enhance the detection of concealed, camouflaged, and special-material targets. Existing target tracking algorithms were used to verify the feasibility of UAV (Unmanned Aerial Vehicle) detection and tracking based on polarization imaging against three typical backgrounds (sky, building, and jungle). Experiments showed that against sky and building backgrounds, polarization images supported robust and fast UAV tracking: the tracking speed was about 2-3 times that of ordinary grayscale images, and the success rate roughly doubled in cases where ordinary grayscale images performed poorly. Against a complex jungle background, DoLP (Degree of Linear Polarization) images outperformed AoP (Angle of Polarization) images, but both were less robust than ordinary grayscale images.
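The DoLP and AoP quantities compared above are derived from the Stokes parameters of the captured light. A minimal sketch of the standard formulas, assuming intensities measured behind linear polarizers at 0, 45, 90, and 135 degrees (the exact acquisition scheme of the paper's camera is not stated here):

```python
import math

def stokes_dolp_aop(i0, i45, i90, i135):
    """Compute DoLP and AoP from intensities behind linear polarizers
    at 0, 45, 90, and 135 degrees (standard Stokes-parameter formulas)."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)  # total intensity
    s1 = i0 - i90                       # horizontal vs. vertical component
    s2 = i45 - i135                     # diagonal components
    dolp = math.sqrt(s1 * s1 + s2 * s2) / s0
    aop = 0.5 * math.atan2(s2, s1)      # radians
    return dolp, aop
```

Applied per pixel to the four polarizer-channel images, this yields the DoLP and AoP images used for tracking.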
In this paper, we design a deep-learning feature fusion module for multimodal imaging. The fused features of multi-dimensional images are used for object detection, which effectively mitigates interference from complex environments. The feature fusion module consists of a convolution layer and an activation function and establishes connections between the different images; its fusion rules are obtained through supervised learning. Compared with a traditional target detection structure, it extracts more detailed information from the several source images. Feature maps extracted from each image are combined by the fusion module into a new feature map, which is better suited to generating object masks and bounding boxes. We capture a series of multi-dimensional images with a flexible multimodal camera: when shooting, multi-dimensional information is recorded simultaneously in a single image, and decoding yields multiple images of different types, including polarization and spectral images. These images record the multi-dimensional optical characteristics of the object and background. Compared with the traditional single-input color or monochrome image method, the proposed method achieves an average precision of 0.25 and an F1-score of 0.75, reaching higher detection accuracy across various natural backgrounds.
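The fusion step described above (a convolution layer plus an activation function combining per-image feature maps) can be sketched as a learned 1x1 convolution, i.e. a per-pixel weighted sum of the channels followed by ReLU. The weights and bias below are placeholders; in the paper they are learned by supervised training, and the exact layer configuration is an assumption:

```python
import numpy as np

def fuse_features(feature_maps, weights, bias=0.0):
    """Fuse C feature maps of shape (H, W) into one map using a 1x1
    convolution (a per-pixel weighted sum over channels) followed by a
    ReLU activation. weights is a length-C sequence."""
    stacked = np.stack(feature_maps)                       # (C, H, W)
    fused = np.tensordot(weights, stacked, axes=1) + bias  # (H, W)
    return np.maximum(fused, 0.0)                          # ReLU
```

With feature maps extracted from, say, a DoLP image and a spectral channel, the fused map downstream feeds the mask and bounding-box heads.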
Lucky imaging is widely applied in astronomical imaging systems because of its low cost and good performance. However, the probability of capturing an excellent lucky image is low, especially for a large-aperture telescope. We therefore propose an adaptive image partition method that extracts the lucky parts of each image, increasing the probability of obtaining a lucky image. The system comprises a telescope and three cameras running synchronously at the image plane, the front defocus plane, and the back defocus plane; the two defocus cameras have the same defocus distance. Our algorithm selects each part of the space-object picture that is little affected by atmospheric turbulence, based on the difference between the pictures obtained by the front and back defocus cameras. Image stitching is then used to obtain the entire sharp picture.
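The per-part selection described above can be sketched as a tile-wise search over a frame sequence. Here, the quality metric (mean absolute front/back defocus difference, smaller meaning weaker local turbulence), the tile size, and the stitching-by-tile-copy are all illustrative assumptions, not the paper's exact rule:

```python
import numpy as np

def lucky_tiles(front_seq, back_seq, focal_seq, tile=32):
    """For each tile position, pick the frame whose front/back defocus
    images differ least over that tile (assumed proxy for weak local
    turbulence), and copy that tile from the focal-plane image."""
    h, w = focal_seq[0].shape
    out = np.empty((h, w))
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            sl = (slice(y, min(y + tile, h)), slice(x, min(x + tile, w)))
            # Score every frame's local defocus mismatch for this tile.
            scores = [np.mean(np.abs(f[sl] - b[sl]))
                      for f, b in zip(front_seq, back_seq)]
            best = int(np.argmin(scores))
            out[sl] = focal_seq[best][sl]  # stitch the luckiest tile
    return out
```

In practice the stitching step would also blend tile boundaries; this sketch only shows the selection logic.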