Measuring the pose of non-cooperative targets in space is a critical supporting technology for removing space debris and recovering objects on orbit. However, most existing methods have been validated only in simulations under good lighting and tend to perform poorly in dark environments. We propose a target pose measurement method based on binocular vision that is suited to dark lighting environments. First, the traditional features-from-accelerated-segment-test (FAST) algorithm is improved to reduce the influence of illumination on feature point extraction under various poses. Point features and line features are combined so that image features can be extracted more reliably in dark environments while retaining the high accuracy of point-feature-based pose measurement. Second, normalized cross-correlation (NCC) matching is combined with the epipolar constraint to narrow the search for matching points from the two-dimensional image plane to the epipolar line, which substantially improves both the efficiency and the accuracy of matching. Finally, post-processing of the feature matches is performed to reduce the probability of mismatches. Simulation and physical experiments show that our method stably extracts features and obtains high-precision target pose information in both well-illuminated and dark environments, making it suitable for high-precision pose measurement under insufficient illumination.
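To illustrate the matching step, the sketch below restricts NCC template matching to the epipolar line, assuming a rectified stereo pair (so the epipolar line of a left-image point is the corresponding image row on the right). This is a minimal illustration, not the paper's implementation; the function name, window size, and threshold are assumptions.

```python
import cv2
import numpy as np

def match_along_epipolar(left, right, pt, win=11, ncc_thresh=0.8):
    """Find the right-image match of left-image point `pt` by NCC search
    along the epipolar line (same row, for rectified grayscale images)."""
    half = win // 2
    x, y = int(pt[0]), int(pt[1])
    # Template centered on the feature point in the left image.
    tpl = left[y - half:y + half + 1, x - half:x + half + 1]
    # Search strip: the epipolar line plus the window margin, instead of
    # the full two-dimensional image plane.
    strip = right[y - half:y + half + 1, :]
    # Normalized cross-correlation scores over the strip only.
    scores = cv2.matchTemplate(strip, tpl, cv2.TM_CCOEFF_NORMED)
    best = int(np.argmax(scores))
    if scores[0, best] < ncc_thresh:
        return None  # reject weak correlations to reduce mismatches
    return (best + half, y)  # matched point in the right image
```

Reducing the search from a 2-D region to a 1-D strip is what yields the efficiency gain the abstract describes; the threshold check plays the role of the mismatch-reducing post-processing.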
To improve the accuracy and robustness of small-target detection with the SSD (Single Shot MultiBox Detector) algorithm, this paper proposes an improved SSD algorithm. First, the Mish activation function is introduced into the backbone network to provide a smoother nonlinear activation. Second, a copy-reduce-paste data augmentation method is proposed to balance the training samples across object scales. In the multi-scale detection stage, a feature enhancement module and a feature fusion strategy guided by a channel attention mechanism are used to enrich the feature information. In addition, the loss function is modified to improve training. Experiments on the VOC2007+2012 dataset show that the detection performance of the proposed algorithm is better than that of the original SSD.
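The Mish activation is defined as Mish(x) = x · tanh(softplus(x)). A minimal PyTorch sketch of dropping it into a backbone convolution block follows; the block layout is illustrative, not the paper's exact network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Mish(nn.Module):
    def forward(self, x):
        # Smooth, non-monotonic activation used in place of ReLU.
        return x * torch.tanh(F.softplus(x))

# Example: one backbone conv block with Mish instead of ReLU.
block = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1),
    nn.BatchNorm2d(64),
    Mish(),
)
out = block(torch.randn(1, 3, 300, 300))  # SSD's standard 300x300 input
```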
Convolutional neural networks are widely used in image fusion. However, in most existing methods the deep learning framework is applied to only part of the fusion process. To build a fully end-to-end image fusion pipeline, a Y-shaped generator model based on a generative adversarial network (GAN) is proposed for infrared and visible image fusion. The idea of this method is to establish an adversarial game between the generator and the discriminator. The generator, consisting of two pyramid networks and three convolutional layers, works as an autoencoder to enhance the feature information of the fused images. The discriminator adopts a network structure similar to the Visual Geometry Group (VGG) network. The loss function uses a ratio loss to control the trade-off between the generation loss and the reconstruction loss. Results on publicly available datasets demonstrate that our method improves the quality of detail information and sharpens the edges of infrared targets.
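As a rough sketch of the adversarial game, the PyTorch snippet below pairs a toy two-branch generator with a small discriminator and combines an adversarial term with a reconstruction term. The networks, the L1 reconstruction term, and the weight `lam` are all placeholder assumptions; the paper's actual Y-shaped pyramid architecture and ratio loss are not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGenerator(nn.Module):
    """Stand-in for the Y-shaped generator: two encoder branches, one decoder."""
    def __init__(self):
        super().__init__()
        self.enc_ir = nn.Conv2d(1, 16, 3, padding=1)   # infrared branch
        self.enc_vis = nn.Conv2d(1, 16, 3, padding=1)  # visible branch
        self.dec = nn.Conv2d(32, 1, 3, padding=1)      # fused output

    def forward(self, ir, vis):
        feats = torch.cat([self.enc_ir(ir), self.enc_vis(vis)], dim=1)
        return torch.sigmoid(self.dec(feats))

gen = TinyGenerator()
disc = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                     nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))

ir, vis = torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)
fused = gen(ir, vis)

# Generation (adversarial) loss: the generator tries to make the
# discriminator score fused images as real.
gen_loss = F.binary_cross_entropy_with_logits(disc(fused), torch.ones(4, 1))
# Reconstruction loss: keep the fused image close to both source images.
rec_loss = F.l1_loss(fused, ir) + F.l1_loss(fused, vis)
# Weighted trade-off between the two terms (lam is an assumed hyperparameter
# standing in for the paper's ratio loss).
lam = 0.5
total = lam * gen_loss + (1 - lam) * rec_loss
total.backward()
```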