In this paper, we design a deep-learning-based feature fusion module for multi-modal imaging. The fused features of multi-dimensional images are used for object detection, which effectively avoids interference caused by complex environments. The feature fusion module consists of a convolution layer and an activation function, and it establishes connections between the different images. The fusion rules are obtained through supervised learning. Compared with traditional object detection architectures, it extracts more detailed information from the several source images. The feature maps extracted from each image are fused by the module into a new feature map, which is better suited to generating object masks and bounding boxes. We capture a series of multi-dimensional images with a flexible multi-modal camera: during shooting, multi-dimensional information is recorded simultaneously in a single image, and decoding then yields multiple images of different types, including polarization and spectral images. These images record the multi-dimensional optical characteristics of the object and the background. Compared with the traditional single-input color or monochrome image method, the proposed method achieves an average precision of 0.25 and an F1-score of 0.75, reaching higher detection accuracy across various natural backgrounds.
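A minimal sketch of such a fusion module, assuming PyTorch; the two-branch setup, channel counts, 1x1 kernel, and ReLU are illustrative choices, not the authors' exact configuration:

```python
import torch
import torch.nn as nn

class FeatureFusion(nn.Module):
    """Fuse per-modality feature maps with a learned convolution.

    Sketch: concatenate the feature maps extracted from each source
    image along the channel axis, then mix them with a convolution
    followed by an activation, so the fusion rule is learned under
    supervision rather than hand-crafted.
    """

    def __init__(self, in_channels_per_branch, num_branches, out_channels):
        super().__init__()
        self.conv = nn.Conv2d(in_channels_per_branch * num_branches,
                              out_channels, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, feature_maps):
        # feature_maps: list of tensors, one per modality,
        # each of shape (N, C, H, W)
        fused = torch.cat(feature_maps, dim=1)
        return self.act(self.conv(fused))

# Example: fuse hypothetical polarization and spectral feature maps.
fusion = FeatureFusion(in_channels_per_branch=64, num_branches=2,
                       out_channels=64)
pol_feat = torch.randn(1, 64, 56, 56)
spec_feat = torch.randn(1, 64, 56, 56)
fused_map = fusion([pol_feat, spec_feat])  # shape (1, 64, 56, 56)
```

The fused map can then be passed to a standard detection head in place of the single-image feature map.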
Mode decomposition (MD) is essential for revealing the intrinsic mode properties of fiber beams. However, traditional numerical MD approaches are relatively time-consuming and sensitive to initial values. To solve these problems, a deep learning technique is introduced to perform non-iterative MD. In this paper, we focus on the real-time MD capability of a pre-trained convolutional neural network. Numerical simulation indicates that the average correlation between the reconstructed and measured patterns is 0.9987 and that the decomposition rate can reach about 125 Hz. In the experimental case, the average correlation is 0.9719 and the decomposition rate is 29.9 Hz, limited by the maximum frame rate of the CCD camera. The results of both simulation and experiment demonstrate the excellent real-time capability of deep-learning-based MD methods.
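One common way to score MD quality is the 2D Pearson correlation between the measured and reconstructed intensity patterns; a sketch follows, assuming NumPy. This definition is an assumption for illustration, and the paper may normalize its correlation metric differently:

```python
import numpy as np

def pattern_correlation(i_meas, i_recon):
    """2D Pearson correlation between two intensity patterns.

    Subtract each pattern's mean, then normalize the inner product;
    identical patterns give 1.0. (Assumed metric, for illustration.)
    """
    a = i_meas - i_meas.mean()
    b = i_recon - i_recon.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

# Usage with placeholder data:
rng = np.random.default_rng(1)
pattern = rng.random((128, 128))
print(pattern_correlation(pattern, pattern))  # 1.0 for identical inputs
```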
We introduce a deep learning technique to perform robust mode decomposition (MD) for few-mode optical fiber. Our goal is to learn a robust, fast, and accurate mapping from near-field beam profiles to the complete mode coefficients, including both the modal amplitudes and phases. Considering a few-mode fiber that supports three linearly polarized (LP) modes, simulated near-field beam profiles with known mode-coefficient labels are generated and fed into a convolutional neural network (CNN) for training. Furthermore, saturated patterns are added to the training samples to increase robustness. Once the network converges, both ordinary and saturated beam patterns are used to perform MD with the pre-trained CNN. The average correlation between the input and reconstructed patterns reaches as high as 0.9994 and 0.9959 for the two cases, respectively. MD for one beam pattern takes about 10 ms. The results show that deep learning strongly favors accurate, robust, and fast MD for few-mode fiber.
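A minimal sketch of how labelled training samples of this kind can be generated, assuming NumPy: superpose the three supported modes with random amplitudes and phases and record the resulting intensity pattern. The Gaussian-based mode profiles below are illustrative stand-ins, not the true LP modes of a step-index fiber, which the paper would compute from the fiber parameters:

```python
import numpy as np

x = np.linspace(-1.5, 1.5, 128)
X, Y = np.meshgrid(x, x)
R2 = X**2 + Y**2

# Illustrative stand-ins for LP01 and the two LP11 orientations,
# normalized to unit power.
modes = np.stack([
    np.exp(-R2),       # LP01-like
    X * np.exp(-R2),   # LP11a-like
    Y * np.exp(-R2),   # LP11b-like
])
modes /= np.sqrt((np.abs(modes)**2).sum(axis=(1, 2), keepdims=True))

def beam_pattern(amplitudes, phases):
    """Intensity of the coherent superposition sum_j rho_j e^{i theta_j} psi_j."""
    coeffs = amplitudes * np.exp(1j * phases)
    field = np.tensordot(coeffs, modes, axes=1)
    return np.abs(field)**2

rng = np.random.default_rng(0)
rho = rng.random(3)
rho /= np.linalg.norm(rho)             # normalize total modal power to 1
theta = rng.uniform(-np.pi, np.pi, 3)
label = np.concatenate([rho, theta])   # training label: amplitudes + phases
sample = beam_pattern(rho, theta)      # training input: near-field profile
```

The CNN is then trained to invert this forward model, i.e., to recover `label` from `sample`.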