Recently, deep learning-based methods have been employed in optical measurement. Fringe-to-phase methods based on deep learning can achieve high-precision 3D topography measurement and are applied to various optical metrology tasks, including phase extraction, phase unwrapping, fringe order determination, depth estimation, and other crucial steps. However, recovering the quantities required by each of these metrological tasks from a single fringe pattern is not straightforward. This paper proposes a novel network that effectively extracts the semantic features of fringe patterns by incorporating the design of the transformer architecture while retaining the advantages of convolutional networks. The architecture primarily consists of a backbone, a decoder, and a feature extraction block, which enhances the features at different frequencies within a single fringe pattern. The backbone and decoder are specifically designed for wrapped phase prediction. Experimental results demonstrate that the network accurately predicts the wrapped phase from a single fringe pattern. In comparison with previous methods, this paper's approach offers two main contributions: the efficient use of a new type of encoder for extracting high-level semantic features from fringe patterns, and the fact that only a single grayscale image is required as network input, without relying on color composite images or additional prior information.
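A common formulation in deep-learning fringe analysis (assumed here for illustration; the abstract does not specify the network's output format) is to predict a numerator and denominator map and recover the wrapped phase through an arctangent. The following sketch shows only that final step, with sinusoids standing in for network outputs:

```python
import numpy as np

def wrapped_phase(M, D):
    """Wrapped phase in (-pi, pi] from numerator map M and denominator map D,
    the arctangent formulation commonly used in fringe-to-phase networks."""
    return np.arctan2(M, D)

# Synthetic example: a linear phase ramp wrapped into (-pi, pi].
x = np.linspace(0, 4 * np.pi, 256)
true_phase = np.angle(np.exp(1j * x))   # ideal wrapped phase
M, D = np.sin(x), np.cos(x)             # stand-ins for network-predicted maps
phase = wrapped_phase(M, D)
```

The network in the paper would supply M and D (or the phase directly); the arctangent step itself is standard across phase-shifting and learning-based pipelines.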
Dual-station cross localization is another promising technical route to cooperative localization in GNSS-denied environments, requiring only two anchor node UAVs instead of the at least three nodes needed by localization methods based on inter-node distance measurement. Traditional Angle-of-Arrival (AOA) estimation techniques cannot provide the accurate, highly available angle measurements needed for position calculation: optical measuring equipment offers precise angle measurements but is restricted by weather conditions, while array antennas can be used in all conditions but suffer from excessive measurement error. To acquire precise azimuth and pitch estimates, an AOA estimation method based on optical phase scanning is used in this work. At the remote antenna unit (RAU), one received microwave signal is applied to a phase modulator (PM), while the other received microwave signal, coupled with a low-frequency, large-voltage sawtooth-wave signal, is applied to a second PM; the sawtooth wave scans the phase of the optical sideband from 0 to 2π. After transmission through a segment of fiber link, the AOA is measured by processing the resulting low-frequency electrical signals at the central office (CO). Experimental results demonstrate that the AOA estimation error is less than 2.27° when the array-element spacing is half the microwave wavelength (d = λ/2 = 0.015 m at 10 GHz), and less than 0.017° when the spacing exceeds 2 m, as is feasible for medium- or large-sized UAVs.
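The dependence of AOA precision on element spacing follows from the standard phase-interferometry relation θ = arcsin(Δφ·λ/(2π·d)): for a fixed phase-measurement error, a larger baseline d yields a smaller angle error. A minimal sketch of that relation (the 1° phase error below is an assumed value, not a figure from the experiment; spacings beyond λ/2 also introduce ambiguity, which the paper's phase-scanning scheme must resolve):

```python
import numpy as np

C = 3e8            # speed of light, m/s
F = 10e9           # 10 GHz microwave carrier, as in the experiment
LAM = C / F        # wavelength = 0.03 m

def aoa_from_phase(dphi, d, lam=LAM):
    """AOA (radians) from inter-element phase difference dphi and spacing d,
    via the phase-interferometry relation theta = arcsin(dphi * lam / (2*pi*d))."""
    return np.arcsin(dphi * lam / (2 * np.pi * d))

# Same assumed 1-degree phase error mapped to angle error at two spacings:
dphi_err = np.deg2rad(1.0)
err_half_wave = aoa_from_phase(dphi_err, LAM / 2)   # d = lambda/2 = 0.015 m
err_2m = aoa_from_phase(dphi_err, 2.0)              # d = 2 m
```

The widening gap between the two errors is consistent with the reported improvement from 2.27° at d = λ/2 to 0.017° at d > 2 m.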
The experimental results are then applied to a dual-station cross localization simulation of a UAV swarm, and the distribution of localization precision is evaluated in different scenarios. The outcomes indicate that high-precision AOA estimates based on optical phase scanning benefit cooperative localization, and that the positions of the anchor node UAVs need to be properly adjusted to obtain more accurate cooperative localization results.
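In dual-station cross localization, each anchor UAV's azimuth/pitch measurement defines a 3D bearing line, and the target estimate is the point closest to both lines. A minimal sketch of that geometric step (anchor positions and the target below are illustrative, not values from the simulation):

```python
import numpy as np

def bearing(az, el):
    """Unit direction vector from azimuth az and elevation (pitch) el."""
    return np.array([np.cos(el) * np.cos(az),
                     np.cos(el) * np.sin(az),
                     np.sin(el)])

def cross_localize(p1, u1, p2, u2):
    """Least-squares intersection of lines p1 + t*u1 and p2 + s*u2:
    the midpoint of their common perpendicular."""
    A = np.array([[u1 @ u1, -(u1 @ u2)],
                  [u1 @ u2, -(u2 @ u2)]])
    b = np.array([(p2 - p1) @ u1, (p2 - p1) @ u2])
    t, s = np.linalg.solve(A, b)
    return 0.5 * ((p1 + t * u1) + (p2 + s * u2))

# Two anchors observing a target at (100, 50, 30) m with ideal angles.
target = np.array([100.0, 50.0, 30.0])
p1, p2 = np.array([0.0, 0.0, 0.0]), np.array([0.0, 200.0, 0.0])

def angles_to(p):
    d = target - p
    return np.arctan2(d[1], d[0]), np.arcsin(d[2] / np.linalg.norm(d))

u1, u2 = bearing(*angles_to(p1)), bearing(*angles_to(p2))
est = cross_localize(p1, u1, p2, u2)
```

Perturbing the angles by the AOA errors above would show how localization precision varies with anchor geometry, which is the effect the simulation evaluates.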
In fringe projection profilometry (FPP), no explicit mathematical expression has traditionally been available for designing sinusoidal fringe patterns for arbitrary objects. For this reason, we present an adaptive algorithm that generates optimal fringe patterns using an oriented bounding box (OBB) and a homography transform. First, the features of the objects, segmented with the deep learning network Mask R-CNN, are represented by the spindle orientation and length of the OBB. Second, adaptive fringe patterns in the camera's field of view are generated by fusing the OBB with the mathematical expression of conventional intensity fringe patterns. Finally, the fringe patterns in the camera's field of view are transformed into the projector's field of view by homography. Experiments have been carried out to validate the performance of the proposed method.
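A minimal sketch of the last two steps, assuming the standard sinusoidal fringe model: a pattern oriented along the OBB spindle angle theta is synthesized in camera coordinates, and each projector pixel samples it through a homography H. The identity H, the frequency f, and theta below are illustrative placeholders, not values from the paper:

```python
import numpy as np

def fringe(u, v, f, theta):
    """Sinusoidal fringe intensity at camera point (u, v):
    spatial frequency f along the OBB spindle direction theta."""
    return 0.5 + 0.5 * np.cos(
        2 * np.pi * f * (u * np.cos(theta) + v * np.sin(theta)))

def project_to_projector(up, vp, H, f, theta):
    """Sample the camera-plane fringe at the homography image of the
    projector pixel (up, vp), using homogeneous coordinates."""
    x, y, w = H @ np.array([up, vp, 1.0])
    return fringe(x / w, y / w, f, theta)

H = np.eye(3)                      # identity homography: the two views coincide
f, theta = 0.05, np.deg2rad(30)    # assumed fringe frequency and OBB angle
cam_val = fringe(10.0, 20.0, f, theta)
proj_val = project_to_projector(10.0, 20.0, H, f, theta)
```

In practice H would be calibrated between the camera and projector, and f and theta would be derived per object from the Mask R-CNN segmentation and OBB fit described above.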