The three-dimensional particle image velocimetry (3D PIV) technique, as a non-intrusive method for three-dimensional full-field velocity measurement, has been widely used across diverse domains including biomimetic dynamics, combustion diagnostics, and the structural design of aerospace equipment. Synthetic aperture particle image velocimetry (SAPIV), based on camera arrays, digitally merges images obtained from different perspectives to simulate the imaging effect of a large-aperture camera, enabling large-scale, high-resolution flow field measurements. In this study, the three-dimensional intensity characteristics of particles within sequences of refocused images are investigated. Leveraging the spatial distribution of grayscale information for individual focused particles, we designed a three-dimensional convolutional neural network (3D CNN) capable of extracting focused particle positions. During particle extraction, the 3D CNN analyzes the sequence of refocused images and derives both the positions and the grayscale information of focused particles from their distinctive characteristics. The tracer particle field in simulated experiments was reconstructed and the reconstruction quality was evaluated. The results demonstrate the high precision of the proposed method in reconstructing three-dimensional tracer particle information in SAPIV.
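To make the extraction step concrete, the following is a minimal sketch of how a 3D CNN could score a refocused image stack voxel by voxel for focused particles; the PyTorch layer sizes, the input stack shape, and the 0.5 threshold are illustrative assumptions, not the network described above.

```python
# Minimal sketch of a 3D CNN that scores voxels of a refocused image stack
# for "focused particle" likelihood. Layer sizes and the input stack shape
# are illustrative assumptions.
import torch
import torch.nn as nn

class FocusedParticleNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1),   # local gray-level pattern
            nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=3, padding=1),  # depth-wise focus cue
            nn.ReLU(),
            nn.Conv3d(16, 1, kernel_size=1),             # per-voxel focus score
        )

    def forward(self, stack):
        # stack: (batch, 1, depth, height, width) refocused image volume
        return torch.sigmoid(self.features(stack))

# Usage: threshold the score volume to extract candidate particle positions.
net = FocusedParticleNet()
volume = torch.rand(1, 1, 32, 64, 64)          # synthetic refocused stack
scores = net(volume)
particles = (scores > 0.5).nonzero()           # (b, c, z, y, x) indices
```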
Quantitative analysis of spray droplet fields plays a pivotal role in various domains, including internal combustion engine combustion diagnostics, spray coating and corrosion prevention of equipment, and unmanned aerial vehicle-based agricultural pesticide dispersion. Precise measurement of the spatial distribution of spray droplet fields enables accurate control and orientation of spraying, thereby advancing the intelligent evolution of both industrial and agricultural sectors. Because of the substantial dimensions of spray fields, focused imaging of all droplets on a single camera imaging plane is unattainable during reconstruction. To address this challenge, this study employs a four-camera array configuration. Exploiting the characteristics of the defocusing blur of spray droplets, the cameras in the array capture images of the droplets from diverse perspectives, and these images are then merged through a refocusing process. This method enables accurate extraction of out-of-focus droplet centers. Employing three-dimensional cross-correlation analysis, the motion trajectories of the spray droplet field can be inferred with precision.
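As a rough illustration of the three-dimensional cross-correlation step, the sketch below estimates the dominant displacement of an interrogation volume between two reconstructed droplet intensity volumes; the volume size and the synthetic shift are assumptions for demonstration only.

```python
# Sketch: estimate the dominant displacement of an interrogation volume by
# 3D cross-correlation of two reconstructed droplet intensity volumes.
import numpy as np
from scipy.signal import fftconvolve

def displacement_3d(vol_t0, vol_t1):
    # Cross-correlate by convolving vol_t1 with the reversed vol_t0.
    corr = fftconvolve(vol_t1, vol_t0[::-1, ::-1, ::-1], mode="same")
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    center = np.array(corr.shape) // 2
    return np.array(peak) - center        # (dz, dy, dx) in voxels

# Synthetic check: a volume shifted by (2, 3, 1) voxels between frames.
rng = np.random.default_rng(0)
vol0 = rng.random((32, 32, 32))
vol1 = np.roll(vol0, shift=(2, 3, 1), axis=(0, 1, 2))
print(displacement_3d(vol0, vol1))        # expected approximately [2 3 1]
```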
Intelligent harvesting is one of the important criteria for measuring the development level of agricultural modernization. Coordinated operation of harvester and grain truck clusters can improve grain harvesting efficiency and reduce post-production losses during large-scale rice and wheat harvests. Overloading the grain truck causes serious grain scattering, while insufficient loading wastes carrying capacity, so dynamically monitoring the grain loading process is a pressing matter. In this paper, two cameras and a point laser were used to measure the status of grain in the truck in real time. The loadable capacity of the grain truck is obtained by reconstructing the edge of the truck carriage and locating the bottom of the carriage. During unloading from the harvester, the cone tip of the wheat pile is irradiated by the laser, and the height of the pile is obtained by measuring the location of the laser point, as sketched below. Insufficient loading or overloading can then be avoided by controlling the speed of the unloading port. The method was verified with a paper box. The results show that the dual-camera monitoring system can measure the volume of the grain truck in real time and provide timely feedback on the total amount of grain loaded, which effectively avoids grain loss caused by excessive loading.
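For illustration, the following sketch shows one way the laser-spot height could be computed from a single calibrated camera by intersecting the viewing ray of the detected spot with the known laser beam line; the intrinsics, camera pose, and laser geometry are made-up example values, not the calibration used in this work.

```python
# Sketch: recover the wheat-pile tip height from the laser spot seen by one
# calibrated camera. Camera pose, intrinsics and the laser line are
# illustrative example values.
import numpy as np

K = np.array([[1200.0, 0.0, 640.0],        # assumed intrinsics (pixels)
              [0.0, 1200.0, 360.0],
              [0.0, 0.0, 1.0]])
cam_center = np.array([0.0, -1.5, 1.2])    # camera position in carriage frame (m)
R = np.eye(3)                              # assume camera axes align with world

laser_p0 = np.array([0.5, 0.5, 2.0])       # a point on the laser beam
laser_dir = np.array([0.0, 0.0, -1.0])     # beam points straight down

def spot_height(pixel_uv, floor_z=0.0):
    # Back-project the detected laser spot into a viewing ray.
    ray = R.T @ np.linalg.solve(K, np.array([pixel_uv[0], pixel_uv[1], 1.0]))
    ray /= np.linalg.norm(ray)
    # Closest point between the viewing ray and the laser line (midpoint method).
    w0 = cam_center - laser_p0
    a, b, c = ray @ ray, ray @ laser_dir, laser_dir @ laser_dir
    d, e = ray @ w0, laser_dir @ w0
    denom = a * c - b * b
    s = (b * e - c * d) / denom            # parameter along the viewing ray
    t = (a * e - b * d) / denom            # parameter along the laser beam
    spot = 0.5 * ((cam_center + s * ray) + (laser_p0 + t * laser_dir))
    return spot[2] - floor_z               # pile height above the carriage bottom
```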
Background oriented schlieren (BOS) technology has become an efficient method for quantitative diagnostics of fluid fields in recent decades and has broad application prospects in flow field measurement. It not only offers high spatial and temporal resolution but can also be employed for quantitative measurement of the density gradient distributions of convective fields. In this paper, a new method for reconstructing density distributions is proposed. First, we obtained the point displacement image according to the basic principle of BOS. Second, we used the local basis function method to discretize the volume and obtain the coefficient matrix; an appropriate finite-support basis function must be chosen to ensure that the coefficient matrix is sparse. Finally, we obtained the density field using an algebraic iteration method. Numerical simulation experiments are presented to verify the method. The refractive index and density field distributions obtained from the simulation indicate that the CTBOS method can recover quantitative refractive index and density distributions of a flow field and has great application value under real conditions.
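As a sketch of the algebraic iteration stage, the code below applies a Kaczmarz-style ART sweep to a sparse coefficient matrix of the kind produced by finite-support basis functions; the synthetic matrix, problem size, and relaxation factor are illustrative assumptions.

```python
# Sketch of the algebraic reconstruction step: solve W x = b for the
# basis-function coefficients x, where W is the sparse coefficient matrix
# and b stacks the measured displacement data.
import numpy as np
from scipy.sparse import random as sparse_random

def art_solve(W, b, n_sweeps=50, relax=0.5):
    W = W.tocsr()
    x = np.zeros(W.shape[1])
    for _ in range(n_sweeps):
        for i in range(W.shape[0]):                 # one Kaczmarz row update
            row = W.getrow(i)
            norm2 = row.multiply(row).sum()
            if norm2 == 0:
                continue
            residual = b[i] - row.dot(x)[0]
            x += relax * residual / norm2 * row.toarray().ravel()
    return x

# Tiny synthetic check with a random sparse coefficient matrix.
rng = np.random.default_rng(1)
W = sparse_random(200, 80, density=0.05, random_state=1)
x_true = rng.random(80)
b = W @ x_true
print(np.linalg.norm(art_solve(W, b) - x_true))     # should be small
```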
The Fourier transform profilometry (FTP) method has great value in high-speed three-dimensional shape measurement. In FTP, the phase of the deformed fringe pattern containing the height information of the object is obtained through a Fourier transform, frequency-domain filtering, and an inverse Fourier transform. Frequency-domain filtering is an essential step. The filtering window is usually selected manually, which is inefficient and subjective: a window that is too large fails to remove useless information, while one that is too small loses the height information of the object. In this paper, an adaptive spectrum extraction method is used, and to make the process more convenient, a frequency-domain filtering method based on a convolutional neural network is presented. Convolutional neural networks can perform image recognition and image feature extraction; the proposed method uses one to identify the carrier frequency components carrying the details of the object in the spectrum image. This paper introduces the theoretical analysis and the training process of the convolutional neural network, and compares the adaptive spectrum extraction method with the convolutional neural network method. The results show that spectrum extraction based on a convolutional neural network is feasible.
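For reference, a minimal sketch of the FTP pipeline that the filtering window feeds into is given below: FFT, extraction of one carrier lobe with a filter mask, inverse FFT, and phase retrieval. Here a hand-placed rectangular mask stands in for the CNN-selected window, and the carrier frequency and mask width are assumed values.

```python
# Sketch of FTP phase extraction with a rectangular filtering window
# standing in for the adaptively / CNN-selected spectrum region.
import numpy as np

def ftp_phase(fringe_img, carrier_col, half_width=8):
    spec = np.fft.fftshift(np.fft.fft2(fringe_img))
    mask = np.zeros_like(spec)
    rows, cols = spec.shape
    c0 = cols // 2 + carrier_col                      # carrier lobe position
    mask[:, c0 - half_width:c0 + half_width] = 1.0    # filtering window
    analytic = np.fft.ifft2(np.fft.ifftshift(spec * mask))
    return np.angle(analytic)                          # wrapped phase map

# Synthetic fringe: 16-cycle carrier plus a Gaussian height-induced phase.
x = np.linspace(0, 1, 256)
X, Y = np.meshgrid(x, x)
height_phase = 2.0 * np.exp(-((X - 0.5)**2 + (Y - 0.5)**2) / 0.05)
fringe = 0.5 + 0.5 * np.cos(2 * np.pi * 16 * X + height_phase)
wrapped = ftp_phase(fringe, carrier_col=16)
```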
The Background Oriented Schlieren (BOS) technique can be applied to quantitative flow field diagnosis with simple experimental configurations. One of the crucial steps of BOS is the extraction of image displacement vectors. The cross-correlation algorithm widely used in PIV has been introduced for this purpose; however, it depends on interrogation windows, which usually results in low spatial resolution or unstable results. This paper proposes an improved BOS approach based on a three-step phase-shifting algorithm that uses a colored fringe pattern as the background. The RGB coded carrier-fringe image is composed of three phase-shifted images, and displacement vectors are extracted by comparing the phases of corresponding points in the three separated channels. This technique avoids the problem of selecting an interrogation window, and only one image is required. An experiment measuring the flow of a hot air gun was carried out using the proposed method. The results demonstrate that this technique can be used for quantitative flow field measurement.
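A minimal sketch of the three-step phase-shifting evaluation is given below, assuming the R, G, and B channels carry fringes shifted by 0, 2π/3, and 4π/3 and that the fringe period in pixels is known; both are assumptions for illustration.

```python
# Sketch of the three-step phase-shifting evaluation on an RGB coded
# carrier-fringe background; displacement follows from the wrapped phase
# difference between a reference image and the measurement image.
import numpy as np

def three_step_phase(rgb):
    # Channels assumed to carry phase shifts of 0, 2*pi/3, 4*pi/3.
    i1, i2, i3 = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return np.arctan2(np.sqrt(3.0) * (i3 - i2), 2.0 * i1 - i2 - i3)

def displacement(rgb_ref, rgb_meas, fringe_period_px=16.0):
    dphi = three_step_phase(rgb_meas) - three_step_phase(rgb_ref)
    dphi = np.angle(np.exp(1j * dphi))                 # wrap into (-pi, pi]
    return dphi * fringe_period_px / (2.0 * np.pi)     # pixels along fringe normal
```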
Synthetic aperture particle image velocimetry (SAPIV) is a flow field diagnostic technique that provides instantaneous velocimetry information non-intrusively. In SAPIV, particle scattering images are captured by different cameras in a camera array configuration. To acquire refocused images, the captured images are remapped and accumulated on pre-designed remapping planes. In the refocused images, particles that lie in the remapped plane are aligned and appear sharp, whereas particles off this plane are blurred due to parallax between the cameras. During remapping, the captured images are back-projected onto remapped planes at different depths z within the volume, and the projected images from different cameras, called remapped images, are merged to generate refocused images at each depth. We developed a remapping method based on weight coefficients to improve the quality of the reconstructed velocity field. The images captured by the cameras are remapped onto the different remapped planes using homography matrices. The corresponding pixels of the remapped images in the same plane are first added and averaged; they are then multiplied, and the resulting intensity values act as weight coefficients for the intensity in the added refocused image stacks. Unfocused speckle is thereby suppressed to a great degree while focused particles are retained. A 16-camera array and a vortex ring field at two adjacent frames were simulated to evaluate the performance of the proposed method; in the simulation, the vortex ring can be clearly seen. An experimental system consisting of 16 cameras was also used to demonstrate the capability of the improved remapping method. The results show that the proposed method can effectively suppress unfocused speckle and reconstruct the velocity field of the flow.
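The following sketch outlines the weighted refocusing of a single remapped plane under the assumption that the homography from each camera to that plane is already known; the normalization of the weight map is an implementation choice for illustration, not prescribed above.

```python
# Sketch of the weighted refocusing step for one remapped plane: warp each
# camera image with its homography, form the additive (averaged) refocused
# image, and weight it with the pixel-wise product of the remapped images
# so that unfocused speckle is suppressed. Homographies are assumed given.
import numpy as np
import cv2

def weighted_refocus(images, homographies, plane_shape):
    h, w = plane_shape
    remapped = [cv2.warpPerspective(img.astype(np.float32), H, (w, h))
                for img, H in zip(images, homographies)]
    added = np.mean(remapped, axis=0)             # additive refocused image
    weight = np.ones((h, w), dtype=np.float32)
    for r in remapped:
        weight *= r                               # multiplicative weight map
    weight /= weight.max() + 1e-12                # normalize to [0, 1]
    return added * weight                         # weighted refocused image
```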
In 3D particle image velocimetry (PIV), when a laser propagates through a dense tracer particle field, the scattered light intensity varies in different directions. In this article, we built five particle fields of different densities, each containing one vortex ring. The diameter of the vortex ring is 2 mm, and the particles are dense inside the ring and sparse outside it. Based on Mie scattering theory and the Monte Carlo method, we compute the laser intensity variation along the direction of the incident light as the beam propagates through each particle volume, and obtain the relationship between laser intensity, particle density, and propagation distance. The variation of laser intensity can also be viewed from different directions. We also discuss the influence of this intensity variation on the quality of integrated imaging particle image velocimetry (PIV) images. To deal with this variation, we propose a light intensity equalization compensation method, which reduces the influence of attenuation when the laser passes through dense particle regions. In the simulation, a camera array is set up to detect the forward and backward directions of the laser beam in the region, and the light intensity is recorded by different pixels; light intensity attenuation at different positions is considered, and all cameras are treated as pinhole models. The results show that forward scattering and back scattering have significant effects on integrated imaging PIV. The compensation method is used in experiments to preprocess particle images.
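As a simplified illustration of the compensation idea, the sketch below models attenuation along the propagation axis with a Beer-Lambert style decay driven by the local particle density and divides the image by the accumulated transmission; the extinction coefficient is an assumed constant rather than a Mie-computed value, and the full Monte Carlo treatment is omitted.

```python
# Sketch of intensity equalization: accumulate optical depth along the
# laser propagation axis from the particle density and apply the inverse
# transmission as a per-pixel gain. The extinction coefficient is an
# illustrative assumption, not a Mie scattering result.
import numpy as np

def compensation_map(density_slice, extinction=0.02, axis=1):
    # Accumulated optical depth along the propagation axis of this slice.
    optical_depth = np.cumsum(density_slice, axis=axis) * extinction
    transmission = np.exp(-optical_depth)
    return 1.0 / np.clip(transmission, 1e-3, None)   # gain to apply per pixel

def equalize(image, density_slice):
    return image * compensation_map(density_slice)
```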
Tomographic particle image velocimetry (Tomo-PIV) is a newly developed technique for three-component, three-dimensional (3C-3D) velocity measurement of flow fields based on optical tomographic reconstruction, and it has received extensive attention from related industries. A three-dimensional light source illuminating the tracer particles of the flow field is critical for Tomo-PIV: it determines the size of the measurement volume and the applicable range, and it strongly influences image quality. In this work, we propose a rectangular light amplification system using a Powell lens, prisms, and two reflectors. Given the system parameters, the system can be optimized based on the theoretical model. The rectangular light amplification system is verified experimentally by measuring the cross-section size of the illuminated light source: a rectangular three-dimensional light source with a 60 mm × 25 mm cross section is obtained. The experiments demonstrate the feasibility of the proposed system.
Image deblurring is a fundamental problem in image processing. Conventional methods often treat the degraded image as a whole, ignoring that an image contains two different components: cartoon and texture. Recently, total variation (TV) based image decomposition methods have been introduced into the image deblurring problem, but they often suffer from the well-known staircasing effects of TV. In this paper, a new cartoon-texture based sparsity regularization method is proposed for non-blind image deblurring. Based on image decomposition, it regularizes the cartoon with a combined term consisting of a framelet-domain sparse prior and a quadratic regularization, and the texture with sparsity in the discrete cosine transform domain. An adaptive alternating split Bregman iteration is then proposed to solve the new multi-term sparsity regularization model. Experimental results demonstrate that the method recovers both the cartoon and texture of images simultaneously, and therefore improves the visual quality, PSNR, and SSIM of the deblurred image more efficiently than TV and undecomposed methods.
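To illustrate one building block of such a scheme, the sketch below shows the shrinkage sub-step that a split Bregman iteration alternates with its quadratic updates, applied here to the texture component in the DCT domain; the threshold value is an illustrative assumption, and the cartoon component would analogously be shrunk in a framelet domain.

```python
# Sketch of the shrinkage sub-step used inside a split Bregman iteration:
# soft-threshold the texture component in the DCT domain to enforce its
# sparsity prior. The threshold is an illustrative value.
import numpy as np
from scipy.fft import dctn, idctn

def soft_threshold(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def dct_shrink(texture, tau=0.05):
    coeffs = dctn(texture, norm="ortho")          # sparse transform of texture
    return idctn(soft_threshold(coeffs, tau), norm="ortho")
```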