Moving target detection is a prerequisite for road monitoring, target tracking, instance segmentation, and other tasks. UAV video is easily affected by unavoidable factors during acquisition: wind interference and the platform's own motion during shooting cause background changes, target scale changes, and intermittent motion, making moving target detection more challenging. To address the poor accuracy of existing UAV video moving target detection methods based on deep optical flow networks, and the limits that the complex and diverse characteristics of UAV video data place on detection performance in complex scenes, this paper proposes a new optical-flow-network-based method for UAV video moving target detection. First, a convolutional structural reparameterization method is used in the encoder to further fuse detail and semantic information and improve the feature representation of video frames. Second, the self-attention global motion feature enhancement module proposed in this paper is introduced to improve the network's ability to extract global information and to better combine contextual information for more accurate optical flow estimation. Finally, optical flow threshold segmentation is applied to obtain moving target detection results adapted to different scenes. Three sets of low-altitude UAV videos from different scenes in the public AU-AIR2019 dataset are selected for experiments. The results show that the proposed method achieves better moving target detection in single-target, multi-target, and occluded-target scenes. On the public FlyingChairs dataset it outperforms the current mainstream optical flow networks FlowNet, PWC-Net, HD3, and GMA on the end-point error (EPE) metric, and improves on this paper's baseline network, RAFT, by 0.10 EPE, effectively improving the accuracy of UAV video moving target detection with deep optical flow networks.
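The abstract's two quantitative building blocks, the EPE metric used to compare flow networks and the optical-flow threshold segmentation used to extract moving targets, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function names and the scene-dependent threshold are assumptions for this example.

```python
import numpy as np

def epe(flow_pred, flow_gt):
    """Average end-point error (EPE): the mean Euclidean distance
    between predicted and ground-truth flow vectors, taken over
    all pixels. Inputs have shape (H, W, 2)."""
    return float(np.mean(np.linalg.norm(flow_pred - flow_gt, axis=-1)))

def motion_mask(flow, threshold):
    """Optical flow threshold segmentation: pixels whose flow
    magnitude exceeds `threshold` are marked as moving. The
    threshold is chosen per scene, as the abstract describes."""
    magnitude = np.linalg.norm(flow, axis=-1)
    return magnitude > threshold
```

For example, a predicted field whose every vector is off by (3, 4) from the ground truth has an EPE of exactly 5, and thresholding that field's magnitude at 1.0 marks every pixel as moving.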
Motion blur caused by camera shake during exposure is a common form of image degradation. Image motion deblurring is an ill-posed problem, so regularization with an image prior and/or a point spread function (PSF) prior is used to estimate the PSF and/or recover the original image. In this paper, we exploit an image edge prior to estimate the PSF based on a useful-edge selection rule. During PSF estimation we also adopt the L1 norm of the PSF to ensure its sparsity and Tikhonov regularization to ensure its smoothness. A Laplacian image prior is then adopted to restore the latent image. Experiments show that the proposed algorithm outperforms competing algorithms.
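One ingredient of the pipeline above, Tikhonov-regularized recovery given a known blur kernel, has a simple closed form in the frequency domain. The sketch below shows only that non-blind step with an L2 penalty on the latent image, not the paper's full method (edge-based PSF estimation, the L1 sparsity term, and the Laplacian image prior are omitted); the function name and the regularization weight `lam` are assumptions for this example.

```python
import numpy as np

def tikhonov_deconv(blurred, psf, lam=1e-2):
    """Non-blind deconvolution with Tikhonov (L2) regularization,
    solving  x = argmin ||k * x - b||^2 + lam * ||x||^2  under a
    circular-convolution blur model. In the Fourier domain the
    minimizer is  X = conj(K) * B / (|K|^2 + lam)."""
    K = np.fft.fft2(psf, s=blurred.shape)   # kernel spectrum, zero-padded
    B = np.fft.fft2(blurred)                # blurred-image spectrum
    X = np.conj(K) * B / (np.abs(K) ** 2 + lam)
    return np.real(np.fft.ifft2(X))
```

With an ideal (delta) PSF and a tiny `lam`, the routine returns the input essentially unchanged, which is a quick sanity check; larger `lam` trades ringing suppression against sharpness.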