Gait recognition is a biometric technology with important applications in identity verification and crime prevention. However, developing robust and accurate algorithms remains a challenge due to factors such as environmental interference and clothing changes. Current gait recognition methods fall into template-based and sequence-based approaches, both of which have limitations: template-based methods are simple but lose temporal information, while sequence-based methods preserve temporal information but are sensitive to unnecessary sequential constraints. Furthermore, these methods are not robust to various sources of noise and interference and often require a large number of gait frames for accurate recognition. Therefore, new approaches are needed to improve the accuracy and efficiency of gait recognition. Our proposed method, named Adaptive GaitSet, addresses these limitations by introducing an attention mechanism that allows the model to adaptively focus on important gait features. This method not only improves the accuracy and efficiency of gait recognition but also enhances the robustness of the model to variations in walking conditions and to noise and interference. Experimental results show that the proposed method achieves the best performance on the benchmark dataset.
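A minimal sketch of the adaptive-attention idea described above: a lightweight module that scores each frame of a silhouette sequence and pools the set with those scores, so informative frames dominate. The module design, layer sizes, and feature dimensions here are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class FrameAttention(nn.Module):
    """Scores each per-frame feature and pools the set with those scores."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(dim, dim // 4),
            nn.ReLU(inplace=True),
            nn.Linear(dim // 4, 1),
        )

    def forward(self, x):                          # x: (batch, frames, dim)
        w = torch.softmax(self.score(x), dim=1)    # per-frame attention weights
        return (w * x).sum(dim=1)                  # attention-weighted set pooling

feats = torch.randn(2, 30, 256)      # 2 sequences, 30 frames, 256-d frame features
pooled = FrameAttention(256)(feats)  # -> (2, 256)
```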
Object detection models are at the core of many computer vision tasks and have shown excellent performance on public datasets, but they also inherit the vulnerability of neural networks to adversarial example attacks. Adversarial patches are a specific form of adversarial example that, as previous studies have shown, can make only specific objects (such as pedestrians and traffic signs), not all objects, disappear. In addition, a patch must be placed on every object to deceive the detector. To solve these problems, we propose a location-independent adversarial patch generation method that can attack all objects within the detection range with a single patch. When attacking the confidence loss of the object detector, we assign a greater weight to the foreground region, which makes its confidence decrease faster and effectively guides the convergence direction of the adversarial patch during training. Furthermore, we paste the patches at random locations on the images so that they become less sensitive to location during patch training. Experimental results indicate that the patches generated by the proposed method are not restricted to specific areas of the image and reduce the detector's recall to as low as 29.5%.
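A hypothetical sketch of the two training ideas in this abstract: (1) a confidence loss that weights foreground cells more heavily so object confidences drop faster, and (2) pasting the patch at a random location each iteration so the attack stays location-independent. The weight value and helper names are assumptions for illustration only.

```python
import torch

def weighted_confidence_loss(obj_conf, fg_mask, fg_weight=3.0):
    """obj_conf: per-cell objectness in [0, 1]; fg_mask: 1 where an object lies."""
    w = 1.0 + (fg_weight - 1.0) * fg_mask      # foreground cells get a larger weight
    return (w * obj_conf).mean()               # minimizing drives confidence down

def paste_random(image, patch):
    """Overwrite a random region of `image` (C, H, W) with `patch` (C, h, w)."""
    _, H, W = image.shape
    _, h, w = patch.shape
    y = torch.randint(0, H - h + 1, (1,)).item()
    x = torch.randint(0, W - w + 1, (1,)).item()
    out = image.clone()
    out[:, y:y + h, x:x + w] = patch
    return out
```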
With the development of adversarial attacks, the performance of deep-learning-based object detection is under threat. When adversarial examples are introduced into the detection task, the detector suffers from poor detection performance and produces a large number of false detections. To handle this problem, we propose a defense method that combines bilateral filtering with a denoising autoencoder. Taking the You Only Look Once (YOLO) v4 detection model as the research target, the proposed method proceeds as follows. First, it performs weighted averaging in the spatial domain and the pixel-range domain, which reduces the perturbations in the image while retaining important edge texture information. Then, a three-layer denoising autoencoder is designed, and a new optimization algorithm is proposed to minimize the distance between its input and output. Finally, experiments show that the proposed method has a better defense effect than existing defense methods. When facing a projected gradient descent-based bounding-box disappearance attack on object detection, our defense method improves the detection true-box rate indicator to 83.04% on the Visual Object Classes (VOC) dataset and 72.20% on the Common Objects in Context (COCO) dataset. The number of correctly detected bounding boxes is 88.09% and 86.09% of the original on the PASCAL VOC and Microsoft COCO datasets, respectively.
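A minimal sketch of the two-stage defense: bilateral filtering to smooth adversarial perturbations while keeping edges, followed by a small denoising autoencoder. The filter parameters and layer sizes are assumptions; the paper's exact three-layer design and optimization algorithm are not reproduced here.

```python
import cv2
import torch
import torch.nn as nn

def bilateral_denoise(img_bgr):
    # Weighted averaging over the spatial and pixel-range domains:
    # smooths perturbations while preserving edge texture.
    return cv2.bilateralFilter(img_bgr, d=9, sigmaColor=75, sigmaSpace=75)

class DenoisingAE(nn.Module):
    """Small convolutional denoising autoencoder (illustrative sizes)."""
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.decode = nn.Conv2d(64, 3, 3, padding=1)

    def forward(self, x):
        return torch.sigmoid(self.decode(self.encode(x)))

# Training objective: minimize the input-output distance,
# e.g. nn.MSELoss()(model(noisy), clean).
```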
With the development of artificial intelligence, deep-learning-based object detection models have achieved strong results, and detection has moved from traditional hand-crafted feature extraction to neural-network feature extraction. The YOLO series is representative of the classic single-stage detection models. However, continued research has shown that detection models based on deep neural networks inherit the weaknesses of neural networks and are vulnerable to adversarial attacks. This paper proposes an optimized attack algorithm based on PGD that realizes an adversarial attack on the YOLOv4 object detection model. Experiments show that the proposed attack reduces the mAP indicator from 87.61% to 0.12% on the VOC dataset and from 69.17% to 0.37% on the COCO dataset. Compared with the original PGD, it improves the PSNR and SSIM evaluation indicators, and the generated adversarial examples are of higher quality.
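For reference, a sketch of a standard PGD loop adapted to a detection confidence attack, assuming a `detector` callable that returns per-box objectness scores; the paper's specific optimizations over vanilla PGD are not public in this abstract, so this shows only the baseline L-infinity iteration.

```python
import torch

def pgd_attack(detector, image, eps=8 / 255, alpha=2 / 255, steps=40):
    """Standard L-inf PGD driving all detection confidences toward zero."""
    x_adv = image.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        conf = detector(x_adv)                 # hypothetical: per-box confidences
        loss = conf.sum()                      # total confidence to be minimized
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() - alpha * grad.sign()      # descend on confidence
        x_adv = image + (x_adv - image).clamp(-eps, eps)  # project into eps-ball
        x_adv = x_adv.clamp(0, 1)              # keep a valid image
    return x_adv
```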
Maliciously forged images generated by image translation networks pose significant threats to personal privacy and national security. An emerging countermeasure is to prevent image forgery models from tampering with user images through adversarial attacks. Conventional adversarial generation algorithms use random noise as the starting point, which makes the final adversarial output similar to the original output and therefore fails to prevent image tampering. We apply an output-correlated initialization to improve the adversarial attack algorithm for image translation networks and to improve the visual effect of the attacks. Moreover, we compare multiple loss functions experimentally and select the best-performing one as the adversarial loss for attacking the image translation network. The selected initialization makes the search for adversarial examples more comprehensive and their generation results more diverse. An analysis of the visual effects of the attack reveals how the proposed methods affect the forgery results of different image translation frameworks and generate more chaotic images. A comparison across multiple indicators demonstrates that the proposed method achieves a high attack success rate and enlarges the image distance between the adversarial output and the original output, thereby improving attack efficiency, preventing malicious tampering, and protecting user images.
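One plausible reading of output-correlated initialization, sketched below: instead of starting the perturbation from random noise, seed it from a signal derived from the translation network's own output, then maximize the distance between the forged outputs of the clean and adversarial inputs. `G`, the seeding rule, and the MSE adversarial loss are all assumptions for illustration (the abstract states the loss was chosen empirically), and `G(x)` is assumed to match `x`'s shape.

```python
import torch
import torch.nn.functional as F

def attack_translation(G, x, eps=0.05, alpha=0.01, steps=50):
    with torch.no_grad():
        y_clean = G(x)
        # Output-correlated start: seed delta from G's own output, not noise.
        delta = eps * (y_clean - x).sign()
    for _ in range(steps):
        delta.requires_grad_(True)
        # Maximize the distance between adversarial and original outputs.
        loss = F.mse_loss(G(x + delta), y_clean)
        grad = torch.autograd.grad(loss, delta)[0]
        delta = (delta.detach() + alpha * grad.sign()).clamp(-eps, eps)
    return (x + delta).clamp(0, 1)
```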
Generative adversarial networks (GANs) have recently produced highly realistic synthesized images. Current GAN-synthesized face detection methods, however, yield false predictions on real faces with pose angles or occlusion. This paper proposes a GAN-synthesized face detection method based on the Deep Alignment Network (DAN), which improves prediction accuracy on real faces by locating facial landmark points more precisely. Our method first uses DAN to obtain the facial landmark locations of real and synthesized faces; the landmark points are then converted into feature vectors by principal component analysis (PCA); finally, the feature vectors are fed into a support vector machine (SVM) classifier for training. Experimental results show that our method outperforms other methods on faces with angles or occlusion.
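A minimal sketch of the classification stage of this pipeline: landmark coordinates (as DAN would produce) are reduced with PCA and fed to an SVM. The DAN landmark extraction itself is omitted; `X` is synthetic stand-in data assuming 68 (x, y) landmarks flattened to 136 dimensions, and the component count is an assumption.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 136))      # 200 faces x 136 landmark coordinates
y = rng.integers(0, 2, size=200)     # 0 = real face, 1 = GAN-synthesized face

# PCA converts landmark points into compact feature vectors; SVC classifies them.
clf = make_pipeline(PCA(n_components=32), SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.score(X, y))               # training accuracy on the toy data
```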