The purpose of person re-identification (ReID) is to retrieve a person of interest from a set of images captured by multiple cameras. In the person ReID task, global-local features and attention mechanisms have proven effective for constructing robust pedestrian features. However, these methods focus only on extracting highly discriminative pedestrian features while ignoring potential features, which are equally valuable and can play an important role in person ReID. To extract these potential features, which are hidden by salient features, we propose a person ReID network based on weight-driven saliency hierarchical utilization. Three improvements are introduced to extract more comprehensive and diverse pedestrian information. First, we use a non-local module to enhance the feature extraction ability of the model, applying a saliency enhancement and suppression operation within it to mine potential features. Second, we employ a new multi-stage global feature fusion module to increase the diversity of features. Third, we use a multi-branch attention module to extract finer-grained part features and further improve model performance. Extensive experiments show that our model achieves excellent performance on the Market-1501, DukeMTMC-reID, and MSMT17 person ReID datasets.
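The abstract does not specify how the non-local module or the saliency enhancement/suppression operation is implemented. As a rough, hedged illustration of the underlying non-local operation that the first improvement builds on (each spatial position aggregates features from all positions, weighted by pairwise similarity), here is a minimal NumPy sketch; all function names and the projection weights are hypothetical and not taken from the paper:

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax over the given axis.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def non_local_block(x, w_theta, w_phi, w_g, w_out):
    """Minimal non-local (self-attention) operation over N spatial
    positions with C channels. Illustrative only: the paper's module
    additionally enhances/suppresses salient responses, which is not
    reproduced here."""
    theta = x @ w_theta                     # queries, shape (N, C')
    phi = x @ w_phi                         # keys,    shape (N, C')
    g = x @ w_g                             # values,  shape (N, C')
    attn = softmax(theta @ phi.T / np.sqrt(theta.shape[1]))  # (N, N)
    y = attn @ g                            # features aggregated over all positions
    return x + y @ w_out                    # residual connection back to (N, C)

# Toy usage with random projections (shapes only, not trained weights).
rng = np.random.default_rng(0)
N, C, Cp = 6, 8, 4
x = rng.standard_normal((N, C))
w_theta, w_phi, w_g = (rng.standard_normal((C, Cp)) for _ in range(3))
w_out = rng.standard_normal((Cp, C))
out = non_local_block(x, w_theta, w_phi, w_g, w_out)
print(out.shape)
```

The residual connection lets the block refine, rather than replace, the backbone features; a saliency-suppression variant could, for example, damp the largest attention responses so that less salient regions contribute more.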
Keywords: performance modeling, feature extraction, data modeling, mining, visualization, cameras, visual process modeling