Unsupervised embedding learning aims to learn highly discriminative image features without using class labels. Existing instance-wise softmax embedding methods treat each instance as a distinct class and explore the underlying instance-to-instance visual similarity relationships. However, overfitting to instance features leads to insufficient discriminability and poor generalizability of the network. To tackle this issue, we introduce an instance-wise softmax embedding with cosine margin (SEwCM), which, to the best of our knowledge, is the first to add a margin to the unsupervised instance-wise softmax classification function from the cosine perspective. The cosine margin separates the classification decision boundaries between instances. SEwCM explicitly optimizes the feature mapping of the network by maximizing the cosine-similarity margin between instances, thus learning a highly discriminative model. Extensive experiments on three fine-grained image datasets demonstrate the effectiveness of our proposed method over existing methods.
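The abstract does not give SEwCM's exact formulation, but the mechanism it describes matches a CosFace-style additive cosine margin applied to an instance-wise softmax. Below is a minimal PyTorch sketch under that assumption; `memory_bank`, `scale`, and `margin` are illustrative names, with each instance's stored feature acting as its own class weight.

```python
import torch
import torch.nn.functional as F

def instance_softmax_cosine_margin(features, indices, memory_bank,
                                   scale=16.0, margin=0.2):
    """Sketch of an instance-wise softmax loss with a cosine margin.

    features    : (B, D) embeddings of the current batch.
    indices     : (B,) instance ids, i.e. the "class" of each sample.
    memory_bank : (N, D) stored features of all N instances, acting as
                  the per-instance classifier weights (an assumption).
    """
    features = F.normalize(features, dim=1)
    weights = F.normalize(memory_bank, dim=1)

    # Cosine similarity of each sample to every instance "class".
    cos = features @ weights.t()                      # (B, N)

    # Subtract the margin only on the sample's own instance, which
    # pushes the decision boundaries between instances apart.
    one_hot = F.one_hot(indices, num_classes=weights.size(0)).float()
    logits = scale * (cos - margin * one_hot)

    return F.cross_entropy(logits, indices)
```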
Sample specificity learning treats every single sample as a separate class and mines the underlying class-to-class visual similarity relationships, thus learning discriminative feature embeddings without using category labels. We introduce a correlational instance feature embedding approach to improve the representation ability of deep neural networks. It exploits the self-correlation and cross-correlation of instances in each training batch by learning a feature embedding with intra-instance variation and inter-instance interpolation, resulting in stronger discriminability and better generalizability. Extensive experiments on several benchmarks show the performance advantages of our proposed method over existing methods.
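The abstract leaves the self- and cross-correlation terms unspecified, so the following is only a hedged sketch: it assumes intra-instance variation comes from two augmented views of each image and inter-instance interpolation is a mixup-style blend of embeddings within the batch. All names and hyperparameters (`emb_a`, `emb_b`, `alpha`, `temperature`) are illustrative, not the paper's.

```python
import torch
import torch.nn.functional as F

def correlational_batch_loss(emb_a, emb_b, alpha=0.5, temperature=0.1):
    """Hedged sketch of a batch-wise correlational objective.

    emb_a, emb_b : (B, D) embeddings of two augmented views of the same
                   B images (intra-instance variation, assumed).
    """
    emb_a = F.normalize(emb_a, dim=1)
    emb_b = F.normalize(emb_b, dim=1)
    batch = emb_a.size(0)
    targets = torch.arange(batch, device=emb_a.device)

    # Self-correlation: two views of the same instance should match
    # more closely than views of any other instance in the batch.
    logits = emb_a @ emb_b.t() / temperature          # (B, B)
    intra = F.cross_entropy(logits, targets)

    # Cross-correlation: interpolate each instance with a shuffled
    # partner (mixup-style) and relate the mixture to both sources.
    perm = torch.randperm(batch, device=emb_a.device)
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    mixed = F.normalize(lam * emb_a + (1 - lam) * emb_a[perm], dim=1)
    mix_logits = mixed @ emb_b.t() / temperature
    inter = lam * F.cross_entropy(mix_logits, targets) \
        + (1 - lam) * F.cross_entropy(mix_logits, perm)

    return intra + inter
```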
The core challenge of few-shot learning is severe overfitting on new tasks because of the scarcity of labeled data. Self-supervised learning can mine powerful supervision signals from the data itself to enhance the generalization performance of a model, so a rotation self-supervision module has been directly integrated into few-shot learning networks to alleviate the overfitting problem. However, because the per-task losses differ in magnitude or convergence speed, the overall model can be alternately dominated or biased by one task during training, which hurts performance on the main task. We therefore design a network architecture with auxiliary task learning speed equalization (LSENet) for few-shot learning. The overall model improves generalization through the auxiliary task, and a speed equalization module constrains the decline rates of the two task losses to achieve balanced learning. Our method alleviates the overfitting problem of few-shot learning and greatly improves classification accuracy. Extensive experiments on benchmark datasets demonstrate the effectiveness of our method.
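How the speed-equalization module constrains the decline rates is not specified here; the sketch below implements one plausible reading, reweighting the two losses by their recent rate of decrease in the spirit of dynamic weight averaging. The class name and hyperparameters are assumptions, not LSENet's actual module.

```python
import torch

class SpeedEqualizer:
    """Hedged sketch: balance a main and an auxiliary loss by their
    decline rates so that neither task dominates training."""

    def __init__(self, temperature=2.0):
        self.temperature = temperature
        self.prev = None  # previous-step loss values for both tasks

    def combine(self, main_loss, aux_loss):
        current = torch.stack([main_loss.detach(), aux_loss.detach()])
        if self.prev is None:
            weights = torch.ones(2, device=current.device)
        else:
            # Ratio > 1 means the loss declined slowly (or rose):
            # give that task more weight on this step.
            ratios = current / self.prev.clamp(min=1e-8)
            weights = 2.0 * torch.softmax(ratios / self.temperature, dim=0)
        self.prev = current
        return weights[0] * main_loss + weights[1] * aux_loss
```

In use, one would call something like `total = equalizer.combine(few_shot_loss, rotation_loss)` each step and backpropagate `total`; the weights are detached, so they only rescale the two gradients rather than being optimized themselves.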
Can we automatically learn discriminative embedding features from images when human-annotated labels are absent? Unsupervised embedding learning remains a significant and open challenge in the image and vision community. A joint online deep embedded clustering and hard sample mining framework is proposed to improve the representation ability of embedding learning. In addition, to enhance the discriminability of feature representations, a structure-level pair-based loss is introduced to take full advantage of the structural correlations among all mined hard samples. Quantitative results of extensive experiments on three benchmarks show that our proposed method outperforms existing state-of-the-art methods.
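The mining rule and the exact form of the structure-level pair-based loss are not detailed in the abstract; this hedged sketch uses online cluster assignments as pseudo-labels and a simple margin-based loss over all positive and negative pairs among the mined hard samples. Function name, arguments, and the margin value are illustrative.

```python
import torch
import torch.nn.functional as F

def structured_pair_loss(embeddings, pseudo_labels, margin=0.5):
    """Hedged sketch of a pair-based loss over mined hard samples.

    embeddings    : (M, D) features of the mined hard samples.
    pseudo_labels : (M,) cluster ids from online deep embedded clustering,
                    used here as pseudo-labels (an assumption).
    All positive and negative pairs in the mined set are used jointly.
    """
    emb = F.normalize(embeddings, dim=1)
    sim = emb @ emb.t()                                    # (M, M)
    same = pseudo_labels.unsqueeze(0) == pseudo_labels.unsqueeze(1)
    eye = torch.eye(len(emb), dtype=torch.bool, device=emb.device)

    pos = same & ~eye   # same cluster, different sample
    neg = ~same         # different cluster

    # Pull positive pairs together, push negatives beyond the margin.
    pos_loss = (1.0 - sim[pos]).mean() if pos.any() else sim.new_zeros(())
    neg_loss = F.relu(sim[neg] - margin).mean() if neg.any() else sim.new_zeros(())
    return pos_loss + neg_loss
```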