Learning speed equalization network for few-shot learning
Cailing Wang, Chen Zhong, Guoping Jiang
Abstract

The core challenge of few-shot learning is severe overfitting on new tasks caused by the scarcity of labeled data. Self-supervised learning can mine powerful supervision signals from the data itself to enhance a model's generalization, and so a rotation self-supervision module has been integrated directly into few-shot learning networks to alleviate overfitting. However, because the loss functions of the tasks differ in magnitude or convergence speed, the overall model can be alternately dominated or biased by one task during training, which harms the performance of the main task. We therefore design a network architecture with auxiliary-task learning speed equalization (LSENet) for few-shot learning. The overall model improves generalization through an auxiliary task, and a speed equalization module constrains the decline rates of the two task losses to achieve balanced learning. Our method alleviates the overfitting problem of few-shot learning and substantially improves classification accuracy. Extensive experiments on benchmark datasets demonstrate the effectiveness of our method.
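To make the speed-equalization idea concrete, the sketch below shows one plausible way to balance two task losses by their recent decline rates: a task whose loss is falling quickly receives a smaller weight, so neither the main few-shot loss nor the auxiliary rotation loss dominates training. The weighting rule (a softmax over loss-decline ratios, similar in spirit to dynamic weight averaging) and all names here are illustrative assumptions, not the authors' exact LSENet formulation.

```python
import math

def speed_equalized_weights(prev_losses, curr_losses, temperature=2.0):
    """Illustrative weighting rule (assumed, not the paper's exact method).

    For each task, compute the decline ratio curr/prev: a ratio near 1 means
    slow progress, a ratio well below 1 means the loss is dropping fast.
    A softmax over these ratios then gives slow tasks a larger weight and
    fast tasks a smaller one, equalizing effective learning speed.
    Weights are scaled so they sum to the number of tasks.
    """
    ratios = [c / p for p, c in zip(prev_losses, curr_losses)]
    exps = [math.exp(r / temperature) for r in ratios]
    total = sum(exps)
    k = len(ratios)
    return [k * e / total for e in exps]

def combined_loss(losses, weights):
    """Weighted sum of per-task losses used as the overall training loss."""
    return sum(w * l for w, l in zip(weights, losses))
```

For example, if the main-task loss barely moved (1.0 to 0.95) while the auxiliary rotation loss dropped sharply (1.0 to 0.5), the main task receives the larger weight, steering the shared backbone back toward the main objective.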

© 2022 SPIE and IS&T 1017-9909/2022/$28.00
Cailing Wang, Chen Zhong, and Guoping Jiang "Learning speed equalization network for few-shot learning," Journal of Electronic Imaging 31(1), 013004 (11 January 2022). https://doi.org/10.1117/1.JEI.31.1.013004
Received: 21 July 2021; Accepted: 14 December 2021; Published: 11 January 2022
KEYWORDS
Convolution, Data modeling, Neural networks, Networks, Network architectures, Performance modeling, Image processing
