The core challenge of few-shot learning is severe overfitting on new tasks caused by the scarcity of labeled data. Self-supervised learning can mine strong supervision signals from the data itself to improve a model's generalization. Accordingly, rotation self-supervision modules have been integrated directly into few-shot learning networks to alleviate overfitting. However, because the loss functions of the two tasks differ in magnitude or convergence speed, the overall model can be alternately dominated or biased by one task during training, which harms performance on the main task. We therefore design a network architecture with auxiliary-task learning speed equalization (LSENet) for few-shot learning. The overall model improves generalization through the auxiliary task, and a speed equalization module constrains the decline rates of the two losses so that the tasks are learned in balance. Our method alleviates the overfitting problem of few-shot learning and substantially improves classification accuracy. Extensive experiments on benchmark datasets demonstrate the effectiveness of our method.
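The abstract does not give LSENet's exact equations, so the following is only a minimal sketch of the kind of loss-speed balancing it describes, assuming a PyTorch setup and a ratio-based reweighting similar in spirit to Dynamic Weight Averaging; the class name `SpeedEqualizedLoss`, the hyperparameter `alpha`, and the specific weighting rule are all hypothetical illustrations, not the paper's method.

```python
import torch
import torch.nn as nn


class SpeedEqualizedLoss(nn.Module):
    """Hypothetical two-task loss that rescales task weights so the main
    (few-shot classification) loss and the auxiliary (rotation prediction)
    loss decline at comparable relative rates."""

    def __init__(self, alpha: float = 1.0):
        super().__init__()
        self.alpha = alpha   # sensitivity of the rebalancing (assumed knob)
        self.initial = None  # loss levels recorded at the first step

    def forward(self, loss_main: torch.Tensor, loss_aux: torch.Tensor) -> torch.Tensor:
        if self.initial is None:
            # Remember the starting loss levels to measure decline rates against.
            self.initial = (loss_main.detach(), loss_aux.detach())
        r_main = loss_main.detach() / self.initial[0]  # remaining fraction of main loss
        r_aux = loss_aux.detach() / self.initial[1]    # remaining fraction of aux loss
        mean_r = (r_main + r_aux) / 2
        # A task whose loss has declined faster (smaller remaining fraction)
        # gets a smaller weight, so neither task dominates training.
        w_main = (r_main / mean_r) ** self.alpha
        w_aux = (r_aux / mean_r) ** self.alpha
        return w_main * loss_main + w_aux * loss_aux


# Example usage with placeholder loss values (a real training step would
# compute these from the network's two heads):
criterion = SpeedEqualizedLoss(alpha=1.0)
loss_main = torch.tensor(2.3, requires_grad=True)  # few-shot classification loss
loss_aux = torch.tensor(1.4, requires_grad=True)   # rotation-prediction loss
total = criterion(loss_main, loss_aux)
total.backward()
```

The design intent sketched here matches the abstract's description: constraining the relative decline rates of the two losses keeps one task from being learned much faster than the other, so training is not alternately dominated by either task.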
Keywords: Convolution, Data modeling, Neural networks, Networks, Network architectures, Performance modeling, Image processing