Grain is one of the basic human necessities, and its quality and safety directly affect human dietary health. Various issues arise during grain storage, primarily mold and pest infestation. With the development of artificial intelligence, more and more technologies are being applied to grain detection and classification, and transformer-based models are becoming popular in this field. Although transformer models exhibit excellent performance, they are often large and cumbersome, which limits practical applications. We propose a framework named KD-ASF, based on intermediate-layer knowledge distillation and one-shot neural architecture search, to optimize the hyperparameters of the vision transformer (ViT) for detecting and classifying molded wheat kernels (MDK), insect-damaged wheat kernels (IDK), and undamaged wheat kernels (UDK). In KD-ASF, we use the ViT model as the teacher network. We then design a search space containing the adjustable hyperparameters of the transformer building blocks. A super-network that stacks the maximum number of transformer building blocks is trained under the guidance of the teacher network. Subsequently, the trained super-network undergoes an evolutionary search, and the resulting networks are used to classify the different wheat kernels. We conducted experiments using a five-fold cross-validation approach and obtained an …
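The abstract does not give implementation details, so the following is only a minimal sketch of the ideas it names: a hypothetical search space over transformer-block hyperparameters, an intermediate-layer distillation loss against a teacher feature, and a toy evolutionary search over sampled sub-network configurations. All dimensions, class counts, helper names (`SEARCH_SPACE`, `Student`, `evolve`), and the fitness proxy are illustrative assumptions, not the authors' KD-ASF code.

```python
# Illustrative sketch only: hyperparameter search space, intermediate-layer
# distillation loss, and a toy evolutionary search (assumed details).
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical search space of adjustable transformer-block hyperparameters.
SEARCH_SPACE = {
    "depth":     [4, 6, 8, 10, 12],
    "embed_dim": [192, 288, 384],   # chosen to stay divisible by num_heads
    "num_heads": [3, 4, 6],
    "mlp_ratio": [2.0, 3.0, 4.0],
}

def sample_candidate():
    """Sample one sub-network configuration from the search space."""
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

class Block(nn.Module):
    """A plain pre-norm transformer building block."""
    def __init__(self, dim, heads, mlp_ratio):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        hidden = int(dim * mlp_ratio)
        self.mlp = nn.Sequential(nn.Linear(dim, hidden), nn.GELU(),
                                 nn.Linear(hidden, dim))

    def forward(self, x):
        h = self.norm1(x)
        a, _ = self.attn(h, h, h)
        x = x + a
        return x + self.mlp(self.norm2(x))

class Student(nn.Module):
    """Sub-network assembled from one sampled configuration."""
    def __init__(self, cfg, num_classes=3):  # MDK / IDK / UDK
        super().__init__()
        d = cfg["embed_dim"]
        self.blocks = nn.ModuleList(
            [Block(d, cfg["num_heads"], cfg["mlp_ratio"]) for _ in range(cfg["depth"])])
        self.head = nn.Linear(d, num_classes)

    def forward(self, x):
        feats = []
        for blk in self.blocks:
            x = blk(x)
            feats.append(x)          # keep intermediate features for distillation
        return self.head(x.mean(dim=1)), feats

def distillation_loss(student_feat, teacher_feat, proj):
    """Intermediate-layer KD: match a projected student feature to the teacher's."""
    return F.mse_loss(proj(student_feat), teacher_feat)

def evolve(generations=5, pop=8):
    """Toy evolutionary search: score candidates with a dummy proxy
    (negative parameter count) and keep the best; a real run would
    evaluate weight-shared sub-networks on validation accuracy."""
    best, best_score = None, float("-inf")
    for _ in range(generations):
        for cfg in (sample_candidate() for _ in range(pop)):
            score = -sum(p.numel() for p in Student(cfg).parameters())
            if score > best_score:
                best, best_score = cfg, score
    return best

if __name__ == "__main__":
    cfg = evolve()
    model = Student(cfg)
    tokens = torch.randn(2, 16, cfg["embed_dim"])        # dummy patch tokens
    logits, feats = model(tokens)
    teacher_feat = torch.randn_like(feats[-1])            # stand-in teacher feature
    proj = nn.Linear(cfg["embed_dim"], cfg["embed_dim"])
    loss = F.cross_entropy(logits, torch.tensor([0, 2])) \
           + distillation_loss(feats[-1], teacher_feat, proj)
    print(cfg, float(loss))
```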
Keywords: Transformers, Education and training, Visual process modeling, Head, Performance modeling, Network architectures, Deep convolutional neural networks