3 June 2024
Bionic eye movement transformer for cross-domain semantic segmentation of high-resolution remote sensing images
Xinyao Wang, Haitao Wang, Jie Li
Abstract
In recent years, transformers have attracted wide attention due to their excellent performance in fields such as natural language processing and machine vision. However, the transformer's self-attention mechanism struggles to extract long-distance contextual information, making it difficult to apply to semantic segmentation of high-resolution remote sensing images. We therefore propose a bionic eye movement transformer with visual perception capabilities for cross-domain semantic segmentation of high-resolution remote sensing images. Specifically, we design three different bionic eye movement attention modules to enhance the transformer's ability to extract long-distance contextual features. Experiments on two international standard remote sensing image datasets show that the proposed bionic eye movement transformer has excellent cross-domain semantic segmentation capability.
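The abstract does not describe the three attention modules in detail. As a rough, hypothetical sketch of the general idea behind eye-movement-inspired attention, the toy below lets each query attend to only a few top-scoring "fixation" keys, loosely mimicking saccadic sampling of a scene; the function name, `num_fixations` parameter, and all shapes are assumptions for illustration, not the authors' method.

```python
import numpy as np

def eye_movement_attention(q, k, v, num_fixations=4):
    """Toy 'saccadic' sparse attention (hypothetical illustration):
    each query attends only to its num_fixations highest-scoring keys,
    an analogue of an eye fixating on a few salient points rather than
    scanning every location."""
    scores = q @ k.T / np.sqrt(q.shape[-1])            # (Nq, Nk) scaled dot-product
    # Indices of the top-num_fixations keys per query ("fixation points")
    idx = np.argpartition(scores, -num_fixations, axis=-1)[:, -num_fixations:]
    # Mask out all non-fixation positions with -inf before the softmax
    mask = np.full_like(scores, -np.inf)
    np.put_along_axis(mask, idx, 0.0, axis=-1)
    masked = scores + mask
    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over fixations only
    return weights @ v                                 # (Nq, d) attended values

rng = np.random.default_rng(0)
q = rng.normal(size=(6, 16))    # 6 queries, dim 16
k = rng.normal(size=(32, 16))   # 32 keys
v = rng.normal(size=(32, 16))
out = eye_movement_attention(q, k, v)
print(out.shape)  # (6, 16)
```

In a real segmentation model, such a module would be one head inside a transformer block; how fixations are chosen (and how the three variants differ) is specified in the paper itself.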
(2024) Published by SPIE. Downloading of the abstract is permitted for personal use only.
Xinyao Wang, Haitao Wang, and Jie Li "Bionic eye movement transformer for cross-domain semantic segmentation of high-resolution remote sensing images", Proc. SPIE 13170, International Conference on Remote Sensing, Surveying, and Mapping (RSSM 2024), 131700T (3 June 2024); https://doi.org/10.1117/12.3032269
KEYWORDS: Image segmentation, Transformers, Semantics, Remote sensing, Eye models, Biomimetics, Visualization