Extracting roads from complex remote sensing images is a crucial task for applications such as autonomous driving, path planning, and road navigation. However, conventional convolutional neural network-based road extraction methods mostly rely on square or dilated convolutions in the local spatial domain. For multi-directional, continuous roads, these approaches can yield poor road connectivity and non-smooth boundaries. In addition, road areas occluded by shadows, buildings, and vegetation cannot be accurately predicted, which further degrades the connectivity of road segmentation and the smoothness of boundaries. To address these issues, this work proposes a multi-directional spatial connectivity network (MDSC-Net) built on multi-directional strip convolutions. Specifically, we first design a multi-directional spatial pyramid module that uses multi-scale, multi-directional feature fusion to capture connectivity relationships between neighboring pixels, effectively distinguishing narrow roads and roads of different scales and improving the topological connectivity of the roads. Second, we construct an edge residual connection module that continuously learns road boundaries and fine details from shallow feature maps and integrates them into deep feature maps, which is crucial for the smoothness of road boundaries. Finally, we devise a high-low threshold connectivity algorithm to recover road pixels obscured by shadows, buildings, and vegetation, further refining textures and road details. Extensive experiments on two public benchmarks, the DeepGlobe and Ottawa datasets, demonstrate that MDSC-Net outperforms state-of-the-art methods in road connectivity and boundary smoothness. The source code will be made publicly available at https://github/LYY199873/MDSC-Net.
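The abstract does not specify the high-low threshold connectivity algorithm; a minimal sketch, assuming it behaves like Canny-style hysteresis thresholding on the network's road-probability map (the function name, thresholds, and toy data below are illustrative, not from the paper):

```python
import numpy as np
from scipy import ndimage


def hysteresis_road_mask(prob, low=0.3, high=0.6):
    """High-low threshold connectivity (hysteresis-style sketch).

    Pixels scoring above `high` are definite road; pixels above `low`
    are kept only if their connected component also contains a definite
    road pixel. This can recover occluded segments that score between
    the two thresholds but connect to confidently detected road.
    """
    strong = prob >= high
    weak = prob >= low  # superset of `strong`
    # label 8-connected components of the weak mask
    labels, n = ndimage.label(weak, structure=np.ones((3, 3), dtype=int))
    # keep only components that touch at least one strong pixel
    keep = np.zeros(n + 1, dtype=bool)
    keep[np.unique(labels[strong])] = True
    keep[0] = False  # label 0 is background
    return keep[labels]


# toy probability map: a road weakened in the middle (e.g. by shadow),
# plus an isolated weak blob that should be rejected
prob = np.array([
    [0.9, 0.5, 0.4, 0.5, 0.9],   # connected to strong ends: recovered
    [0.0, 0.0, 0.0, 0.0, 0.0],
    [0.0, 0.5, 0.5, 0.0, 0.0],   # weak-only blob: discarded
])
mask = hysteresis_road_mask(prob, low=0.3, high=0.6)
```

With a single high threshold of 0.6 the shadowed mid-road pixels (0.4-0.5) would be lost; the connectivity pass keeps them while still rejecting the disconnected weak blob.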
Keywords: Roads, Convolution, Feature extraction, Remote sensing, Buildings, Ablation, Vegetation