The wide application of agricultural greenhouses globally has brought economic benefits; however, it has also led to many environmental problems. Timely and accurate information on greenhouse area and distribution is valuable for authorities seeking to optimize regional agricultural management and mitigate environmental pollution. Automatic extraction of greenhouses from high spatial resolution remote sensing (RS) imagery based on deep learning can reduce labor costs and improve operational efficiency, giving it strong application prospects. In this paper, we propose a multi-channel fused fully convolutional network (FCN), optimized by optimal-scale object-oriented segmentation results, for agricultural greenhouse extraction from high spatial resolution RS imagery. First, to make full use of remote sensing feature images of the target objects without increasing the complexity of the deep learning network, we constructed a decision-level fusion FCN that takes multiple remote sensing images as simultaneous inputs for preliminary greenhouse extraction. Second, to address the tendency of the classical FCN to lose ground-object details, we optimized the preliminary FCN extraction results with the object-oriented segmentation results. Finally, the optimized greenhouse extraction results were processed by mathematical morphology to obtain the final extraction results. The experimental results demonstrate that: (1) the multi-channel fused FCN model can exploit the distinct spectral characteristics of different ground objects; and (2) optimizing the initial FCN extraction results with the optimal-scale object-oriented segmentation results preserves the edge details of the greenhouses. The proposed method extracts greenhouses effectively, achieving a precision of 92.68% and an F value of 0.94.
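A minimal sketch of the post-processing stages described above is given below, assuming the fused FCN outputs per-branch greenhouse probability maps and the object-oriented segmentation yields a segment label map; the function names, the 0.5 probability threshold, and the 3x3 structuring element are illustrative choices, not the paper's published parameters.

```python
# Illustrative sketch (not the paper's implementation): decision-level fusion of
# per-branch FCN probability maps, segment-level refinement with object-oriented
# segmentation results, and a morphological clean-up of the final mask.
import numpy as np
from scipy import ndimage

PROB_THRESHOLD = 0.5  # assumed cut-off on the fused greenhouse probability

def fuse_branch_probabilities(prob_maps, weights=None):
    """Decision-level fusion: weighted average of per-branch probability maps."""
    prob_maps = np.stack(prob_maps, axis=0)
    if weights is None:
        weights = np.full(prob_maps.shape[0], 1.0 / prob_maps.shape[0])
    return np.tensordot(np.asarray(weights, float), prob_maps, axes=1)

def refine_with_segments(prob_map, segment_ids):
    """Keep a whole segment as greenhouse if its mean probability passes the threshold."""
    refined = np.zeros(prob_map.shape, dtype=bool)
    for seg in np.unique(segment_ids):
        pixels = segment_ids == seg
        if prob_map[pixels].mean() >= PROB_THRESHOLD:
            refined[pixels] = True
    return refined

def morphological_cleanup(mask, size=3):
    """Opening then closing to remove speckle and fill small gaps."""
    structure = np.ones((size, size), dtype=bool)
    return ndimage.binary_closing(ndimage.binary_opening(mask, structure), structure)

if __name__ == "__main__":
    branch_a = np.random.rand(6, 6)                       # toy probability map, branch A
    branch_b = np.random.rand(6, 6)                       # toy probability map, branch B
    segments = np.repeat(np.arange(4), 9).reshape(6, 6)   # four toy segments
    fused = fuse_branch_probabilities([branch_a, branch_b])
    mask = morphological_cleanup(refine_with_segments(fused, segments))
    print(mask.astype(int))
```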
Using a single machine learning classifier for high-resolution remote sensing (RS) image classification makes it difficult to further improve the accuracy of the classification results. To fully utilize the advantages of different classifiers for different types of ground objects, we propose a multi-classifier fusion method for the classification of high-resolution RS images based on the Dempster–Shafer (DS) evidence theory. Six machine learning classifiers were selected for fusion: support vector machine, k-nearest neighbor, random forest, artificial neural network, classification and regression tree, and the C5.0 decision tree. We calculated a classifier difference index from the accuracy of, and the differences between, the classification results of the base classifiers; base classifiers with large differences were selected for integration under DS evidence theory. We also improved the classical DS evidence theory. First, the classification probability values assigned by each base classifier to different samples were weighted according to that classifier's classification validity for different ground objects. Then, different fusion methods were selected according to the classification conflict coefficients between base classifiers. The results reveal that the overall accuracy and kappa coefficient of the fusion classifier are significantly better than those of the base classifiers. The producer's accuracy and user's accuracy of the fusion results based on the improved DS evidence theory were also higher than those based on the classical DS evidence theory.
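As a concrete illustration of the combination step, a minimal sketch of Dempster's rule for two base classifiers is given below, assuming each classifier's class-probability vector for a sample is used directly as a basic probability assignment over singleton class hypotheses; the per-class validity weighting and the conflict-dependent choice of fusion method described above are not reproduced here.

```python
# Minimal sketch of Dempster's rule of combination for two base classifiers whose
# focal elements are the individual classes (singleton hypotheses only).
import numpy as np

def dempster_combine(m1, m2):
    """Combine two singleton mass vectors (class probabilities over the same classes)."""
    m1, m2 = np.asarray(m1, float), np.asarray(m2, float)
    joint = m1 * m2                   # agreement of the two classifiers on each class
    conflict = 1.0 - joint.sum()      # K: mass assigned to conflicting class pairs
    if np.isclose(conflict, 1.0):
        raise ValueError("Total conflict: Dempster's rule is undefined")
    return joint / (1.0 - conflict)   # renormalize by the non-conflicting mass

if __name__ == "__main__":
    svm_probs = [0.7, 0.2, 0.1]       # e.g. SVM class probabilities for one sample
    rf_probs = [0.6, 0.3, 0.1]        # e.g. random forest class probabilities
    print(dempster_combine(svm_probs, rf_probs))
```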
The quality of multiresolution segmentation directly influences the accuracy of high-resolution remote sensing image classification using object-oriented analysis technology. However, a fully satisfactory segmentation scale optimization method has not yet been developed. Exploiting the fact that the optimal segmentation scale of high-resolution remote sensing images is closely related to the complexity of the objects in the image, we propose an approach for calculating the optimal segmentation scale based on the scene complexity of an image. First, we calculate the scene complexity of high-resolution remote sensing images using Watson's vision model. Then, we analyze the relationship between image scene complexity and the optimal segmentation scale based on the model calculation. The optimal segmentation scale is found to be related to the scene complexity of high-resolution remote sensing images by an exponential function, allowing it to be calculated directly from the fitted formula and the image scene complexity. Finally, we propose a multilevel segmentation strategy to improve how well objects are targeted at the optimal segmentation scale. The optimal segmentation scale calculation method proposed here is simple to perform and has a broad range of potential applications.
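A rough sketch of how the reported exponential relation could be fitted and applied is shown below; the model form scale = a * exp(b * complexity) follows the abstract's statement, but the sample points and fitted coefficients are made up for illustration and are not the paper's calibration data.

```python
# Illustrative fit of an exponential relation between scene complexity and
# optimal segmentation scale; all numbers below are made-up stand-ins.
import numpy as np
from scipy.optimize import curve_fit

def scale_model(complexity, a, b):
    """Assumed model form: optimal scale = a * exp(b * complexity)."""
    return a * np.exp(b * complexity)

# Hypothetical (complexity, optimal scale) pairs standing in for calibration data
complexities = np.array([0.2, 0.4, 0.6, 0.8])
optimal_scales = np.array([35.0, 55.0, 90.0, 140.0])

(a_fit, b_fit), _ = curve_fit(scale_model, complexities, optimal_scales, p0=(20.0, 2.0))

# Once fitted, the optimal scale for a new image follows directly from its complexity
new_complexity = 0.5
print(f"Predicted optimal segmentation scale: {scale_model(new_complexity, a_fit, b_fit):.1f}")
```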
Automatic identification of landslides based on remote sensing images is important for investigating disasters and producing hazard maps. We propose a method to detect shallow landslides automatically using WorldView-2 images. Features such as high soil brightness and low vegetation coverage can help identify shallow landslides on remote sensing images; therefore, soil brightness and a vegetation index were chosen as indexes for landslide detection. The back scarp of a landslide can cast dark shadow areas on the landslide mass, reducing the accuracy of landslide extraction, so a shadow index was chosen as a third index. The first principal component (PC1) contained more than 90% of the image information and was therefore selected as the fourth index. The four selected indexes were used to synthesize a new image in which information on shallow landslides was enhanced while other background information was suppressed. Then, PC1 was extracted from the new synthetic image, and an automatic threshold segmentation algorithm was used to segment the image into landslide-like candidate areas. Based on landslide features such as slope, shape, and area, non-landslide areas were eliminated. Finally, four experimental sites were used to verify the feasibility of the developed method.
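A rough sketch of the segmentation stage is given below: it takes PC1 of a stacked index image, applies an automatic threshold (Otsu's method is used here as a stand-in for the paper's unspecified algorithm), and removes connected regions smaller than an assumed minimum area; the slope and shape filters are omitted.

```python
# Illustrative sketch of the PC1 extraction, automatic thresholding, and area
# filtering steps; Otsu's method and the minimum-area value are assumptions.
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

MIN_AREA_PIXELS = 50  # assumed minimum landslide size; tune to image resolution

def first_principal_component(stack):
    """stack: (bands, rows, cols) synthetic index image -> PC1 as (rows, cols)."""
    bands, rows, cols = stack.shape
    flat = stack.reshape(bands, -1)
    flat = flat - flat.mean(axis=1, keepdims=True)
    eigvals, eigvecs = np.linalg.eigh(np.cov(flat))
    pc1 = eigvecs[:, -1] @ flat  # project onto the largest-eigenvalue axis
    return pc1.reshape(rows, cols)

def candidate_landslides(stack):
    """Return a boolean mask of landslide-like regions above the assumed minimum area."""
    pc1 = first_principal_component(stack)
    # Sign of PC1 may need flipping depending on the eigenvector orientation
    mask = pc1 > threshold_otsu(pc1)          # automatic threshold segmentation
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    big_enough = np.flatnonzero(sizes >= MIN_AREA_PIXELS) + 1
    return np.isin(labels, big_enough)

if __name__ == "__main__":
    toy_stack = np.random.rand(4, 64, 64)     # four toy index bands
    print(candidate_landslides(toy_stack).sum(), "candidate pixels")
```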