Pansharpening synthesizes high-resolution hyperspectral (HRHS) images by combining low-spatial-resolution hyperspectral (LRHS) images with panchromatic (PAN) images. Existing methods perform inadequately under extreme pansharpening, often producing excessively blurred HRHS images. The main reasons are that the combination of inputs and the training loss functions are overly simplistic, and that the spatial details of the upsampled LRHS image are severely distorted, which weakens the neural network's ability to exploit the spatial information of PAN images. To address these issues, we propose a spectral-spatial dual injection network (SSDINet) combined with a panchromatic loss for extreme pansharpening. SSDINet alleviates the blurring of HRHS images during extreme pansharpening by adding a pseudo-hyperspectral (PHS) image as an extra input and combining it with the upsampled LRHS image to form an additional spectral injection branch, distinct from the spatial injection branch. In addition, an extra panchromatic loss is used during training to alleviate the incomplete utilization of PAN images; the panchromatic mapping is realized by a neural network. Experimental results demonstrate the superior performance of our approach compared with representative methods.
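A panchromatic loss of the kind described could be sketched as follows. This is a minimal, hypothetical illustration, not the paper's exact design: the abstract states only that the panchromatic mapping is realized by a neural network, so the 1x1 convolution used here as that mapping is an assumption.

```python
import torch
import torch.nn as nn

class PanchromaticLoss(nn.Module):
    """Hypothetical sketch: map the fused HRHS prediction to a
    panchromatic image with a small learned network, then penalize
    the L1 distance to the observed PAN image."""
    def __init__(self, num_bands):
        super().__init__()
        # learned spectral-to-PAN mapping (an assumption; the abstract
        # says only that the mapping is realized by a neural network)
        self.to_pan = nn.Conv2d(num_bands, 1, kernel_size=1)

    def forward(self, hrhs_pred, pan):
        # hrhs_pred: (B, num_bands, H, W), pan: (B, 1, H, W)
        return nn.functional.l1_loss(self.to_pan(hrhs_pred), pan)
```

Such an auxiliary term supervises the network at full PAN resolution, which is one way to push it to use the PAN spatial detail more completely.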
Recent studies have shown that attention mechanisms can further improve the detection accuracy of the YOLOX algorithm on remote sensing images, and that coordinate attention largely solves the long-range dependency problem of earlier attention modules, but its attention weights are redundant along the channel dimension. In addition, training YOLOX for remote sensing object detection suffers from example imbalance. To solve these problems, an improved YOLOX algorithm is proposed that combines improved coordinate attention with focal loss. The former adopts further pooling and convolution operations so that the attention weights no longer contain redundant channel information while retaining the ability to capture long-range dependencies, and introduces 1D convolution layers to obtain the final attention weights in three different directions, making the model attend to the informative parts of remote sensing image features. The latter improves the quality of the gradients, which makes training more effective and improves detection accuracy. The method is trained and tested on an open remote sensing image dataset, and the detection results show its effectiveness and superiority.
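Focal loss addresses example imbalance by down-weighting easy, well-classified examples so that gradients concentrate on hard ones. A standard binary formulation (Lin et al.), shown here as a self-contained sketch rather than this paper's exact variant:

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Binary focal loss: FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t).
    gamma=0 reduces it to (alpha-weighted) cross-entropy."""
    p = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = p * targets + (1 - p) * (1 - targets)          # prob of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()
```

The `(1 - p_t) ** gamma` modulating factor is what suppresses the contribution of abundant easy negatives, a common situation in remote sensing scenes with few objects against large backgrounds.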
Image stitching is the process of combining multiple images with narrow fields of view into a high-resolution panoramic image. Conventional global warp-based stitching algorithms have limited alignment accuracy and cause shape distortion, while spatially-varying warp-based ones have high computational complexity. To address these problems, we propose a novel regional warp that adopts different transformation models for different areas of the image. An image can be divided into overlapping and non-overlapping regions based on the distribution of matched features. For the overlapping region, two kinds of projective transformation are combined to warp each pixel. The non-overlapping region is further partitioned into two sub-regions, where a projective transformation and an affine transformation are applied separately. Experimental results show that the proposed warp not only provides good alignment accuracy but also avoids severe shape distortion.
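To make the two transformation models concrete, the following sketch applies a 3x3 projective transform (homography) to points; an affine transform is the special case whose bottom row is [0, 0, 1], so it preserves parallelism and avoids the perspective stretching a full homography can introduce far from the overlap region. This is a generic illustration, not the proposed regional warp itself.

```python
import numpy as np

def warp_points(pts, H):
    """Apply a 3x3 projective transform H to an (N, 2) array of points."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coords
    out = pts_h @ H.T
    return out[:, :2] / out[:, 2:3]                   # divide by w

# An affine transform reuses the same machinery: its last row is [0, 0, 1],
# so w stays 1 and no perspective division occurs.
affine_example = np.array([[1.0, 0.0, 3.0],
                           [0.0, 1.0, -1.0],
                           [0.0, 0.0, 1.0]])  # pure translation by (3, -1)
```

Using the affine model in the non-overlapping region is what keeps distant image content from being distorted by an extrapolated projective warp.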
This work deals with the problem of misalignment in image stitching caused by a small overlap area. To reduce mismatches between matched feature pairs in two connected images, random sample consensus (RANSAC) [1] is usually adopted, which works under the assumption that the sampling of matched feature points with the largest number of inliers should be used to compute the geometric matrix. However, this assumption does not hold when the overlap area between the connected images is small, as compressing or turning over the image may yield better spatial consistency of matched feature points. Therefore, we propose a turnover-and-shape-filter based feature matching method for image stitching. In this method, a turnover and shape filter first removes the samplings resulting from turnover and compression, and is then connected to RANSAC to yield the final inliers. Experimental results on real-world datasets validate the effectiveness of our method.
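One plausible way to implement a filter in the spirit described is to inspect the affine part of each candidate homography: a negative determinant means the mapping flips orientation (a turnover), and singular values far from 1 mean severe compression or stretching. The thresholds and the exact test below are assumptions for illustration, not the paper's stated criteria.

```python
import numpy as np

def passes_turnover_and_shape_filter(H, min_scale=0.25, max_scale=4.0):
    """Hypothetical sketch: reject candidate homographies whose affine
    part flips orientation (turnover) or scales the image beyond
    plausible bounds (compression/stretching)."""
    A = H[:2, :2] / H[2, 2]                   # affine part, normalized
    if np.linalg.det(A) <= 0:                 # negative det = mirror flip
        return False
    s = np.linalg.svd(A, compute_uv=False)    # singular values = local scales
    return min_scale <= s.min() and s.max() <= max_scale
```

Candidates rejected by such a check would never reach the inlier-counting stage, so RANSAC cannot prefer a degenerate turned-over or compressed solution even when it happens to cover more matches.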
In hyperspectral image classification, the small number of labeled samples relative to the high dimensionality of the data is one of the major challenges. Semi-supervised learning has shown potential to relieve this dilemma: compared with its supervised counterpart, it exploits the intrinsic structure of both labeled and unlabeled samples. In this work, we propose a graph-fusion based semi-supervised learning method for hyperspectral image classification. More specifically, two graphs are constructed from spectral-spatial Gabor features and original spectral signatures, respectively, and are then integrated using an affine combination. Experimental results on an AVIRIS hyperspectral dataset verify the excellent classification performance of our method.
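The affine combination of the two graphs can be sketched directly. The fusion step follows the abstract; the Gaussian-kernel graph construction is an assumption, since the abstract does not specify how similarities are computed.

```python
import numpy as np

def gaussian_graph(X, sigma=1.0):
    """Similarity graph from feature vectors X (N, d) using a Gaussian
    kernel (an assumed construction; the abstract leaves this open)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def fuse_graphs(W_gabor, W_spectral, alpha=0.5):
    """Affine combination of the two graphs, as described:
    W = alpha * W_gabor + (1 - alpha) * W_spectral."""
    return alpha * W_gabor + (1 - alpha) * W_spectral
```

The fused affinity matrix `W` can then drive any standard graph-based semi-supervised classifier (e.g., label propagation), letting the spatial Gabor view and the spectral view regularize each other.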
The application of convolutional neural networks (CNNs) to hyperspectral image (HSI) classification has attracted widespread attention, especially spectral 1D CNNs and spatial 2D CNNs. Due to heavy computation and memory requirements, 3D CNNs, which can process spectral and spatial features jointly, have not yet been widely adopted. Recently, researchers have proposed a hybrid CNN for HSI classification that outperforms a 3D CNN alone. Nevertheless, such a hybrid network has excessive parameters and limited capacity for feature utilization, so smaller training sets lead to lower accuracy. This paper proposes an improved hybrid CNN to enhance classification performance, involving global average pooling, skip connections, and appropriate adjustments of the convolution kernels and overall structure. Experimental results on benchmark HSI datasets suggest the effectiveness of our CNN for HSI classification with limited training sets.
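The ingredients named above (a hybrid 3D+2D backbone, a skip connection, and global average pooling in place of a large fully connected layer) can be illustrated as follows. Layer counts and channel sizes are assumptions for the sketch, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class HybridHSINet(nn.Module):
    """Illustrative hybrid CNN for HSI patch classification: 3D convs
    extract joint spectral-spatial features, a 2D conv with a skip
    connection refines them, and global average pooling replaces a
    parameter-heavy fully connected layer."""
    def __init__(self, bands=30, n_classes=16):
        super().__init__()
        self.conv3d = nn.Sequential(
            nn.Conv3d(1, 8, (7, 3, 3), padding=(3, 1, 1)), nn.ReLU(),
            nn.Conv3d(8, 16, (5, 3, 3), padding=(2, 1, 1)), nn.ReLU(),
        )
        self.conv2d = nn.Conv2d(16 * bands, 64, 3, padding=1)
        self.skip = nn.Conv2d(16 * bands, 64, 1)   # skip connection
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                    # x: (B, 1, bands, H, W)
        f = self.conv3d(x)
        b, c, d, h, w = f.shape
        f = f.reshape(b, c * d, h, w)        # fold spectral dim into channels
        f = torch.relu(self.conv2d(f) + self.skip(f))
        f = f.mean(dim=(2, 3))               # global average pooling
        return self.head(f)
```

Replacing a flattening + dense classifier with global average pooling is the main parameter saving here, which is also why such designs degrade more gracefully on small training sets.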