Pan-sharpening of optical remote sensing multispectral imagery aims to inject spatial information (high frequencies) from a high-resolution image into a low-resolution image (low frequencies) while preserving the spectral properties of the low-resolution image. From a signal processing point of view, a general fusion filtering framework (GFF) can be formulated, which is well suited to the fusion of multiresolution and multisensor data such as optical-optical and optical-radar imagery. To reduce computation time, a simple and fast variant of GFF, the high-pass filtering method (HPFM), is proposed, which performs filtering in the signal domain and thus avoids time-consuming FFT computations. A new joint quality measure, combining a spectral and a spatial measure through a proper normalization of the ranges of the variables, is proposed for quality assessment. Quality and speed were evaluated on WorldView-2 satellite remote sensing data for six pan-sharpening methods: component substitution (CS), Gram-Schmidt (GS) sharpening, Ehlers fusion, Amélioration de la Résolution Spatiale par Injection de Structures, GFF, and HPFM. Experiments showed that HPFM outperforms all the other fusion methods used in this study, including its parent method GFF. Moreover, it is more than four times faster than GFF and competitive with the CS and GS methods in speed.
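The high-pass filtering idea behind HPFM can be illustrated with a minimal sketch: extract the high frequencies of the panchromatic image with a simple low-pass filter and inject them into the upsampled multispectral bands. The window size, interpolation order, and injection gain below are illustrative assumptions, not the parameters of the HPFM or GFF methods.

```python
import numpy as np
from scipy.ndimage import uniform_filter, zoom

def hpf_pansharpen(ms, pan, ratio=4, kernel=5):
    """Generic high-pass-filtering pan-sharpening sketch (not the paper's HPFM/GFF).

    ms  : (bands, rows, cols) low-resolution multispectral image
    pan : (rows * ratio, cols * ratio) panchromatic image
    """
    pan = pan.astype(float)
    pan_high = pan - uniform_filter(pan, size=kernel)   # high-pass = original - low-pass
    sharpened = []
    for band in ms.astype(float):
        up = zoom(band, ratio, order=1)                  # bilinear upsampling to pan grid
        gain = band.std() / pan.std()                    # simple band-wise injection gain
        sharpened.append(up + gain * pan_high)
    return np.stack(sharpened)
```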
Pan-sharpening of remote sensing multispectral imagery directly influences the accuracy of interpretation, classification, and other data mining methods. Different tasks of multispectral image analysis and processing place specific requirements on the input pan-sharpened data, such as spectral and spatial consistency, as well as on the complexity of the pan-sharpening method. The quality of a pan-sharpened image is assessed using quantitative measures. Generally, the quantitative measures for pan-sharpening assessment are taken from other areas of image processing (e.g., image similarity indexes), but the basis of their applicability (i.e., whether a measure provides a correct and undistorted assessment of pan-sharpened imagery) is not checked or proven. For example, whether a given quantitative measure should be used for pan-sharpening assessment at all is still an open research question. There is also a chance that some measures provide distorted assessment results, so the suitability of these quantitative measures and their application to pan-sharpened imagery assessment is in question. The aim of the authors is to perform a statistical analysis of widely employed measures for remote sensing imagery pan-sharpening assessment and to show which of the measures are the most suitable for use. To find and prove which measures are the most suitable, sets of multispectral images are processed by the general fusion framework method (GFF), a type of general image fusion method, with varying parameters. Varying the values of the method's parameter set allows imagery with predefined quality (i.e., spatial and spectral consistency) to be produced for further statistical analysis of the assessment measures. Using imagery from several principal multispectral sensors (Landsat 7 ETM+, IKONOS, and WorldView-2) makes it possible to assess and compare the available quality assessment measures and to show which of them are most suitable for each satellite.
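As an illustration of the kind of measures being analyzed, the sketch below computes one simple spectral measure (per-band correlation of the fused image with the upsampled original multispectral bands) and one simple spatial measure (correlation of Laplacian-filtered details with the panchromatic image). The specific filters and the averaging over bands are assumptions made for illustration, not the exact measure set evaluated in the paper.

```python
import numpy as np
from scipy.ndimage import laplace, zoom

def spectral_spatial_quality(ms, pan, fused, ratio=4):
    """Per-band spectral and spatial correlation measures, averaged over bands.
    ms: (bands, r, c) original MS; pan and each fused band: (r*ratio, c*ratio)."""
    def corr(a, b):
        a = a.ravel() - a.mean()
        b = b.ravel() - b.mean()
        return float(a @ b / np.sqrt((a @ a) * (b @ b)))

    pan_detail = laplace(pan.astype(float))
    spectral, spatial = [], []
    for k in range(ms.shape[0]):
        up = zoom(ms[k].astype(float), ratio, order=1)   # spectral reference at pan resolution
        spectral.append(corr(fused[k], up))
        spatial.append(corr(laplace(fused[k].astype(float)), pan_detail))
    return float(np.mean(spectral)), float(np.mean(spatial))
```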
Information extraction from multi-sensor remote sensing imagery is an important and challenging task for many applications such as urban area mapping and change detection. A special (orthogonal) acquisition geometry is of great importance for optical and radar data fusion. This acquisition geometry minimizes displacement effects caused by inaccuracies of the Digital Elevation Model (DEM) used for data ortho-rectification and by unknown 3D structures in a scene. Final spatial alignment of the data is performed by a recently proposed co-registration method based on a mutual information measure. For the combination of features originating from different sources, which are quite often non-commensurable, we propose an information fusion framework called INFOFUSE consisting of three main processing steps: feature fission (feature extraction aiming at a complete description of the scene), unsupervised clustering (complexity reduction and feature representation in a common dictionary), and supervised classification realized by Bayesian or neural networks. An example of urban area classification is presented for an orthogonal acquisition of spaceborne very high resolution WorldView-2 and TerraSAR-X Spotlight imagery over the city of Munich, southern Germany. Experimental results confirm our approach and show great potential for other applications such as change detection as well.
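The three processing steps can be summarized in a schematic sketch. The use of scikit-learn, the dictionary size, and the one-hot coding of cluster indices are assumptions made for illustration; this is not the INFOFUSE implementation itself.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

def infofuse_like_pipeline(opt_features, sar_features, train_mask, train_labels,
                           n_words=32):
    """Schematic INFOFUSE-style pipeline: per-source feature vectors (step 1,
    'feature fission', assumed already computed) are quantized by k-means into
    a common dictionary (step 2), and the stacked one-hot codes are classified
    by a neural network (step 3)."""
    def codebook(features):                        # (rows, cols, dims) -> one-hot codes
        flat = features.reshape(-1, features.shape[-1])
        idx = KMeans(n_clusters=n_words, n_init=4).fit_predict(flat)
        return np.eye(n_words)[idx]

    codes = np.hstack([codebook(opt_features), codebook(sar_features)])
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300)
    clf.fit(codes[train_mask.ravel()], train_labels)
    return clf.predict(codes).reshape(opt_features.shape[:2])
```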
Today's flight safety, especially during the aircraft landing approach to an airport, is often affected by adverse weather conditions. One of the promising technologies for increasing the pilot's situation awareness is the Enhanced and Synthetic Vision System (ESVS), a combination of sensor vision and synthetic vision systems. In this paper we present one aspect of the sensor vision system: an algorithm that uses only a sequence of infrared images to detect possible runway structures and obstacles on the runway during the aircraft landing approach. No additional information from a database, INS, or GPS is used at this processing stage. The algorithm generates several runway and obstacle hypotheses, and the final decision in the ESVS is taken in a subsequent processing stage: the fusion of radar and IR data hypotheses with synthetic vision data. The functionality of the algorithm was tested extensively during several flight campaigns with landing approaches to different German and European airports in 2003 and 2004.
At the German Aerospace Center (DLR), an automatic and operational traffic processor for the TerraSAR-X ground segment is currently under development. The processor comprises the detection of moving objects on the ground, their correct assignment to the road network, and the estimation of their velocities. Since traffic flow parameters are required for describing the dynamics and efficiency of transportation, estimating the velocity of detected ground moving vehicles is an important task in traffic research. In this paper we show, for simulated TerraSAR-X data, how the along-track velocity component of a moving vehicle can be derived indirectly by processing the SAR data with varying frequency modulation (FM) rates and exploiting the specific behavior of the vehicle's signal across the FM rate space. An airborne ATI-SAR campaign with DLR's E-SAR sensor was conducted in April 2004 in order to investigate the different effects of ground moving objects on SAR data and to acquire a data basis for algorithm development and validation. Several test cars equipped with GPS sensors, as well as vehicles of opportunity on motorways with unknown velocities, were imaged with the radar under different conditions. To acquire reference data of superior quality, all vehicles were simultaneously imaged by an optical sensor on the same aircraft, allowing their velocities to be estimated from image sequences. The paper concentrates on the estimation of along-track velocities of moving vehicles from SAR data. Velocity measurements of vehicles in controlled experiments are presented, including data processing, a comparison with GPS and optical reference data, and an error analysis.
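The principle can be sketched in a few lines: the azimuth signal of a moving vehicle is a chirp whose FM rate depends on the relative along-track velocity, so compressing it with a bank of reference chirps of varying FM rate and picking the rate that yields the sharpest peak gives an estimate from which the along-track velocity follows. All numerical values below (wavelength, range, platform velocity, timing) are illustrative assumptions, and the simplified FM rate model neglects across-track motion and acceleration; this is not the TerraSAR-X traffic processor itself.

```python
import numpy as np

wavelength = 0.031      # X-band wavelength [m] (assumption)
R0 = 600e3              # slant range [m] (assumption)
v_platform = 7600.0     # effective platform velocity [m/s] (assumption)
v_along_true = 15.0     # true along-track velocity of the vehicle [m/s]
prf = 3800.0            # pulse repetition frequency [Hz] (assumption)
T = 0.5                 # synthetic aperture time [s] (assumption)

t = np.arange(-T / 2, T / 2, 1.0 / prf)

def fm_rate(v_rel):
    # Simplified azimuth FM rate model for relative along-track velocity v_rel.
    return -2.0 * v_rel**2 / (wavelength * R0)

# Azimuth chirp of the moving vehicle (noise-free for clarity).
signal = np.exp(1j * np.pi * fm_rate(v_platform - v_along_true) * t**2)

# Compress with a bank of reference chirps whose FM rates correspond to
# candidate along-track velocities; the sharpest peak marks the best match.
candidates = np.linspace(-30.0, 30.0, 121)
peaks = []
for v_x in candidates:
    ref = np.exp(1j * np.pi * fm_rate(v_platform - v_x) * t**2)
    compressed = np.fft.ifft(np.fft.fft(signal) * np.conj(np.fft.fft(ref)))
    peaks.append(np.max(np.abs(compressed)))

print("estimated along-track velocity:", candidates[int(np.argmax(peaks))], "m/s")
```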
The German radar satellite TerraSAR-X is a high-resolution, dual receive antenna SAR satellite, which will be launched in spring 2006. Since it will have the capability to measure the velocity of moving targets, the acquired interferometric data can be useful for traffic monitoring applications on a global scale. DLR has already started the development of an automatic and operational processing system that will detect cars, measure their speed, and assign them to a road. Statistical approaches are used to derive the vehicle detection algorithm, which requires knowledge of the radar signatures of vehicles, especially with respect to the geometry of the radar look direction and the vehicle orientation. Simulating radar signatures is a very difficult task due to the lack of realistic vehicle models. In this paper the radar signatures of parked cars are presented. They are estimated experimentally from airborne E-SAR X-band data collected during flight campaigns in 2003-2005. Several test cars of the same type placed at carefully selected orientation angles, together with several overflights with different heading angles, made it possible to cover the whole range of aspect angles from 0° to 180°. The large synthetic aperture length, corresponding to a beam width angle of 7°, can be divided into several looks, and processing each look separately increases the angular resolution. Such a radar signature profile of one vehicle type over the whole range of aspect angles at fine resolution can further be used for the verification of simulation studies and for performance prediction for traffic monitoring with TerraSAR-X.
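Splitting the synthetic aperture into looks corresponds to dividing the azimuth spectrum of the single-look complex data into sub-bands, i.e., a standard sub-aperture decomposition. The sketch below illustrates the operation; the axis convention (azimuth along rows) and the number of looks are assumptions.

```python
import numpy as np

def sublook_images(slc, n_looks=4):
    """Split the azimuth spectrum of a single-look complex (SLC) image into
    n_looks non-overlapping sub-bands and return one lower-resolution image
    per sub-band, i.e., per azimuth look / aspect-angle interval."""
    rows, cols = slc.shape                                    # azimuth along rows (assumption)
    spectrum = np.fft.fftshift(np.fft.fft(slc, axis=0), axes=0)
    edges = np.linspace(0, rows, n_looks + 1, dtype=int)
    looks = []
    for k in range(n_looks):
        band = np.zeros_like(spectrum)
        band[edges[k]:edges[k + 1], :] = spectrum[edges[k]:edges[k + 1], :]
        looks.append(np.fft.ifft(np.fft.ifftshift(band, axes=0), axis=0))
    return looks
```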
We propose an empirical method for correcting radiometric effects, such as atmospheric effects and anisotropic surface reflection, in optical remote sensing data. These distortions depend on the sensor viewing (scanning) angle and can therefore be significant for data from airborne sensors because of their wide field of view. The procedure is based solely on the digital image data and consists of several steps. First, the initial image region near nadir (where distortions are minimal) is clustered by an extended k-means algorithm, which automatically detects the clusters (surface types) in the image. Then, for each cluster, an average line profile is calculated. These profiles (initially defined in the middle part of an image line) are extrapolated to the whole image line by a polynomial approximation. Finally, from these polynomial functions a linear regression over all clusters is built using the radiative transfer equation, which allows radiometric correction for each viewing angle in the image relative to a reference angle, usually nadir. The procedure is iterative: the correction is first performed for a narrow part around the initial region, and then the procedure is re-initialized with this newly corrected image region and repeated until the whole image is corrected. Experiments on data acquired by the airborne multispectral scanner DAEDALUS AADS 1268 ATM show the effectiveness of the proposed method, especially for mosaicking and classification applications.
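A compact, single-pass sketch of the core steps (cluster-wise column profiles, polynomial smoothing, and a per-column gain relative to nadir) is given below. The purely multiplicative gain model, the polynomial degree, and the assumption that image columns correspond to viewing angles are simplifications made for illustration; the iterative re-initialization described above is not reproduced.

```python
import numpy as np

def column_profiles(image, labels, n_clusters):
    """Mean brightness of each cluster as a function of image column (viewing angle)."""
    rows, cols = image.shape
    profiles = np.full((n_clusters, cols), np.nan)
    for k in range(n_clusters):
        mask = labels == k
        counts = mask.sum(axis=0)
        sums = np.where(mask, image, 0.0).sum(axis=0)
        valid = counts > 0
        profiles[k, valid] = sums[valid] / counts[valid]
    return profiles

def angular_gains(profiles, poly_degree=3, ref_col=None):
    """Fit a polynomial per cluster profile, then estimate a per-column gain
    relative to the reference (nadir) column by least squares over all clusters."""
    n_clusters, cols = profiles.shape
    x = np.arange(cols)
    ref_col = cols // 2 if ref_col is None else ref_col
    smooth = np.empty_like(profiles)
    for k in range(n_clusters):
        valid = ~np.isnan(profiles[k])
        coeff = np.polyfit(x[valid], profiles[k, valid], poly_degree)
        smooth[k] = np.polyval(coeff, x)
    ref = smooth[:, ref_col]
    # slope-only least-squares fit: ref ~ gain * column value, per column
    gains = (smooth * ref[:, None]).sum(axis=0) / (smooth**2).sum(axis=0)
    return gains

def correct(image, gains):
    """Apply the per-column (per-viewing-angle) gains to the image."""
    return image * gains[None, :]
```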
'Synthetic Vision' and 'Sensor Vision' complement each other toward an ideal system for the pilot's situation awareness. To fuse these two data sets, the sensor images are first segmented by a k-means algorithm and features are then extracted by blob analysis. These image features are compared with the features of the projected airport data using fuzzy logic in order to identify the runway in the sensor image and to improve the aircraft navigation data. This step is necessary because of inaccurate input data, i.e., the position and attitude of the aircraft. After the runway has been identified, obstacles can be detected using the sensor image. The extracted information is presented to the pilot's display system and combined with the appropriate information from the MMW radar sensor in a subsequent fusion processor. A real-time image processing procedure is discussed and demonstrated with IR measurements from a FLIR system during landing approaches.
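A sketch of the segmentation and blob-analysis front end is shown below; the cluster count, minimum blob size, and the chosen shape features are assumptions, and the fuzzy-logic matching against the projected airport data is not shown.

```python
import numpy as np
from scipy import ndimage
from sklearn.cluster import KMeans

def runway_candidate_features(ir_image, n_classes=4, min_area=200):
    """Segment an IR frame with k-means on intensity, then describe each
    connected blob of every class by simple shape features that could later
    be matched (e.g., with fuzzy rules) against projected airport data."""
    h, w = ir_image.shape
    labels = KMeans(n_clusters=n_classes, n_init=10).fit_predict(
        ir_image.reshape(-1, 1)).reshape(h, w)
    features = []
    for k in range(n_classes):
        blobs, n_blobs = ndimage.label(labels == k)
        for b in range(1, n_blobs + 1):
            ys, xs = np.nonzero(blobs == b)
            area = ys.size
            if area < min_area:
                continue
            height = ys.max() - ys.min() + 1
            width = xs.max() - xs.min() + 1
            features.append({
                "class": k,
                "area": area,
                "centroid": (ys.mean(), xs.mean()),
                "elongation": max(height, width) / max(1, min(height, width)),
            })
    return features
```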
A series of IR measurements with a FLIR (Forward Looking Infrared) system during landing approaches to various airports has been performed. A real-time image processing procedure to detect and identify the runway and possible obstacles is discussed and demonstrated. It is based on IR image segmentation and information derived from synthetic vision data. The information extracted from the IR images will be combined with the appropriate information from an MMW (millimeter wave) radar sensor in the subsequent fusion processor. This fused information is intended to increase the pilot's situation awareness.
The problem of unsupervised clustering of data is formulated using Bayesian inference, with entropy used to define a prior. In the clustering problem we have to reduce the complexity of the gray-level description; therefore we minimize the entropy associated with the clustering histogram. This enables us to overcome the problems of defining the number of clusters a priori and of initializing their centers. Under the assumption of normally distributed data, the proposed clustering method reduces to a very fast deterministic algorithm that turns out to be an extension of the standard k-means clustering algorithm. Our model depends on a parameter that weights the prior term against the goodness-of-fit term. This hyper-parameter defines the coarseness of the clustering and is data independent. A heuristic argument is proposed to estimate this parameter. The new clustering approach was successfully tested on a database of 65 magnetic resonance images and on remote sensing images.
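A rough sketch of how an entropy prior on the cluster histogram can extend k-means is shown below. The assignment rule, the removal of empty clusters, and the parameter names are assumptions, not the authors' exact derivation; the weighting parameter lam trades the goodness of fit against the entropy term and thus controls the coarseness of the clustering.

```python
import numpy as np

def entropy_kmeans(x, k_max=20, lam=1.0, n_iter=50, seed=0):
    """Entropy-regularized k-means sketch: assign each sample to the cluster
    minimizing squared distance minus lam * log(cluster frequency); clusters
    that become empty are dropped, so the number of clusters is selected
    automatically and lam sets the coarseness."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float).reshape(len(x), -1)
    centers = x[rng.choice(len(x), size=k_max, replace=False)]
    probs = np.full(len(centers), 1.0 / len(centers))
    for _ in range(n_iter):
        d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        cost = d2 - lam * np.log(probs[None, :] + 1e-12)
        assign = cost.argmin(axis=1)
        keep = np.unique(assign)                       # drop empty clusters
        centers = np.stack([x[assign == k].mean(axis=0) for k in keep])
        probs = np.array([(assign == k).mean() for k in keep])
    return centers, probs
```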
Phase unwrapping is the key step in recovering terrain elevations from interferometric SAR data. It deals with the problem of estimating the absolute phase from observations of its noisy wrapped values and is an ill-posed inverse problem. We propose a Bayesian model-fitting solution using a fractal prior in the form of a multiscale stochastic process.
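As a minimal illustration of the problem statement only (the Bayesian model fitting with a fractal prior is not reproduced here), the sketch below shows the wrapping relation in one dimension for a noise-free phase ramp, where simple unwrapping already succeeds; the difficulty addressed by the paper arises for noisy two-dimensional data.

```python
import numpy as np

# The absolute phase is observed only modulo 2*pi; unwrapping recovers it.
absolute_phase = np.cumsum(np.full(200, 0.31))       # smooth ramp exceeding 2*pi
wrapped = np.angle(np.exp(1j * absolute_phase))      # observed values in (-pi, pi]
unwrapped = np.unwrap(wrapped)                       # trivial in the 1-D noise-free case
print(np.allclose(unwrapped, absolute_phase))        # True
```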
A new data-driven weight initialization method for the back-propagation learning algorithm is proposed, based on generating only those hyperplanes that cut the feature space region occupied by the input data. It speeds up training and decreases the chance of getting trapped in a local minimum. The conventional weight initialization and the new method are investigated on synthetic XOR data and on real remote sensing (SAR) data. Back-propagation with the new weight initialization consistently provided better results than the conventional initialization for the data investigated.
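One way to realize this idea is sketched below under assumptions (unit-norm random orientations, with each hyperplane anchored at a randomly chosen training sample); the paper's exact construction may differ.

```python
import numpy as np

def data_driven_init(x, n_hidden, seed=0):
    """Draw a random orientation for every hidden unit and choose its bias so
    that the corresponding hyperplane passes through a randomly selected
    training sample, i.e., every initial hyperplane cuts the region of feature
    space actually occupied by the data."""
    rng = np.random.default_rng(seed)
    n_samples, n_features = x.shape
    w = rng.standard_normal((n_hidden, n_features))
    w /= np.linalg.norm(w, axis=1, keepdims=True)
    anchors = x[rng.choice(n_samples, size=n_hidden, replace=True)]
    b = -(w * anchors).sum(axis=1)          # enforces w . x_anchor + b = 0
    return w, b

# Conventional small random initialization may place many hyperplanes far
# outside the data cloud, which slows the early back-propagation updates.
```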
Two classification approaches, based on texture and on fuzzy sets, were investigated for tropical forest regrowth mapping with Landsat TM data (Manaus area, Brazil). Texture-based classifiers (based on a Markov random field model) consistently provided higher classification accuracies on the testing set, indicating that they are better able to characterize the different tropical forest regeneration classes and two tree species (cecropia and vismia). Memberships derived from three classification algorithms (based on the probability density function, the a posteriori probability, and the Mahalanobis distance) were used for post-classification of the fuzzy image. Post-classification of the fuzzy image (summation of memberships in a neighborhood or application of a homogeneity approach) can increase the classification accuracies for the training and testing data by 10% compared with maximum likelihood classification for 11 classes of the tropical forest region. Texture-based classification and post-classification of the fuzzy image give comparable classification accuracies for the same 11 classes.
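The neighborhood-summation variant of the post-classification step can be sketched as follows; the window size and the use of a uniform averaging filter are assumptions made for illustration.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuzzy_postclassification(memberships, window=5):
    """For each class, aggregate the per-pixel membership values over a local
    window, then assign every pixel to the class with the largest aggregated
    membership. 'memberships' has shape (n_classes, rows, cols)."""
    aggregated = np.stack([uniform_filter(m, size=window) for m in memberships])
    return aggregated.argmax(axis=0)
```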