We explore the modeling and simulation of multispectral imaging through anisoplanatic atmospheric optical turbulence. We analyze how wavelength affects a number of key atmospheric turbulence statistics, including tilt and tilt variance, as well as the atmospheric optical transfer function. Here, we investigate the balance between diffraction and turbulence degradation as a function of wavelength. We also present a method for simulating atmospheric degradation of multispectral imagery using numerical wave propagation. Our approach uses a phase screen resampling method that models the same atmospheric realization with sampling parameters tailored to each wavelength. A number of multispectral simulation results are presented, along with a validation study that compares the empirical statistics from the simulation with their theoretical counterparts. Real image data are also studied to validate theoretical multispectral tilt statistics.
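Because the turbulence-induced optical path difference (OPD) is essentially achromatic, a phase screen realized at one wavelength can be reused at another by scaling the phase by the wavelength ratio and resampling onto a grid tailored to the new wavelength. The following is a minimal Python sketch of that idea; the `rescale_phase_screen` helper, the grid ratio, and the random stand-in screen are illustrative assumptions, not the paper's exact resampling procedure.

```python
import numpy as np
from scipy.ndimage import zoom

def rescale_phase_screen(phase, lam_src, lam_dst, grid_ratio):
    """Convert a phase screen realized at lam_src to lam_dst.

    The optical path difference imposed by the turbulence is achromatic,
    so the phase scales as lam_src / lam_dst.  The screen is then
    resampled (spline interpolation) onto a grid whose spacing has been
    tailored to the destination wavelength.
    """
    scaled = phase * (lam_src / lam_dst)      # same OPD, new wavelength
    return zoom(scaled, grid_ratio, order=3)  # resample to the new grid

# Example: reuse a 0.5 um screen at 1.5 um on a grid 1.2x denser.
screen_vis = np.random.randn(512, 512)        # stand-in, not a true Kolmogorov screen
screen_swir = rescale_phase_screen(screen_vis, 0.5e-6, 1.5e-6, 1.2)
```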
We present a deep learning approach for restoring images degraded by atmospheric optical turbulence. We consider the case of terrestrial imaging over long ranges with a wide field of view. This produces an anisoplanatic imaging scenario in which turbulence warping and blurring vary spatially across the image. The proposed turbulence mitigation (TM) method assumes that a sequence of short-exposure images is acquired. A block matching (BM) registration algorithm is applied to the observed frames for dewarping, and the resulting images are averaged. A convolutional neural network (CNN) is then employed to perform spatially adaptive restoration. We refer to the proposed TM algorithm as the block matching and CNN (BM-CNN) method. The CNN is trained on simulated data from a fast turbulence simulation tool capable of rapidly producing large amounts of degraded imagery from declared truth images. Testing is done using independent data simulated with a different, well-validated numerical wave-propagation simulator. Our proposed BM-CNN TM method is evaluated in a number of experiments using quantitative metrics; this quantitative analysis is made possible by having truth imagery from the simulations. A number of restored images are provided for subjective evaluation. We demonstrate that the BM-CNN TM method outperforms the benchmark methods in the scenarios tested.
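As a rough illustration of the BM-and-average front end (not the authors' implementation), the Python sketch below dewarps each observed frame toward a reference via per-block template matching and then temporally averages, assuming 8-bit grayscale frames; the function name and parameters are placeholders, and the trained CNN restoration stage, which would be applied to the fused result, is omitted.

```python
import numpy as np
import cv2

def bm_dewarp_average(frames, ref, block=32, search=8):
    """Dewarp each short-exposure frame toward a reference with block
    matching, then temporally average.  The CNN stage is omitted here."""
    h, w = ref.shape
    acc = np.zeros((h, w), np.float64)
    for f in frames:
        out = np.empty_like(ref)
        for y in range(0, h, block):
            for x in range(0, w, block):
                tpl = ref[y:y + block, x:x + block]
                bh, bw = tpl.shape
                # Search a small window around the block's nominal position.
                y0, x0 = max(0, y - search), max(0, x - search)
                win = f[y0:y + bh + search, x0:x + bw + search]
                res = cv2.matchTemplate(win, tpl, cv2.TM_CCOEFF_NORMED)
                _, _, _, (mx, my) = cv2.minMaxLoc(res)
                # Pull the matched block back to its reference position.
                out[y:y + bh, x:x + bw] = win[my:my + bh, mx:mx + bw]
        acc += out
    return (acc / len(frames)).astype(ref.dtype)
```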
In long-range imaging regimes, atmospheric turbulence degrades image quality. In addition to blurring, the turbulence causes geometric distortion effects that introduce apparent motion in acquired video. This is problematic for image processing tasks, including image enhancement and restoration (e.g., superresolution) and aided target recognition (e.g., vehicle trackers). To mitigate these warping effects from turbulence, it is necessary to distinguish between actual in-scene motion and apparent motion caused by atmospheric turbulence. Previously, the current authors generated a synthetic video by injecting moving objects into a static scene and then applying a well-validated anisoplanatic atmospheric optical turbulence simulator. With known per-pixel truth of all moving objects, a per-pixel Gaussian mixture model (GMM) was developed as a baseline technique. In this paper, the baseline technique has been modified to improve performance while decreasing computational complexity. Additionally, the technique is extended to patches such that spatial correlations are captured, which results in further performance improvement.
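For a concrete starting point, OpenCV ships a per-pixel GMM background subtractor (MOG2) that serves as an off-the-shelf analogue of the baseline described here; the sketch below applies it to a video file (the filename and parameter values are illustrative). The paper's modifications and patch-based extension are not reproduced.

```python
import cv2

# Per-pixel GMM background subtraction (OpenCV's MOG2 implementation).
# Pixels whose apparent motion is explained by the learned per-pixel
# mixture (e.g., turbulence-induced jitter about a static scene) are
# absorbed into the background model; persistent movers become foreground.
gmm = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16,
                                         detectShadows=False)

cap = cv2.VideoCapture("turbulent_sequence.avi")  # hypothetical input file
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = gmm.apply(frame)         # 0 = background, 255 = detected mover
    mask = cv2.medianBlur(mask, 5)  # light cleanup of turbulence speckle
cap.release()
```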
Inherent in the Air Force's mission of airborne intelligence, surveillance, and reconnaissance (ISR) is the need to collect data from sensors. Technology is constantly advancing and, as such, new sensors are constantly being produced. The manufacturers of these sensors typically bundle free software with their hardware for communicating with the sensor. These binaries work well for mature systems, as the interfaces and communication protocols are already firmly established. However, most research software is, by its very nature, immature and typically unable to communicate with sensor packages "out of the box." Because of this, researchers' productivity is hindered, as they must focus on hardware communication in addition to their immediate research goals. As such, a library for talking to common sensors and other hardware is needed. This paper describes the various libraries currently available and their limitations. It also documents a combined effort of the Air Force Research Lab (AFRL) and Wright State University (WSU) to create a "super library" that removes as many of the limitations of the individual libraries as possible.
Historically, the Air Force's research into aerial platforms for sensing systems has focused on low-, mid-, and high-altitude platforms. Though these systems are likely to comprise the majority of the Air Force's assets for the foreseeable future, they have limitations. Specifically, these platforms, their sensor packages, and their data exploitation software are unsuited for close-quarters surveillance, such as in alleys and inside buildings. Micro-UAVs have been gaining in popularity, especially non-fixed-wing platforms such as quad-rotors. These platforms are much more appropriate for confined spaces. However, the video exploitation techniques that can be used effectively differ from those for the typical nadir-looking aerial platform. This paper discusses the creation of a framework for testing existing and new video exploitation algorithms, and describes a sample micro-UAV-based tracker.
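As a hedged illustration of the kind of tracker such a framework might host (not the paper's tracker), the Python sketch below follows a user-selected template through a video with normalized cross-correlation, refreshing the template each frame; the function name and bounding-box convention are assumptions.

```python
import cv2

def track_template(video_path, bbox0):
    """Minimal template tracker: a stand-in for the sample micro-UAV tracker.
    Refreshing the template every frame is the simplest way to cope with the
    oblique, jittery viewpoint of a quad-rotor (vs. a stable nadir view)."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    x, y, w, h = bbox0  # initial target box: (x, y, width, height)
    tpl = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)[y:y + h, x:x + w]
    track = [(x, y)]
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        res = cv2.matchTemplate(gray, tpl, cv2.TM_CCOEFF_NORMED)
        _, _, _, (x, y) = cv2.minMaxLoc(res)
        tpl = gray[y:y + h, x:x + w]  # refresh template to follow the target
        track.append((x, y))
    cap.release()
    return track
```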
Due to the increased number of surveillance, information, and exploitation assets, and the wide variety of interfaces, protocols, etc. that these systems use, the interactions between these systems are rapidly growing more complex. Likewise, integrating a new component into existing systems is no longer a trivial task. To make modification and integration of components into a larger system easier, the Air Force Research Laboratory has developed the Sensor Processing Architecture for Data Exploitation (SPADE). The contribution of this paper is to discuss the successful integration of a vehicle tracker into the SPADE architecture, using Pursuer as the user interface.
Layered sensing is a relatively new construct in the repertoire of the US Air Force. Under the layered sensing paradigm, an area is surveyed by a multitude of sensors at varying altitudes, operating across many modalities. One recent push is to combine the outputs of these multi-sensor systems into a single image. However, if the sensor parameters are not properly adjusted, the contrast will vary greatly from camera to camera. This can create issues when performing tracking and analysis work. The contribution of this paper is to explore and evaluate various techniques for histogram equalization of Electro-Optical (EO) video sequences whose views are centered on a city. The performance of several histogram equalization methods is evaluated under the layered sensing construct.
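As a concrete example of two commonly compared approaches (not necessarily the exact set evaluated in the paper), the Python sketch below applies global histogram equalization and CLAHE to a grayscale EO frame; the filename and parameter values are illustrative.

```python
import cv2

frame = cv2.imread("eo_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input

# Global histogram equalization: one transfer curve for the whole frame.
global_eq = cv2.equalizeHist(frame)

# CLAHE: per-tile equalization with a clip limit, better at preserving
# local contrast in mixed urban scenes (bright roads next to shaded alleys).
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
local_eq = clahe.apply(frame)
```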