In 2014 we presented a concept for an Evolvable Space Telescope (EST) assembled on orbit in three stages, growing from a 4 x 12-meter telescope in Stage 1, to a 12-meter filled aperture in Stage 2, and then to a 20-meter filled aperture in Stage 3. Stage 1 is launched as a fully functional telescope and begins gathering science data immediately after checkout on orbit. This observatory is then periodically augmented in space with additional mirror segments, structures, and newer instruments to evolve the telescope over the years into a 20-meter space telescope. In this 2015 update of EST we focus on three items: 1) a restructured Stage 1 EST with three mirror segments forming an off-axis telescope (half of a 12-meter filled aperture); 2) more details on the value and architecture of the prime focus instrument accommodation; and 3) a more in-depth discussion of the essential in-space infrastructure, early ground testing, and a concept for an International Space Station testbed called MoDEST. In addition to the EST discussion, we introduce an alternative telescope architecture: a Rotating Synthetic Aperture (RSA), a rectangular primary mirror that can be rotated to fill the UV-plane. The original concept was developed by Raytheon Space and Airborne Systems for non-astronomical applications. In collaboration with Raytheon we have begun to explore the RSA approach as an astronomical space telescope and have initiated studies of its science and cost performance.
We present real-time 3D image processing of flash ladar data using our recently developed GPU parallel processing kernels. Our laboratory and airborne experience with flash ladar focal planes has shown that, per laser flash, typically only a small fraction of the pixels on the focal plane array produces a meaningful range signal. Therefore, to optimize overall data processing speed, the large quantity of uninformative data is filtered out of the data stream prior to the mathematically intensive point cloud transformation processing. This front-end pre-processing, which largely consists of control-flow instructions, is specific to the particular type of flash ladar focal plane array being used and is performed by the computer's CPU. The valid signals, along with their corresponding inertial and navigation metadata, are then transferred to a GPU device to perform range correction, geolocation, and ortho-rectification on each 3D data point, so that data from multiple frames can be properly tiled together either to create a wide-area map or to reconstruct an object from multiple look angles. The GPU parallel processing kernels were developed using OpenCL. Post-processing to perform fine registration between data frames via complex iterative steps also benefits greatly from this type of high-performance computing. The performance improvements obtained when using GPU processing to create corrected 3D images and to perform frame-to-frame fine registration are presented.
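As a concrete illustration of the processing split described above, the hedged sketch below separates a CPU-side validity filter from the per-point range correction and geolocation that would be offloaded to GPU kernels. It is a minimal NumPy sketch, not the paper's OpenCL implementation; the array names, intensity threshold, and simple line-of-sight geometry are assumptions made for illustration.

```python
# Hypothetical sketch: CPU pre-filtering of uninformative flash-ladar pixels,
# followed by the per-point transform that would be mapped to GPU kernels.
import numpy as np

def filter_valid_returns(intensity, tof, intensity_min=10.0):
    """CPU pre-processing: keep only pixels whose return is strong enough to be meaningful."""
    valid = intensity > intensity_min          # most pixels per flash carry no usable range
    return np.nonzero(valid)[0], tof[valid]

def geolocate_points(pixel_idx, tof, los_unit_vectors, R_body_to_ned, sensor_pos_ned,
                     c=2.99792458e8, clock_scale=1.0):
    """Per-point work suited to a GPU kernel: range correction, rotation into a
    local navigation frame, and translation to the sensor's GPS/INS position."""
    rng = 0.5 * c * tof * clock_scale                    # time of flight to one-way range
    vec_body = los_unit_vectors[pixel_idx] * rng[:, None]  # scale each pixel's line of sight
    return sensor_pos_ned + vec_body @ R_body_to_ned.T     # points now tile into a common map
```

In an OpenCL version of this step, each valid point would map to one work item performing the same arithmetic, which is why only the filtered points are transferred to the GPU.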
Northrop Grumman Aerospace Systems (NGAS) has a long legacy of developing and fielding hyperspectral sensors, including airborne and space-based systems covering the visible through Long Wave Infrared (LWIR) wavelength ranges. Most recently, NGAS has developed the Hyperspectral Airborne Terrestrial Instrument (HATI) family of hyperspectral sensors, which are compact airborne hyperspectral imagers designed to fly on a variety of platforms and to be integrated with other sensors in the NGAS instrument suite. The current sensor under development is the HATI-2500, a full-range Visible/Near Infrared (VNIR) through Short Wave Infrared (SWIR) instrument covering the 0.4-2.5 micron wavelength range with high spectral resolution (3 nm). The system includes a framing camera integrated with a GPS/INS to provide high-resolution multispectral imagery and precision geolocation. Its compact size and flexible acquisition parameters allow the HATI-2500 to be integrated on a large variety of aerial platforms. This paper describes the HATI-2500 sensor, its subsystems, and its expected performance specifications.
We demonstrate a numerical technique for registering multiple frames of point cloud data from an airborne 3D flash ladar system that we have designed, built, and flown. The technique stitches ladar data together even in the presence of inaccuracies in the line-of-sight pointing knowledge as well as instabilities in the time-of-flight clock frequency. It performs frame-to-frame in-track and cross-track stitching of data from multiple flight lines to create large-area maps of urban areas and vegetation. Filters remove data with spurious range values and high-intensity specular back-flashes, such as those from bodies of water. Signal averaging of nearly overlapping pixel data creates monolithic, wide-area 3D maps that are geolocated and thus readily superimposable with other types of sensor data, such as hyperspectral images, for fused-data exploitation. The accuracy of the numerical technique used for stitching data from different target types, such as urban, vegetation, or bare earth, was studied. This analysis provides guidance for future improvements of algorithms used for registering airborne 3D flash ladar point cloud data sets.
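For readers unfamiliar with frame-to-frame fine registration, the sketch below shows one common iterative approach: an iterative-closest-point style rigid alignment between two coarsely geo-located frames. It is an illustrative assumption, not the authors' algorithm; the function names and the outlier-rejection threshold are hypothetical, and a real pipeline would first apply the spurious-range and back-flash filters described above.

```python
# Hypothetical sketch of ICP-style fine registration between two ladar frames (N x 3 arrays).
import numpy as np
from scipy.spatial import cKDTree

def fine_register(frame_a, frame_b, iterations=20, max_pair_dist=2.0):
    """Estimate rotation R and translation t aligning frame_b onto frame_a."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(frame_a)
    moved = frame_b.copy()
    for _ in range(iterations):
        dist, idx = tree.query(moved)               # nearest-neighbor correspondences
        keep = dist < max_pair_dist                 # reject spurious pairings
        src, dst = moved[keep], frame_a[idx[keep]]
        src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
        U, _, Vt = np.linalg.svd(src_c.T @ dst_c)   # Kabsch solution for the best rotation
        R_step = Vt.T @ U.T
        if np.linalg.det(R_step) < 0:               # guard against reflections
            Vt[-1] *= -1
            R_step = Vt.T @ U.T
        t_step = dst.mean(0) - R_step @ src.mean(0)
        moved = moved @ R_step.T + t_step           # apply the incremental transform
        R, t = R_step @ R, R_step @ t + t_step      # accumulate the total transform
    return R, t, moved
```

The accumulated R and t quantify the residual pointing and timing errors that the stitching must absorb, which is one way to compare registration accuracy across target types.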
Northrop Grumman Aerospace Systems (NGAS) has developed the Hyperspectral Airborne Tactical Instrument (HATI), a compact airborne hyperspectral imager designed to fly on a variety of platforms and to be integrated with other sensors in the NGAS instrument suite. HATI has taken part in a variety of missions and has flown in conjunction with other NGAS airborne sensors, including the recently developed NGAS 3D flash ladar system, to demonstrate a multi-sensor data fusion approach. HATI is a push-broom sensor that gathers information in the 400 nm to 1700 nm wavelength range. Its compact size allows HATI to be mounted on commercial-off-the-shelf (COTS) aerial photography stabilization platforms and on a large variety of aerial platforms. In its most recent flight season, the HATI sensor was used to gather data for applications including remote classification of vegetation, forests, and man-made materials. The HATI instrument has undergone laboratory and in-situ performance validation and radiometric calibration. This paper describes the HATI sensor and recent data collection campaigns.
KEYWORDS: Climatology, Satellites, Decision support systems, Sensors, Environmental sensing, System integration, Systems modeling, Clouds, Space operations, Earth sciences
Northrop Grumman Corporation (NGC) provides systems and technologies that ensure national security across domains from undersea to outer space and in cyberspace. With a heritage of developing and integrating science instruments on space platforms and airborne systems, NGC is conducting an analysis of alternatives for a global observing system that integrates data collected from geostationary and polar-orbiting satellites with Unmanned Aerial System (UAS) platforms. This enhanced acquisition of environmental data will feed decision support systems such as the TouchTable® to deliver improved decision-making capabilities. Rapidly fusing and displaying multiple types of weather and ocean observations, imagery, and environmental data with geospatial data will create an integrated source of information for end users, such as emergency managers and planners, and will deliver innovative solutions to improve disaster warning, mitigate disaster impacts, and reduce the loss of life and property.
We present an analysis of alternatives for combinations of sensor platforms that integrate space and airborne systems with ground and ocean observing sensors, forming the basis for vertically integrated global observing systems with the capacity to improve measurements associated with hazard- and climate-related uncertainties.
The analyses include candidate sensors deployed on various configurations of satellites, including NPOESS, GOES-R, and future configurations, augmented by UAS vehicles such as Global Hawk and configured to deliver innovative environmental data collection capabilities over a range of environmental conditions, including severe hazards such as hurricanes and extreme wildland fires. The resulting approaches are evaluated against metrics that include technical feasibility, capacity to be integrated with evolving Earth science models and relevant decision support tools, and life-cycle costs.
Deriving water constituents (water clarity, turbidity, bottom type, and depth) from remote sensing continues to be a challenge in coastal waters. Because relatively large regions can be observed in a short amount of time, the development of data integration techniques that combine multiple elements from satellite and airborne sensors (e.g., AVIRIS, Hyperion, EOS, MODIS, and NPOESS) is highly desirable. Proficient implementation, however, is multifaceted. As concerns for homeland security have risen in priority, characterization of littoral domains has moved from being driven by environmentally sensitive issues to being a politically vital matter. In the vulnerable transitional area between ocean and land there exists a void of defined parameters, confident characterization, and reliable strategies for operational analysis. This paper surveys traditional optical and photonic techniques for the classification of maritime features, predominantly in the 0 to 100 meter depth range. We discuss the most recent methods and compare them by water depth and practicality, and we present the inherent physical limitations and constraints. The research presented here updates the ocean community, apprises security managers of the primary issues in using satellite and airborne data in littoral zones, and suggests preliminary paths for immediate innovation based on available techniques. This field has great opportunity for breakthroughs in technology, such as the NGST "OnePicture Workstation", providing useful information for critical decision making. This work provides an overview of this emerging technology designed to benefit harbor defense and port security, as well as promising strategies using data fusion, look-up tables (LUTs), higher-dimensional analysis, and new visualization techniques.
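As one example of the LUT-based strategies mentioned above, the short sketch below matches an observed water-leaving reflectance spectrum against a precomputed table of modeled spectra to retrieve depth and bottom type. It is a hedged illustration only; the table contents, variable names, and the Euclidean matching metric are assumptions rather than a method taken from the surveyed literature.

```python
# Hypothetical sketch of a look-up-table (LUT) inversion for shallow-water retrievals.
import numpy as np

def lut_retrieve(observed_rrs, lut_spectra, lut_depths, lut_bottoms):
    """Return (depth, bottom_type) of the LUT spectrum closest to the observation.

    observed_rrs : (n_bands,) measured reflectance spectrum
    lut_spectra  : (n_entries, n_bands) modeled spectra for a grid of depths and bottom types
    """
    # Euclidean spectral distance; angle-based metrics are a common alternative.
    d = np.linalg.norm(lut_spectra - observed_rrs[None, :], axis=1)
    best = int(np.argmin(d))
    return lut_depths[best], lut_bottoms[best]
```

Applied pixel by pixel to airborne or satellite imagery, this kind of matching is one of the simpler strategies compared in the survey, with practicality limited mainly by the fidelity of the modeled spectra and by water depth.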