Current degraded visual environment (DVE) solutions primarily support aviation and augment a pilot's ability to operate in a degraded environment, but the information a pilot needs to navigate safely is not necessarily in the format best suited to dismounted operators or to mission planners in the command and control center. The need exists for a real-time 3D common operating picture (COP) generating system that can provide enhanced mission planning capabilities as well as the real-time status and location of operating forces within that 3D COP. The capabilities and challenges that will be addressed are: 1) real-time processing of all disparate sensor data; 2) a database implementation that allows clients to query the COP for specific users, devices, timeframes, and locations; and 3) representation of the 3D COP to command and control elements as well as forward-deployed users of HoloLens, flat panel display, and iOS devices. The proposed Real-time Intelligence Fusion Service (RIFS) will operate in real time by receiving disparate data streams from sensors such as LiDARs, radars, and various localization methods, fusing them into a COP, and sending the COP to requesting clients. The application of RIFS would allow forward-deployed personnel and commanders to maintain a high degree of real-time passive situational awareness in 3D space, which would ultimately increase operational tempo and significantly mitigate risk to forward-deployed forces.
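The abstract does not specify the RIFS query interface; the following is a minimal Python sketch of what challenge 2 (clients querying the COP by user, device, timeframe, and location) might look like. All names here (CopQuery, CopEntity, query_cop) are hypothetical illustrations, not the actual RIFS API.

from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional, Tuple

@dataclass
class CopQuery:
    # Filters a client could apply when requesting a slice of the 3D COP.
    user_ids: List[str] = field(default_factory=list)        # operating forces of interest
    device_types: List[str] = field(default_factory=list)    # e.g. "hololens", "ios", "flat_panel"
    start: Optional[datetime] = None                          # timeframe of interest
    end: Optional[datetime] = None
    bbox: Optional[Tuple[float, float, float, float]] = None  # (xmin, ymin, xmax, ymax) location filter

@dataclass
class CopEntity:
    # One tracked user or device within the COP.
    entity_id: str
    device_type: str
    position: Tuple[float, float, float]  # x, y, z in the common reference frame
    timestamp: datetime

def query_cop(entities: List[CopEntity], q: CopQuery) -> List[CopEntity]:
    # Return the entities that satisfy every filter set on the query.
    results = []
    for e in entities:
        if q.user_ids and e.entity_id not in q.user_ids:
            continue
        if q.device_types and e.device_type not in q.device_types:
            continue
        if q.start is not None and e.timestamp < q.start:
            continue
        if q.end is not None and e.timestamp > q.end:
            continue
        if q.bbox is not None:
            x, y, _ = e.position
            xmin, ymin, xmax, ymax = q.bbox
            if not (xmin <= x <= xmax and ymin <= y <= ymax):
                continue
        results.append(e)
    return results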
Fusing 3D scenes generated from multiple, spatially distributed sensors produces a higher-quality data product with fewer shadows or islands in the data. As an example, while an airborne LiDAR system scans the exterior of a structure, a spatial mapping system generates a high-resolution scan of the interior. Fusing the exterior and interior scanned data streams allows the construction of a fully realized 3D representation of the environment by asserting an absolute reference frame. The implementation of this fused system allows simultaneous real-time streaming of point clouds from multiple assets, tracking of personnel and assets in that fused 3D space, and visualization of the result on a mixed-reality device. Several challenges were solved: 1) the tracking and synchronization of multiple independent assets; 2) identification of the network throughput required for large data sets; 3) the coordinate transformation of collected point cloud data to a common reference frame; and 4) the fused representation of all collected data. We leveraged our advancements in real-time point cloud processing to allow a user to view the single fused 3D image on a HoloLens. The user is also able to show or hide the fused features of the image and to manipulate it in six degrees of freedom plus scale. This fused 3D image allows a user to see a virtual representation of their immediate surroundings or allows remote users to gain knowledge of a distant location.
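As an illustration of challenge 3 above (transforming collected point clouds to a common reference frame), here is a minimal Python sketch assuming each sensor's pose relative to the absolute reference frame is known as a 4x4 homogeneous transform. The poses and scan data below are placeholders, not values from the system described.

import numpy as np

def to_common_frame(points: np.ndarray, sensor_pose: np.ndarray) -> np.ndarray:
    # points: (N, 3) array in the sensor's local frame.
    # sensor_pose: 4x4 homogeneous transform from sensor frame to world frame.
    # Returns the (N, 3) points expressed in the common world frame.
    homogeneous = np.hstack([points, np.ones((points.shape[0], 1))])  # (N, 4)
    return (sensor_pose @ homogeneous.T).T[:, :3]

# Example: fuse an exterior (airborne) and an interior scan into one cloud.
exterior_local = np.random.rand(1000, 3) * 50.0   # placeholder scan data
interior_local = np.random.rand(1000, 3) * 10.0

# Hypothetical poses: airborne LiDAR offset 100 m above the origin; interior
# scanner rotated 90 degrees about z and offset inside the structure.
airborne_pose = np.eye(4)
airborne_pose[2, 3] = 100.0
theta = np.pi / 2
interior_pose = np.array([[np.cos(theta), -np.sin(theta), 0.0, 12.0],
                          [np.sin(theta),  np.cos(theta), 0.0,  4.0],
                          [0.0,            0.0,           1.0,  0.0],
                          [0.0,            0.0,           0.0,  1.0]])

fused = np.vstack([to_common_frame(exterior_local, airborne_pose),
                   to_common_frame(interior_local, interior_pose)])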
The well-known Langley extrapolation technique produces measurements of atmospheric optical depth (AOD) by collecting direct sun irradiance at multiple zenith angles. One common application of this technique is in sun photometers such as those in NASA's AErosol RObotic NETwork (AERONET). This large, spatially distributed network collects time-averaged data from across the globe and, by applying Beer's Law, produces hourly estimates of AOD. While this technique has produced excellent data, the dependence on direct sun irradiance requires cloudless skies and line-of-sight to the sun. Atmospheric LIDARs, on the other hand, can operate in the presence of clouds and can also produce range-resolved measurements of AOD by applying the same Langley technique. For aerosol LIDARs, this technique requires that the LIDAR be capable of producing high-quality waveforms within the atmospheric coherence time and of taking measurements off zenith. At least two unique angles are required to produce data, although three or more are recommended. This paper will present an overview of the Langley technique applied with a 1064 nm atmospheric aerosol LIDAR, an overview of the LIDAR hardware and capabilities, sample data collected by the LIDAR, and challenges associated with this technique. It will be shown that while this technique is useful, it requires that measurements at all three angles be made when the atmosphere is reasonably horizontally homogeneous. Furthermore, the system optics, alignment, and laser power must be held constant (i.e., the LIDAR's system constant must remain the same for all measurements) for the data to be useful in a Langley analysis.
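For reference, the Langley relation underlying this analysis follows from Beer's Law in its standard textbook form (the notation here is illustrative, not reproduced from the paper): with V the measured signal, V_0 the system constant, tau the optical depth, and m the airmass at zenith angle theta,

\[
V(\theta) = V_0 \, e^{-m(\theta)\,\tau}, \qquad m(\theta) \approx \sec\theta
\quad\Longrightarrow\quad
\ln V(\theta) = \ln V_0 - m(\theta)\,\tau .
\]

A linear fit of ln V against airmass m over two or more unique angles yields the AOD tau as the magnitude of the slope and ln V_0 as the intercept, which is why the system constant (optics, alignment, and laser power) must stay fixed across all measurements, and why additional angles beyond the minimum two improve the fit.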