Paper
21 May 2015
Semantically enabled image similarity search
May V. Casterline, Timothy Emerick, Kolia Sadeghi, C. Alec Gosse, Brent Bartlett, Jason Casey
Abstract
Georeferenced data of various modalities are increasingly available for intelligence and commercial use; however, effectively exploiting these sources demands a unified data space capable of capturing the unique contribution of each input. This work presents a suite of software tools for representing geospatial vector data and overhead imagery in a shared high-dimensional vector, or "embedding," space that supports fused learning and similarity search across dissimilar modalities. While the approach is suitable for fusing arbitrary input types, including free text, the present work exploits the obvious but computationally difficult relationship between GIS and overhead imagery. GIS provides temporally smoothed but information-limited content, while overhead imagery provides an information-rich but temporally limited perspective. This processing framework includes some important extensions of concepts in the literature but, more critically, presents a means to accomplish them as a unified framework at scale on commodity cloud architectures.
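The abstract describes similarity search over a shared embedding space into which each modality has been projected. The paper does not give implementation details, but the core retrieval step can be sketched as a nearest-neighbor lookup under cosine similarity; the function name, dimensionality, and the random embeddings below are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

def cosine_similarity_search(query, index, k=3):
    """Return indices of the k index vectors most similar to the query.

    `index` is an (n, d) matrix whose rows are items (e.g., image tiles
    or GIS features) already projected into a common d-dimensional
    embedding space; `query` is a single d-dimensional embedding.
    """
    # Normalize rows so that a dot product equals cosine similarity.
    q = query / np.linalg.norm(query)
    idx = index / np.linalg.norm(index, axis=1, keepdims=True)
    scores = idx @ q
    # Highest-scoring (most similar) items first.
    return np.argsort(scores)[::-1][:k]

# Hypothetical embeddings: 10 items in a 4-dimensional shared space.
rng = np.random.default_rng(0)
index = rng.normal(size=(10, 4))
# A query that is a slightly perturbed copy of item 2, so item 2
# should be its nearest neighbor.
query = index[2] + 0.01 * rng.normal(size=4)

print(cosine_similarity_search(query, index, k=3))
```

Because cosine similarity ignores vector magnitude, a query embedded from one modality (say, a GIS feature) can retrieve nearest neighbors embedded from another (say, image tiles), provided both were mapped into the same space.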
© (2015) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
May V. Casterline, Timothy Emerick, Kolia Sadeghi, C. Alec Gosse, Brent Bartlett, and Jason Casey "Semantically enabled image similarity search", Proc. SPIE 9473, Geospatial Informatics, Fusion, and Motion Video Analytics V, 94730I (21 May 2015); https://doi.org/10.1117/12.2177409
KEYWORDS
Geographic information systems
Data modeling
Feature extraction
Image segmentation
Associative arrays
Image fusion
Image processing