KEYWORDS: Video acceleration, Video, Video surveillance, Surveillance, Video processing, Cameras, Infrared cameras, Video coding, Neodymium, Target recognition
Digital video mosaicking from Unmanned Aircraft Systems (UAS) is being used for many military and
civilian applications, including surveillance, target recognition, border protection, forest fire monitoring,
highway traffic control, and transmission line monitoring. Additionally, NASA is using
digital video mosaicking to explore the moon and planets such as Mars. In order to compute a "good"
mosaic from video captured by a UAS, the algorithm must deal with motion blur, frame-to-frame jitter
associated with an imperfectly stabilized platform, perspective changes as the camera tilts in flight, as well
as a number of other factors. The most suitable algorithms use SIFT (Scale-Invariant Feature Transform) to
detect features that are consistent between video frames. Using these features, the algorithm then estimates the
homography between two consecutive video frames, warps the frames to properly register the image data,
and finally blends the video frames into a seamless video mosaic. All of this processing places a heavy
load on the CPU, so it is almost impossible to compute a real-time video mosaic
on a single processor. Modern graphics processing units (GPUs) offer computational performance that far
exceeds current CPU technology, allowing for real-time operation.
This paper presents the development of a GPU-accelerated digital video mosaicking implementation and
compares its performance with a CPU implementation. Our tests are based on two sets of real video captured by a small UAS
aircraft: one set from an Infrared (IR) camera and one from an Electro-Optical (EO) camera. Our results show that we
can obtain a speed-up of more than 50 times using GPU technology, so real-time operation at a video
capture rate of 30 frames per second is feasible.
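The pipeline described in this abstract (SIFT feature detection, homography estimation, warping, and blending) can be sketched on the CPU with OpenCV as below. This is only a minimal, single-threaded reference assuming OpenCV's SIFT_create, findHomography, and warpPerspective; the function names and the simple averaging blend are illustrative and do not reproduce the authors' GPU implementation.

import cv2
import numpy as np

def register_pair(prev_gray, curr_gray):
    """Estimate the homography mapping curr_gray onto prev_gray using SIFT + RANSAC."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(prev_gray, None)
    kp2, des2 = sift.detectAndCompute(curr_gray, None)

    # Match descriptors and keep only those passing Lowe's ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des2, des1, k=2)
    good = [m[0] for m in matches if len(m) == 2 and m[0].distance < 0.7 * m[1].distance]

    src = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H

def blend_into_mosaic(mosaic, frame, H, canvas_size):
    """Warp the new frame into mosaic coordinates and average it over the overlap."""
    warped = cv2.warpPerspective(frame, H, canvas_size).astype(np.float32)
    base = mosaic.astype(np.float32)
    overlap = (warped > 0) & (base > 0)
    out = base + warped                                      # disjoint areas: keep whichever frame has data
    out[overlap] = 0.5 * (base[overlap] + warped[overlap])   # overlapping areas: simple averaging blend
    return np.clip(out, 0, 255).astype(np.uint8)

The abstract does not specify how the work is partitioned onto the GPU; presumably the feature detection, matching, and warping stages are the ones offloaded, since they dominate the cost of this CPU sketch.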
Image de-noising is a widely used technology in modern real-world surveillance systems. Without direct
knowledge of the noise model, however, methods can seldom achieve both de-noising and texture preservation well.
Most of the neighborhood fusion-based de-noising methods tend to over-smooth the images, which causes
a significant loss of detail. Recently, a new non-local means method has been developed, which is based
on the similarities among the different pixels. This technique results in good preservation of the textures;
however, it also causes some artifacts. In this paper, we utilize the scale-invariant feature transform (SIFT)
[1] method to find corresponding regions between different images, and then reconstruct the de-noised
images by a weighted sum of these corresponding regions. Both hard and soft criteria are chosen in order to
minimize the artifacts. Experiments applied to real unmanned aerial vehicle thermal infrared surveillance
video show that our method is superior to popular methods in the literature.
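As a rough sketch of the patch-fusion idea in this abstract, the code below uses SIFT matches to locate corresponding regions in neighboring frames and blends each matched reference patch with a similarity-weighted average of its correspondences. The patch size, Gaussian weight, and rejection threshold are placeholders standing in for the paper's hard and soft criteria, which are not given in the abstract.

import cv2
import numpy as np

def denoise_with_sift_patches(ref, neighbors, patch=7, h=10.0, reject=2000.0):
    """Blend each matched reference patch with its correspondences in neighboring frames."""
    sift = cv2.SIFT_create()
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    kp_r, des_r = sift.detectAndCompute(ref, None)
    out = ref.astype(np.float32)
    r = patch // 2
    H, W = ref.shape[:2]

    for img in neighbors:
        kp_n, des_n = sift.detectAndCompute(img, None)
        for m in matcher.match(des_r, des_n):
            x1, y1 = map(int, kp_r[m.queryIdx].pt)
            x2, y2 = map(int, kp_n[m.trainIdx].pt)
            if not (r <= x1 < W - r and r <= y1 < H - r and
                    r <= x2 < img.shape[1] - r and r <= y2 < img.shape[0] - r):
                continue                      # skip patches that fall off an image border
            p_ref = ref[y1-r:y1+r+1, x1-r:x1+r+1].astype(np.float32)
            p_ngb = img[y2-r:y2+r+1, x2-r:x2+r+1].astype(np.float32)
            d = float(np.mean((p_ref - p_ngb) ** 2))
            if d > reject:                    # hard criterion: drop clearly dissimilar regions
                continue
            w = np.exp(-d / (h * h))          # soft criterion: similarity-based weight
            out[y1-r:y1+r+1, x1-r:x1+r+1] = (
                out[y1-r:y1+r+1, x1-r:x1+r+1] + w * p_ngb) / (1.0 + w)
    return np.clip(out, 0, 255).astype(np.uint8)

In contrast to neighborhood fusion applied blindly, averaging only over regions that pass the similarity test is what preserves texture while still suppressing noise.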
In traditional super-resolution methods, researchers generally assume that accurate
subpixel image registration parameters are given a priori. In reality, accurate image registration on
a subpixel grid is the single most critically important step for the accuracy of super-resolution
image reconstruction. In this paper, we introduce affine invariant features to improve subpixel
image registration, which considerably reduces the number of mismatched points and hence makes
traditional image registration more efficient and more accurate for super-resolution video
enhancement. Affine invariant features are invariant to affine transformations, including scale,
rotation, and translation. They are extracted from the second moment matrix through the
integration and differentiation covariance matrices. The experimental results show that affine
invariant interest points are more robust to perspective distortion and present more accurate
matching than traditional Harris/SIFT corners. In our experiments, all matching affine invariant
interest points are found correctly. In addition, for the same super-resolution problem, we can use
far fewer affine invariant points than Harris/SIFT corners to obtain good super-resolution
results.
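For readers unfamiliar with the second moment matrix mentioned above, the sketch below computes its standard scale-adapted form (Gaussian derivatives at a differentiation scale, integrated over a Gaussian window at an integration scale) together with a Harris-style cornerness measure. The scales sigma_d, sigma_i and the constant k are illustrative defaults, and the iterative affine shape adaptation needed for fully affine invariant points is not reproduced here.

import cv2
import numpy as np

def second_moment_response(gray, sigma_d=1.0, sigma_i=2.0, k=0.04):
    """Cornerness map det(M) - k * trace(M)^2 from the scale-adapted second moment matrix M."""
    img = gray.astype(np.float32)
    # Gaussian derivatives at the differentiation scale sigma_d.
    smoothed = cv2.GaussianBlur(img, (0, 0), sigma_d)
    Lx = cv2.Sobel(smoothed, cv2.CV_32F, 1, 0, ksize=3)
    Ly = cv2.Sobel(smoothed, cv2.CV_32F, 0, 1, ksize=3)
    # Integrate the derivative products over a Gaussian window at the integration scale sigma_i.
    Mxx = sigma_d ** 2 * cv2.GaussianBlur(Lx * Lx, (0, 0), sigma_i)
    Mxy = sigma_d ** 2 * cv2.GaussianBlur(Lx * Ly, (0, 0), sigma_i)
    Myy = sigma_d ** 2 * cv2.GaussianBlur(Ly * Ly, (0, 0), sigma_i)
    return (Mxx * Myy - Mxy ** 2) - k * (Mxx + Myy) ** 2

Interest points are typically taken as local maxima of this response, and the eigen-structure of the second moment matrix at each point then drives the affine normalization.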