KEYWORDS: Particle filters, Detection and tracking algorithms, Process modeling, Particles, Algorithm development, Visual process modeling, Signal to noise ratio, Machine vision, Computer vision technology, Dynamical systems
In designing a tracking algorithm, combining several different features, e.g., color histograms, gradient histograms, and other object descriptors, is preferable because it increases tracking robustness. In this paper, we propose a
multiple-feature fusion framework that improves tracking by assigning appropriate weights to the individual features.
The feature weights are optimally obtained by a waterfilling procedure that maximizes the mutual information
between target object features and query features. In particular, this paper focuses on a particle filter
implementation of the multiple-feature fusion framework. Our experiments show that object tracking with
multiple features outperforms single-feature tracking methods, and that the proposed optimal
feature weighting increases the robustness of multiple-feature tracking.
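The waterfilling step described above can be sketched as follows. This is a minimal illustration of classic waterfilling allocation, assuming each feature has a scalar "gain" standing in for the per-feature mutual-information estimate between target and query features; the function name, the gain values, and the unit weight budget are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def waterfilling_weights(gains, total=1.0):
    """Water-filling: allocate a fixed weight budget across features so that
    more informative features (higher 'gain') receive more weight, and
    sufficiently uninformative features receive zero weight.

    The gains stand in for per-feature mutual-information estimates
    (illustrative assumption)."""
    gains = np.asarray(gains, dtype=float)
    order = np.argsort(gains)[::-1]        # most informative first
    sorted_gains = gains[order]
    weights = np.zeros_like(gains)
    # Find the water level mu such that sum(max(0, mu - 1/g_i)) == total,
    # trying progressively smaller active sets of features.
    for k in range(len(gains), 0, -1):
        active = sorted_gains[:k]
        mu = (total + np.sum(1.0 / active)) / k
        alloc = mu - 1.0 / active
        if alloc[-1] > 0:                  # all k active features get weight > 0
            weights[order[:k]] = alloc
            break
    return weights

# Three features: the weakest one falls below the water level and gets weight 0.
w = waterfilling_weights([4.0, 2.0, 0.1])
```

In a tracking loop, the resulting weights would combine the per-feature likelihoods of each particle into a single fused weight before resampling.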
KEYWORDS: Video, Nonlinear filtering, Linear filtering, Image filtering, Composites, Video processing, Digital filtering, Signal processing, Image enhancement, Image processing
A large number of comb filtering techniques for national television system committee (NTSC) or phase alternating line (PAL) color decoding have been researched and developed over the last three decades. Comb filtering separates the luminance and the quadrature-amplitude-modulated (QAM) chrominance information in a composite video burst signal (CVBS). However, extracting the luminance and chrominance components from a composite video image is difficult because cross-talk between them produces undesirable image artifacts. Three-dimensional (3-D) comb filters using spatio-temporal filtering kernels, as well as adaptive two-dimensional (2-D) neural-based comb-filtering approaches, were developed to alleviate dot crawl artifacts; however, they show limitations in color decoding. This paper presents an effective dot crawl artifact reduction algorithm for composite video signals, in which the undesirable dot crawl artifact is significantly reduced without losing fine image details. The proposed composite video artifact removal algorithm filters only the detected candidate regions specified by a dot crawl artifact decision map. The possible comb-filtering error regions are generated on the video image using luminance and chrominance edge information. Simulation and analysis show that the proposed algorithm, using nonlinear bilateral filtering, efficiently removes dot crawl artifacts from composite video images and supports further video enhancement techniques.
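The core idea of filtering only the detected candidate regions can be sketched as below: a bilateral filter (spatial Gaussian times range Gaussian, so edges are preserved) is applied only at pixels flagged by a decision map, leaving the rest of the image untouched. The mask construction, function name, and all parameter values are illustrative assumptions; the paper's actual decision map is built from luminance and chrominance edge information.

```python
import numpy as np

def bilateral_filter_masked(img, mask, radius=2, sigma_s=1.5, sigma_r=0.5):
    """Apply a bilateral filter only at pixels flagged by the decision map
    `mask`, mirroring the idea of filtering only detected dot-crawl
    candidate regions. `img` is a 2-D float array in [0, 1].
    All parameter values here are illustrative assumptions."""
    h, w = img.shape
    out = img.copy()
    for y, x in zip(*np.nonzero(mask)):
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        patch = img[y0:y1, x0:x1]
        # Spatial Gaussian over the local window: nearby pixels count more.
        yy, xx = np.mgrid[y0:y1, x0:x1]
        spatial = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
        # Range Gaussian: similar intensities count more, so real edges
        # are kept while the high-frequency crawl pattern is smoothed.
        rng = np.exp(-((patch - img[y, x]) ** 2) / (2 * sigma_r ** 2))
        weights = spatial * rng
        out[y, x] = np.sum(weights * patch) / np.sum(weights)
    return out

# A lone bright pixel (a stand-in for a crawl dot) is smoothed where the
# decision map flags it; unflagged pixels pass through unchanged.
img = np.zeros((8, 8))
img[3, 3] = 1.0
mask = np.zeros((8, 8), dtype=bool)
mask[3, 3] = True
out = bilateral_filter_masked(img, mask)
```

In the full pipeline, the mask would come from comparing luminance and chrominance edge maps rather than being set by hand.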