This paper presents a region-based segmentation method that automatically extracts moving objects from video sequences. Non-moving objects can also be segmented through a graphical user interface. The segmentation scheme is inspired by existing methods based on the watershed algorithm. The over-segmented regions produced by the watershed are first organized in a binary partition tree according to a similarity criterion; this tree determines the merging order. Each region is then merged with its most similar neighbour according to a spatio-temporal criterion combining region colors and temporal color continuity. The merging can be stopped either by fixing the final number of regions a priori or by markers given through the graphical user interface. Markers are also used to assign a class to non-moving objects. The classification of moving objects is obtained automatically by computing the Change Detection Mask. To improve the accuracy of the contours of the segmented objects, we apply a simple post-processing filter that refines the edges between the different video object planes.
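The merging step above can be sketched as a greedy loop that repeatedly fuses the most similar pair of adjacent regions until a target count is reached. This is a minimal, hypothetical illustration: the paper's spatio-temporal criterion is reduced here to a plain color distance, and the region labels, mean colors, and adjacency graph are toy inputs, not the paper's data.

```python
def color_dist(c1, c2):
    """Euclidean distance between two mean-color tuples."""
    return sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5

def merge_regions(colors, adjacency, target):
    """Greedily merge the most similar adjacent regions until `target` remain.

    colors:    {label: mean-color tuple}
    adjacency: {label: set of neighbouring labels}
    """
    colors = dict(colors)
    adjacency = {r: set(n) for r, n in adjacency.items()}
    sizes = {r: 1 for r in colors}  # region sizes for weighted color averaging
    while len(colors) > target:
        # find the most similar pair of adjacent regions
        _, a, b = min((color_dist(colors[a], colors[b]), a, b)
                      for a in colors for b in adjacency[a])
        # merge b into a: size-weighted mean color, union of neighbourhoods
        na, nb = sizes[a], sizes[b]
        colors[a] = tuple((na * x + nb * y) / (na + nb)
                          for x, y in zip(colors[a], colors[b]))
        sizes[a] = na + nb
        adjacency[a] |= adjacency[b] - {a, b}
        for n in adjacency.pop(b):  # repoint b's neighbours to a
            adjacency[n].discard(b)
            if n != a:
                adjacency[n].add(a)
        del colors[b], sizes[b]
    return colors
```

In the paper's scheme the merge order is precomputed in the binary partition tree rather than re-searched each iteration; the greedy loop above only illustrates the fusion criterion and stopping rule.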
KEYWORDS: Digital watermarking, Video, Stars, Content addressable memory, Cameras, Detection and tracking algorithms, Sensors, Signal processing, Scene classification, Digital video discs
Many watermarking applications may benefit from the availability of both the original and the watermarked media content to perform detection. However, one must take into account that the watermarked media may have undergone deformations such that a direct correspondence between the original and watermarked signals is no longer possible. Extra processing is therefore needed to benefit from the availability of both signals. In video applications, besides spatial deformations, one must also consider possible modifications of the temporal structure of the watermarked content. Such distortions include frame-rate modification, scene removal and temporal cropping. In this paper, we show how to perform automatic frame alignment of spatially and temporally deformed video sequences. Our approach consists of establishing a correspondence between automatically detected key-frames in the two sequences. The key-frame detection is inspired by existing methods for scene-cut localization and semantic (MPEG-7-like) scene classification. Several simulations show that the method can cope with common temporal deformations.
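One way to establish such a correspondence between two key-frame sequences, while tolerating scene removal and temporal cropping, is edit-distance-style dynamic programming over frame signatures. The sketch below is a hypothetical illustration, not the paper's method: signatures are assumed to be short feature vectors (e.g. coarse color histograms), the distance is a plain L1 metric, and the gap cost modeling a dropped scene is an arbitrary constant.

```python
def signature_dist(s1, s2):
    """L1 distance between two frame signatures (assumed feature vectors)."""
    return sum(abs(a - b) for a, b in zip(s1, s2))

def align_keyframes(sigs_a, sigs_b, gap_cost=2.0):
    """Return (i, j) index pairs matching key-frames of A to key-frames of B,
    minimizing total match cost plus gap penalties for unmatched frames."""
    n, m = len(sigs_a), len(sigs_b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if i < n:  # skip a key-frame of A (e.g. scene removed in B)
                cost[i + 1][j] = min(cost[i + 1][j], cost[i][j] + gap_cost)
            if j < m:  # skip a key-frame of B
                cost[i][j + 1] = min(cost[i][j + 1], cost[i][j] + gap_cost)
            if i < n and j < m:  # match the two key-frames
                d = signature_dist(sigs_a[i], sigs_b[j])
                cost[i + 1][j + 1] = min(cost[i + 1][j + 1], cost[i][j] + d)
    # traceback: recover which cells were reached by a match
    pairs, i, j = [], n, m
    while i > 0 or j > 0:
        if (i > 0 and j > 0 and
                abs(cost[i][j] - (cost[i - 1][j - 1]
                    + signature_dist(sigs_a[i - 1], sigs_b[j - 1]))) < 1e-9):
            pairs.append((i - 1, j - 1))
            i, j = i - 1, j - 1
        elif i > 0 and abs(cost[i][j] - (cost[i - 1][j] + gap_cost)) < 1e-9:
            i -= 1
        else:
            j -= 1
    return pairs[::-1]
```

With a middle scene removed from the second sequence, the alignment matches the surviving key-frames and leaves the removed one unpaired, which is the behavior the temporal-deformation experiments described above rely on.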