Video stabilization techniques remove unintentional camera motion from a video sequence, yielding better-looking footage for the final user. We present a low-power rototranslational solution, extending our previous work, which addressed translational motion only. The proposed technique achieves a high degree of robustness with respect to common difficult conditions such as noise perturbations, illumination changes, and motion blur. It also copes with regular patterns and moving objects, and it is very precise, reaching about 7% improvement in jitter attenuation compared to previous results. Overall performance is also competitive in terms of computational cost: the method runs at more than 30 frames/s on VGA sequences with an ARM926EJ-S CPU clocked at just 100 MHz.
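As a rough illustration of the smoothing stage common to stabilizers of this kind, the sketch below low-pass filters an estimated per-frame rototranslational motion (dx, dy, rotation) and returns the per-frame correction to apply. The moving-average filter and window size are illustrative assumptions, not the paper's actual filtering scheme.

```python
import numpy as np

def stabilize_params(frame_motion, window=15):
    """Smooth the cumulative rototranslational camera path (one row per
    frame: dx, dy, theta) with a moving average; the residual between the
    smooth path and the measured path is the jitter to compensate.
    A generic sketch, assuming an odd window size."""
    path = np.cumsum(frame_motion, axis=0)          # accumulated camera path
    kernel = np.ones(window) / window
    pad = window // 2
    padded = np.pad(path, ((pad, pad), (0, 0)), mode="edge")
    smooth = np.stack([np.convolve(padded[:, i], kernel, mode="valid")
                       for i in range(path.shape[1])], axis=1)
    return smooth - path                            # per-frame correction
```

A static camera (all-zero motion estimates) yields an all-zero correction, as expected.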
The output quality of an image filter for reducing noise without damaging the underlying signal strongly depends on the
accuracy of the noise model in characterizing the noise introduced by the acquisition device. In this paper we provide
a solution for characterizing signal dependent noise injected at shot time by the image sensor. Different fitting models
describing the behavior of noise samples are analyzed, with the aim of finding a model that offers the most accurate
coverage of the sensor noise under any of its operating conditions. The noise fitting equation minimizing the residual error
is then identified. Moreover, a novel algorithm able to obtain the noise profile of a generic image sensor without the need for
a controlled environment is proposed. Starting from a set of heterogeneous CFA images, the parameters of the noise model
are estimated by a voting-based estimator.
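Signal-dependent sensor noise is commonly modeled with a variance that grows linearly with intensity, var(I) = a·I + b. The sketch below fits such a model from (intensity, variance) sample pairs using plain least squares; note this is a simplified stand-in for the voting-based estimator the abstract describes, and the linear form is an assumption about the chosen fitting model.

```python
import numpy as np

def fit_noise_model(intensities, variances):
    """Least-squares fit of the signal-dependent noise model
    var(I) = a * I + b from measured (intensity, variance) pairs.
    A simplified substitute for a voting-based estimator."""
    A = np.stack([intensities, np.ones_like(intensities)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, variances, rcond=None)
    return a, b
```

On noiseless synthetic data the fit recovers the generating parameters exactly.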
The processing pipeline of a digital camera converts the RAW image acquired by the sensor into a representation of the original scene that should be as faithful as possible. Two modules are mainly responsible for the color-rendering accuracy of a digital camera: the first is the illuminant estimation and correction module, and the second is the color matrix transformation aimed at adapting the color response of the sensor to a standard color space. Together, these two modules form what may be called the color correction pipeline. We design and test new color correction pipelines that exploit different illuminant estimation and correction algorithms, tuned and automatically selected on the basis of the image content. Since illuminant estimation is an ill-posed problem, illuminant correction is not error-free. An adaptive color matrix transformation module is optimized taking into account the behavior of the first module, in order to alleviate the amplification of color errors. The proposed pipelines are tested on a publicly available dataset of RAW images. Experimental results show that exploiting the cross-talk between the modules of the pipeline can lead to higher color-rendition accuracy.
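The two-stage structure of such a color correction pipeline can be sketched as follows: a diagonal (von Kries-style) illuminant compensation followed by a 3x3 color matrix transform. The concrete gains and matrix here are placeholders; a real pipeline calibrates both per sensor and, as the abstract describes, adapts them to the image content.

```python
import numpy as np

def color_correct(rgb, illuminant, matrix):
    """Minimal two-stage color correction sketch:
    1) diagonal illuminant compensation (divide by the estimated
       illuminant, a von Kries transform),
    2) 3x3 color matrix mapping sensor RGB to a standard color space.
    Inputs: rgb (H, W, 3) in [0, 1], illuminant (3,), matrix (3, 3)."""
    wb = rgb / illuminant                 # diagonal transform
    out = wb @ matrix.T                   # color space conversion
    return np.clip(out, 0.0, 1.0)
```

With a neutral illuminant and an identity matrix the image passes through unchanged, which is a useful sanity check.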
Computational Aesthetics applied to digital photography is becoming an interesting issue in different frameworks
(e.g., photo album summarization, imaging acquisition devices). Although it is widely believed, and can often be
experimentally demonstrated, that aesthetics is mainly subjective, we aim to find some formal or mathematical
explanations of aesthetics in photographs. We propose a scoring function that gives an aesthetic evaluation of
digital portraits and group pictures, taking into account the aspect ratio of faces, their perceptual goodness in terms
of skin lighting, and their position. Well-known composition rules (e.g., the rule of thirds) are also considered,
especially for single portraits. Both subjective and quantitative experiments have confirmed the effectiveness of
the proposed methodology.
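One term that such a scoring function might contain is a rule-of-thirds measure: how close a detected face centre falls to the nearest of the four thirds intersection points. The function below is a hypothetical illustration of that single term, not the paper's actual scoring function.

```python
import numpy as np

def thirds_score(cx, cy, w, h):
    """Score in [0, 1] for how close a face centre (cx, cy) lies to the
    nearest rule-of-thirds power point of a w x h image: 1.0 when the
    centre sits exactly on a power point, decreasing with distance.
    A hypothetical single term of a larger aesthetic score."""
    points = [(w * i / 3, h * j / 3) for i in (1, 2) for j in (1, 2)]
    d = min(np.hypot(cx - px, cy - py) for px, py in points)
    return 1.0 - d / np.hypot(w, h)     # normalize by the image diagonal
```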
Digital video stabilization makes it possible to acquire video sequences without disturbing jerkiness by removing from the image
sequence the effects caused by unwanted camera movements. One of the bottlenecks of these approaches is the local motion
estimation step. In this paper we propose a Block Selector able to speed up block-matching-based video stabilization
techniques without considerably degrading stabilization performance. Both history and random criteria are taken into
account in the selection process. Experiments on real cases confirm the effectiveness of the proposed approach even in
critical conditions.
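The combination of history and random criteria could take the following shape: keep blocks that matched reliably in previous frames, and add a small random sample of the others to keep probing new regions. This is a sketch of the idea under assumed criteria and fractions, not the paper's exact selector.

```python
import random

def select_blocks(blocks, history, keep=0.5, rnd_frac=0.2, seed=0):
    """Select a subset of candidate blocks for block matching.
    history maps block id -> count of past reliable matches.
    Blocks with a positive history are kept first (up to a cap),
    then a random fraction of the remaining blocks is added."""
    rng = random.Random(seed)
    good = [b for b in blocks if history.get(b, 0) > 0]
    good = good[: int(len(blocks) * keep)]           # history criterion
    rest = [b for b in blocks if b not in good]
    extra = rng.sample(rest, min(len(rest), int(len(blocks) * rnd_frac)))
    return good + extra                              # random criterion
```

Matching only the selected subset is what yields the speed-up over exhaustive block matching.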
Accurate noise level estimation is essential to ensure good performance of noise reduction filters. Noise contaminating
raw images is typically modeled as additive, white, and Gaussian distributed (AWGN); however, raw images
are affected by a mixture of noise sources that overlap according to a signal-dependent noise model. Hence, the
assumption of a constant noise level throughout the dynamic range is a simplification that does not allow
precise sensor noise characterization and filtering; in fact, the local noise standard deviation depends on the signal
level measured at each location of the CFA (Color Filter Array) image.
This work proposes a method for determining the noise curves that map each CFA signal intensity to its
corresponding noise level, without the need for a controlled test environment and specific test patterns. The
process consists of analyzing sets of heterogeneous raw CFA images, allowing noise characterization of any image
sensor. In addition, we show how the estimated noise level curves can be exploited to filter a CFA image using
an adaptive signal-dependent Gaussian filter.
Despite the great advances that have been made in the field of digital photography and CMOS/CCD sensors, several
sources of distortion continue to be responsible for image quality degradation. Among them, a great role is played by
sensor noise and motion blur. Of course, longer exposure times usually lead to better image quality, but the change in the
photocurrent over time, due to motion, can lead to motion blur. The proposed low-cost technique deals with this
problem using a multi-capture denoising algorithm, obtaining good quality with a substantial reduction of
motion blur effects.
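The core trade-off behind multi-capture approaches can be sketched in a few lines: averaging N aligned short-exposure frames reduces the noise variance by roughly 1/N, while each short exposure individually limits motion blur. Frame alignment is assumed already done here; the paper's algorithm is more elaborate than a plain average.

```python
import numpy as np

def multi_capture_denoise(frames):
    """Average N aligned short-exposure frames. Noise std drops roughly
    as 1/sqrt(N) while each short capture avoids long-exposure blur.
    A minimal sketch assuming the frames are already registered."""
    stack = np.stack(frames, axis=0).astype(np.float64)
    return stack.mean(axis=0)
```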
This paper presents an efficient solution for digital image sharpening, Adaptive Directional Sharpening with
Overshoot Control (ADSOC), a method based on a high-pass filter able to perform stronger sharpening in the detailed
zones of the image while preserving homogeneous regions. The basic objective of this approach is to reduce undesired
side effects: sharpening applied along strong edges or within uniform regions can produce unpleasant ringing artifacts
and noise amplification, which are the most common drawbacks of sharpening algorithms. ADSOC allows the
user to choose the ringing intensity and does not amplify the luminance of isolated noisy pixels. Moreover,
ADSOC operates orthogonally to the direction of the edges in the blurred image, in order to yield more
effective contrast enhancement. Experiments showed good algorithm performance in terms of both visual quality
and computational complexity.
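The overshoot-control idea can be illustrated with a non-directional unsharp mask whose result is clamped to a user-chosen margin around the local min/max, so ringing along strong edges stays bounded. This is a generic sketch in the spirit of ADSOC; it omits the directional (edge-orthogonal) filtering the method actually uses.

```python
import numpy as np

def sharpen_overshoot_control(img, amount=1.0, overshoot=10.0):
    """Unsharp-mask sharpening with overshoot control: the sharpened
    value is clamped to [local_min - overshoot, local_max + overshoot]
    over the 3x3 neighbourhood, bounding ringing near strong edges."""
    h, w = img.shape
    pad = np.pad(img, 1, mode="edge")
    win = np.stack([pad[i:i + h, j:j + w]
                    for i in range(3) for j in range(3)], axis=0)
    blur = win.mean(axis=0)
    sharp = img + amount * (img - blur)      # high-pass boost
    lo = win.min(axis=0) - overshoot         # user-controlled margin
    hi = win.max(axis=0) + overshoot
    return np.clip(sharp, lo, hi)
```

On a flat region the high-pass response is zero, so homogeneous areas pass through untouched, matching the stated design goal.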
DCT-based compression engines [1,2] are well known to introduce color artifacts on the processed input frames, in
particular at low bit rates. In video standards such as MPEG-2 [3], MPEG-4 [4], and H.263 [5], and in still picture standards such as
JPEG [6,7], blocking and ringing distortions are well understood, and different approaches have been developed to
reduce these effects [8-11]. On the other hand, other kinds of phenomena have not been deeply investigated. Among
them, the chromatic color bleeding effect has only recently received proper attention [12,13]. The scope of this paper is to
propose and describe an innovative and powerful algorithm to overcome this kind of color artifact.
Illuminant estimation plays an important role in many application domains, such as digital still cameras and mobile phones, where the final image quality can be heavily affected by poor compensation of the ambient illumination. In this paper we present a device-independent algorithm for illuminant estimation and compensation directly in the color filter array (CFA) domain of digital still cameras. The proposed algorithm takes into account both chromaticity and intensity information of the image data, and performs the illuminant compensation by a diagonal transform. It works by combining a spatial segmentation process with empirically designed weighting functions aimed at selecting the scene objects that carry more information for the light chromaticity estimation. The algorithm has been designed using an experimental framework developed by the authors, and it has been evaluated on a database of real scene images acquired under different, carefully controlled illuminant conditions. The results show that a combined multi-domain pixel analysis improves performance compared to single-domain pixel analysis.
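For context, the simplest classical baseline of this family is the gray-world estimate followed by the diagonal transform mentioned above. The sketch below shows that baseline; it is not the CFA-domain weighted estimator the abstract describes, which replaces the naive global average with segmentation and weighting.

```python
import numpy as np

def gray_world_compensate(rgb):
    """Gray-world illuminant estimation and diagonal (von Kries)
    compensation: assume the average scene color is achromatic,
    estimate the illuminant as the per-channel mean, and apply
    per-channel gains that equalize the channel means."""
    illum = rgb.reshape(-1, 3).mean(axis=0)          # estimated illuminant
    gains = illum.mean() / np.maximum(illum, 1e-12)  # diagonal transform
    return rgb * gains
```

After compensation the three channel means coincide, which is exactly the gray-world constraint.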
New approaches to color interpolation based on the Discrete Wavelet Transform are described. The Bayer data are split into the three colour components; for each component the Wavelet Coefficient Interpolation (WCI) algorithm is applied, and the results are combined to obtain the final colour-interpolated image. A further anti-aliasing algorithm can be applied in order to reduce false colours. A first approach interpolates wavelet coefficients starting from a spatial analysis of the input image: an interpolation step based on threshold levels associated with the spatial correlation of the input image pixels is used. A second approach interpolates wavelet coefficients starting from the analysis of known coefficients of the second transform level. The resolution of each wavelet transform level is twice that of the successive one, so a correspondence can be assumed among wavelet coefficients belonging to successive sub-bands. The visual quality of the interpolated RGB images is improved, reducing zipper and aliasing effects. Moreover, in embedded systems that use JPEG2000 compression, a low computational cost is achieved in both cases, since the first approach requires only some threshold evaluations and the IDWT step, while the second involves only the DWT and IDWT steps.
In this paper we propose a new algorithm for compression factor control when the JPEG standard is used. It can be applied, for example, when the memory available to store the image is fixed, as in a Digital Still Camera, or when the image is transmitted over a limited-bandwidth channel. JPEG is the de facto image compression standard used by most devices, thanks to its good trade-off between compression ratio and quality, but it does not guarantee a fixed stream size because of the run-length/variable-length encoding, so a compression factor control algorithm is required. The proposed algorithm achieves very good rate control faster than known algorithms and with lower power consumption, making it suitable for portable devices.
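The baseline such methods improve upon is a search over the JPEG quality factor until the stream fits the target size. Assuming the encoded size is monotone in the quality setting, a bisection does this in a handful of trial encodings; the paper's algorithm aims to converge with fewer. The `encode_size` callback is a placeholder for an actual JPEG encoder.

```python
def rate_control(encode_size, target, q_lo=1, q_hi=100):
    """Bisection over the JPEG quality factor: return the highest
    quality whose encoded stream size fits within target bytes.
    encode_size(q) must be monotonically increasing in q."""
    best = q_lo
    while q_lo <= q_hi:
        q = (q_lo + q_hi) // 2
        if encode_size(q) <= target:
            best, q_lo = q, q + 1     # fits: try higher quality
        else:
            q_hi = q - 1              # too big: lower quality
    return best
```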