Completely non-invasive digital cleaning of Fernando Amorsolo's 1948 oil on canvas, Malacañang by the River, is
implemented using a trained neural network. The digital cleaning process results in more vivid colors and higher
luminosity in the digitally-cleaned painting. We propose three methods for visualizing the color change that occurs in a
painting image after digital cleaning. For the first two visualizations, the color change between original and digitally-cleaned
image is computed as a vector difference in RGB space. For the first visualization, the vector difference is projected on a
neutral color and rendered for the whole image. The second visualization renders the color change as a translucent dirt layer
that can be superimposed on a white image or on the digitally-cleaned image. For the third visualization, we model the color
change as a dirt layer that acts as a filter on the painting image. The resulting color change and dirt layer visualizations are
consistent with the actual perceived color change and could offer valuable insights into a painting's color-change process
due to exposure.
We present the results of a two-year project aimed at capturing quantifiable color signatures of oil paintings of Fernando
Amorsolo, the Philippines' first National Artist. Color signatures are found by comparing CIE xy measurements of skin
color in portraits and of ground, sky, and foliage in landscapes. The results are compared with findings from visual
examination and art-historical data, as well as with works by Amorsolo's contemporaries and mentors.
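The CIE xy measurements underlying these color signatures can be computed from linear RGB values as a sketch. The sRGB-to-XYZ matrix below is the standard IEC 61966-2-1 (D65) matrix; the function assumes the input has already been linearized (gamma removed) and is not the authors' measurement pipeline.

```python
import numpy as np

# Linear-sRGB (D65) to CIE XYZ conversion matrix (IEC 61966-2-1)
SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                        [0.2126, 0.7152, 0.0722],
                        [0.0193, 0.1192, 0.9505]])

def srgb_to_xy(rgb):
    """CIE xy chromaticity coordinates of linear sRGB values.
    x = X/(X+Y+Z), y = Y/(X+Y+Z); intensity cancels in the ratios."""
    xyz = np.asarray(rgb, dtype=float) @ SRGB_TO_XYZ.T
    s = xyz.sum(axis=-1, keepdims=True)
    return xyz[..., :2] / s
```

As a sanity check, linear white (1, 1, 1) maps to approximately (0.3127, 0.3290), the D65 white point.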
When skin areas such as faces and hands are imaged in natural environments, their color appearance is frequently affected by variations in illumination intensity and chromaticity. In color-based skin detection and tracking, changing intensity is often handled either by using normalized, intensity-invariant color coordinates or by additionally modeling possible skin intensities. Chromaticity variations are rarely considered, although they are common in practice. In most approaches that do consider chromaticity, the experiments are done over a small or undefined variation range. For this reason, it is difficult to compare different approaches and assess their range of applicability. To improve the situation, we evaluate the performance of four state-of-the-art methods under drastic but practically common illumination changes. The effect of illumination chromaticity on skin is clearly defined, and based on it we draw conclusions about the performance of these approaches.
The appearance of skin colors in images depends, among other things, on the camera, the camera's calibration, and the illumination under which the image was taken. In this study, we investigate how skin colors appear in the chromaticity coordinates of different color spaces, such as HSV/HSL, normalized rgb, YES, and TSL. For this purpose, we took images of faces under 16 different illumination/camera-calibration conditions using simulated illuminants (Horizon, A, fluorescent TL84, and daylight) with different RGB cameras (1-CCD web cameras and a 3-CCD camera). To make each series of 16 images, the selected camera was first calibrated to one of the four light sources and an image was taken; the light source was then switched to each of the other three sources, and the person was imaged under each. This procedure was repeated for all four calibration light sources and for each camera. The skin regions were extracted from these images, and the skin data was converted to the different color spaces. We inspected how the chromaticities of different skin-color groups overlap in these color spaces, both across all 16 cases and in only those cases in which the selected camera was calibrated to the current illuminant. These investigations were also made between different cameras. In addition, we examined the overlap of all skin chromaticities from the different skin-color groups between cameras.
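One of the intensity-invariant coordinate systems mentioned above, normalized rgb, can be sketched in a few lines. This is an illustrative implementation, not the study's code: each pixel is divided by its channel sum, so scaling the illumination intensity cancels out and only the first two coordinates need to be kept (r + g + b = 1).

```python
import numpy as np

def rgb_to_normalized_rg(rgb, eps=1e-8):
    """Convert RGB pixels to normalized rg chromaticity coordinates.
    Intensity cancels in the ratios; b = 1 - r - g is redundant."""
    rgb = np.asarray(rgb, dtype=float)
    s = rgb.sum(axis=-1, keepdims=True) + eps  # eps guards against black pixels
    r = rgb[..., 0:1] / s
    g = rgb[..., 1:2] / s
    return np.concatenate([r, g], axis=-1)
```

Doubling every channel of a pixel leaves its (r, g) coordinates unchanged, which is why such coordinates suppress intensity variation but not chromaticity variation.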
Saturation here refers to electronic saturation of the camera sensors, which produces clipped colors, and not to the purity of color as in the hue-saturation-value scale. Saturated images are routinely discarded in image analysis, yet there are situations in which they cannot be avoided. This paper proposes two strategies to recover color information in facial images taken under non-ideal conditions so that they become useful for further processing. The first assumes that the skin is matte and that there are parts of the image that are not clipped. Ratios between the R, G, and B values of unclipped pixels belonging to the same parts of the image may then be used to compute lost channel values. The second approach uses color eigenfaces computed from our physics-based face database obtained under different illuminants and camera-calibration conditions. Skin color is recovered by transforming the first few eigenface coefficients towards ideal-condition values. Excellent color recovery for clipped images is achieved when these two techniques are combined and used on face images captured under a daylight illuminant with a camera white-balanced for incandescent light.
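The first strategy, ratio-based recovery, can be sketched as follows. This is a hypothetical simplification, not the paper's implementation: it assumes all pixels come from one coherent matte skin region, so the ratio between two channels is roughly constant across the region and can be estimated from the unclipped pixels.

```python
import numpy as np

def recover_clipped_channel(pixels, clipped_ch, ref_ch, clip_val=255):
    """Estimate a clipped channel from an unclipped reference channel.
    Assumes matte skin, so channel ratios are locally constant
    (hypothetical sketch). `pixels` is an (N, 3) array of RGB values
    from one skin region."""
    pixels = np.asarray(pixels, dtype=float)
    unclipped = pixels[:, clipped_ch] < clip_val
    # Channel ratio estimated robustly from the unclipped pixels
    ratio = np.median(pixels[unclipped, clipped_ch] / pixels[unclipped, ref_ch])
    recovered = pixels[:, clipped_ch].copy()
    # Replace clipped values with ratio-scaled reference-channel values
    recovered[~unclipped] = ratio * pixels[~unclipped, ref_ch]
    return recovered
```

In practice one would also need a skin-region segmentation and a guard for regions where every pixel is clipped, which is where the paper's second, eigenface-based strategy takes over.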