The problem in rendering spatial frequencies properly in digital imaging applications is to establish the relative contrast sensitivity of observers at suprathreshold contrast levels in typical viewing environments. In an experimental study, two methods of evaluating spatial contrast sensitivity were investigated using targets of graded tonal modulation, on which observers were asked to point to the perceived threshold locations. The results produced by these two methods differed markedly from those of the classical methods of vision science, showing a much lower sensitivity over a broader range of spatial frequencies. They may be regarded as complementary to CSF data derived from single-frequency Gabor stimuli and prove better suited to the needs of practical imaging applications.
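As an illustration of the kind of graded-modulation target the study describes, the Python sketch below builds a grating whose contrast falls off along the vertical axis, so the height at which the pattern fades marks a contrast threshold for that spatial frequency. The frequency and contrast ranges, and the function name graded_contrast_grating, are placeholders rather than details taken from the study.

```python
import numpy as np

# Illustrative sketch (not the study's stimulus code): a sinusoidal grating
# whose contrast decreases logarithmically from top to bottom, so an observer
# can point to the height at which the pattern disappears.
def graded_contrast_grating(width=512, height=512,
                            cycles_per_image=8, max_contrast=1.0):
    x = np.linspace(0.0, 1.0, width)
    y = np.linspace(0.0, 1.0, height)
    xx, yy = np.meshgrid(x, y)
    carrier = np.sin(2 * np.pi * cycles_per_image * xx)   # horizontal grating
    contrast = max_contrast * 10.0 ** (-3.0 * yy)         # log contrast ramp
    return 0.5 + 0.5 * contrast * carrier                 # mean luminance 0.5
```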
Multi-spectral imaging systems can be used to recover estimates of the spectral reflectance properties of surfaces in an image. This process can be aided by utilizing a priori knowledge of the reflectance spectra derived from linear models of surface reflectance. We use this recovery method in a simulated camera system and investigate the effect of varying sensor characteristics and illuminants on the accuracy of the system. We also investigate the effect of quantization noise and random sensor noise on the process. Unlike in other recovery methods, increasing the number of sensors in the system (and hence the number of basis functions used in the linear model) does not necessarily improve performance. We find that increasing the amount of noise increases reconstruction error, and that it does so to a greater extent for large numbers of sensors and large sensor bandwidths. The robustness of the process to noise is improved by using illuminants that have approximately equal power across all visible wavelengths of light.
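The recovery step described in this abstract can be sketched as a small least-squares problem. The Python below is an illustrative reconstruction under assumed array shapes; the names basis, sensors, illuminant and recover_reflectance are ours, not the paper's, and the toy spectra are random placeholders.

```python
import numpy as np

# Minimal sketch of linear-model reflectance recovery.
# Assumed shapes (N wavelength samples):
#   basis      : N x m reflectance basis functions
#   sensors    : N x k sensor spectral sensitivities
#   illuminant : N-vector of illuminant spectral power
def recover_reflectance(responses, basis, sensors, illuminant):
    """Estimate a reflectance spectrum from k sensor responses, assuming
    the camera model r = S^T diag(E) B w and solving for the weights w."""
    A = sensors.T @ (illuminant[:, None] * basis)          # k x m forward matrix
    w, *_ = np.linalg.lstsq(A, responses, rcond=None)      # least-squares weights
    return basis @ w

# Toy usage with random placeholder spectra and additive sensor noise
N, m, k = 31, 3, 3
rng = np.random.default_rng(0)
basis, sensors = rng.random((N, m)), rng.random((N, k))
illuminant = np.ones(N)                                    # equal-energy illuminant
true_reflectance = basis @ rng.random(m)
responses = sensors.T @ (illuminant * true_reflectance) + rng.normal(0, 1e-3, k)
print(np.linalg.norm(recover_reflectance(responses, basis, sensors, illuminant)
                     - true_reflectance))
```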
We suggest that color constancy and perceptual transparency might be explained by the same underlying mechanism. For color constancy, Foster and Nascimento (1994) found that cone-excitation ratios between surfaces seen under one illuminant and cone-excitation ratios between the same surfaces seen under a different illuminant were almost constant. In the case of perceptual transparency we also found that cone-excitation ratios between surfaces illuminated directly and cone-excitation ratios between the same surfaces seen through a transparent filter were almost invariant (Westland and Ripamonti, 2000). We compare the ability of the cone-excitation-ratio invariance model to predict perceptual transparency with an alternative model based on convergence in color space (D'Zmura et al., 1997). Psychophysical data are reported from experiments in which subjects were asked to select which of two stimuli represented a Mondrian image partially covered by a homogeneous transparent filter. One of the stimuli was generated from the convergence model and the other was a modified version of the first stimulus such that the cone-excitation ratios were perfectly invariant. Subjects consistently selected the invariant stimulus, confirming our hypothesis that the perception of transparency is predicted by the degree of deviation from an invariant ratio for the cone excitations.
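The invariance being tested can be stated compactly: for two surfaces a and b, the ratio of their cone excitations should be nearly the same whether the surfaces are viewed directly or through the filter. The Python sketch below computes that deviation, treating the filter as a single multiplicative transmittance on the light reaching the eye; the helper names and inputs are assumptions for the example, not code from the paper.

```python
import numpy as np

# Hedged sketch of the cone-excitation-ratio invariance test.
def cone_excitations(reflectance, illuminant, cone_fundamentals):
    """reflectance, illuminant: N-vectors; cone_fundamentals: N x 3.
    Returns the L, M, S excitations as a 3-vector."""
    return cone_fundamentals.T @ (illuminant * reflectance)

def ratio_deviation(refl_a, refl_b, illuminant, cones, filter_transmittance):
    """Relative change in cone-excitation ratios when both surfaces are
    viewed through the same transparent filter (smaller = more invariant)."""
    direct = (cone_excitations(refl_a, illuminant, cones) /
              cone_excitations(refl_b, illuminant, cones))
    filtered = (cone_excitations(refl_a, illuminant * filter_transmittance, cones) /
                cone_excitations(refl_b, illuminant * filter_transmittance, cones))
    return np.abs(filtered - direct) / direct
```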
Physical measurements of surfaces' color-causing properties are typically spectroradiometric, whereas color-difference comparisons are typically colorimetric ones performed in some 3-D color space. In general, this downprojection of high-dimensional spectral data into a 3-D color space incurs a loss of information, a loss that could be more critical in one color space than in another. One ecologically valid way of assessing the extent of this information loss is to determine how likely it is that a pair of surfaces with distinctly different spectral properties would be colorimetrically indistinguishable. We describe a virtual ideal color-difference detector which uses standard color-difference metrics but has access to the absolute spectral difference in the color signals of the surface pair. Only when this ideal detector classes a surface pair as "different" yet a standard color-difference detector classes them as "same" is the pair said to be metameric. This paradigm is applied to a dataset of hyperspectral natural images using a wide variety of 3-D color spaces. The results show that, around thresholds which approximate human performance, the overall metamerism rate is very low, yet most pixels in an image will be metameric with at least one other image pixel. Thus, downprojecting spectral data onto a 3-D color space may compromise color discriminability, but is unlikely to affect color categorization performance, a finding which is in accord with evolutionary theories regarding the function of human color vision.
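The decision rule of the virtual detector can be summarized in a few lines. The Python sketch below is an illustrative reimplementation, not the authors' code: the two thresholds, the use of a Euclidean distance in a 3-D space such as CIELAB, and the L1 spectral-difference measure are assumptions made for the example.

```python
import numpy as np

# Sketch of the metamer-counting decision rule: a pixel pair is flagged as
# metameric when the ideal spectral detector calls it "different" but the
# 3-D colorimetric detector calls it "same".
def is_metameric(signal_a, signal_b, lab_a, lab_b,
                 spectral_threshold, delta_e_threshold):
    """signal_*: sampled color signals (N-vectors); lab_*: coordinates in
    the chosen 3-D color space (CIELAB used here purely as an example)."""
    spectral_diff = np.sum(np.abs(signal_a - signal_b))    # ideal detector
    delta_e = np.linalg.norm(lab_a - lab_b)                # colorimetric detector
    return spectral_diff > spectral_threshold and delta_e < delta_e_threshold
```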
Traditionally, computer colorant formulation has been implemented using a theory of radiation transfer known as Kubelka-Munk (K-M) theory. Kubelka-Munk theory allows the prediction of spectral reflectance for a mixture of components (colorants) that have been characterised by absorption (K) and scattering (S) coefficients. More recently it has been suggested that Artificial Neural Networks (ANNs) may be able to provide alternative mappings between colorant concentrations and spectral reflectances and, more generally, are able to provide transforms between color spaces. This study investigates the ability of ANNs to predict spectral reflectance from colorant concentrations using a set of data measured from known mixtures of lithographic printing inks. The issue of over-training is addressed and we show that the number of hidden units in the network must be carefully selected. We show that it is difficult to train a conventional neural network to a level that matches the performance that can be achieved using K-M theory. However, a hybrid model is proposed that may out-perform the K-M model.
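For reference, the single-constant form of the K-M prediction that the ANNs are compared against can be sketched as follows. This is an illustrative Python implementation of the standard K-M equations, not the study's formulation software; the array names and the inclusion of a substrate term are assumptions for the example.

```python
import numpy as np

# Single-constant Kubelka-Munk colorant mixing: the K/S of a mixture is the
# substrate K/S plus the concentration-weighted sum of the unit K/S values.
def k_over_s(reflectance):
    """Kubelka-Munk function: K/S = (1 - R)^2 / (2R)."""
    return (1.0 - reflectance) ** 2 / (2.0 * reflectance)

def reflectance_from_ks(ks):
    """Invert the K-M function: R = 1 + K/S - sqrt((K/S)^2 + 2*K/S)."""
    return 1.0 + ks - np.sqrt(ks ** 2 + 2.0 * ks)

def predict_mixture(concentrations, unit_ks, substrate_reflectance):
    """concentrations: length-n vector; unit_ks: n x N per-wavelength unit
    K/S values; substrate_reflectance: N-vector for the unprinted substrate."""
    ks_mix = k_over_s(substrate_reflectance) + concentrations @ unit_ks
    return reflectance_from_ks(ks_mix)
```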
We review the conditions that are necessary for the perception of transparency and describe the spatiochromatic constraints for achromatic and chromatic transparent displays. These constraints can be represented by the convergence model and are supported by psychophysical data. We present an alternative representation of the constraints necessary for transparency perception that is based on an analogy with a model of colour constancy and the invariance of cone-excitation ratios. Recent psychophysical experiments are described which suggest that displays in which the cone-excitation ratios are invariant produce a stronger impression of transparency than displays in which the cone excitations are convergent. We argue that the spatial relations in an image are preserved when a Mondrian-like surface is partially covered by a transparent filter, and thereby show an intriguing link between transparency perception and colour constancy. Finally, we describe experiments relating the strength of the transparency percept to the number of unique patches in the image display. We find that the greater the number of surfaces in the display that are partially covered by a transparent filter, the stronger the impression of transparency.
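The two candidate filter transforms discussed here differ in a way a short sketch makes concrete: a convergence transform pulls every cone excitation toward a common point in colour space, whereas a multiplicative (ratio-invariant) transform preserves every between-surface ratio exactly. The parameters below are illustrative values chosen for the example, not values fitted in the experiments.

```python
import numpy as np

# Cone excitations are treated as 3-vectors per Mondrian patch.
def convergence_filter(cone_excitations, alpha, target):
    """Convergence model: each excitation moves toward a common target."""
    return alpha * cone_excitations + (1.0 - alpha) * target

def invariant_filter(cone_excitations, scale):
    """Ratio-invariant model: each cone class is scaled multiplicatively,
    so ratios between any two surfaces are exactly preserved."""
    return cone_excitations * scale

surfaces = np.random.default_rng(1).random((5, 3))          # 5 Mondrian patches
converged = convergence_filter(surfaces, 0.6, np.array([0.2, 0.2, 0.2]))
invariant = invariant_filter(surfaces, np.array([0.6, 0.5, 0.7]))
# Between-patch ratios are unchanged only for the invariant transform
print(invariant[0] / invariant[1], surfaces[0] / surfaces[1])
```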