Primary selection plays a fundamental role in display design. The primaries determine not only the gamut of colors the
system can reproduce, but also affect power consumption and other cost-related variables. Using more than the
traditional three primaries has been shown to be a versatile way of extending the color gamut, widening the viewing
angle of LCD screens, and reducing the power consumption of display systems. Adequate selection of primaries
therefore requires a trade-off among the benefits the system offers, its cost, and its complexity, among other design
parameters.
The purpose of this work is to present a methodology for the optimal design of three-primary and multiprimary
display systems. We consider the gamut in perceptual color spaces, which offer the advantage of an evaluation that
correlates with human perception; determine a design that maximizes gamut volume subject to a given power budget;
and analyze the benefits of increasing the number of primaries and their effect on other performance variables such
as gamut coverage.
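The gamut-volume objective described above can be sketched numerically: under a simple additive display model, every on/off combination of the primaries is a vertex of the display gamut, and the volume of the convex hull of those vertices in CIELAB is the quantity to maximize (the power constraint would enter as a constraint on the optimizer, omitted here). This is a minimal illustrative sketch, not the paper's method; the primary XYZ values and the added fourth primary are hypothetical.

```python
import numpy as np
from scipy.spatial import ConvexHull

def xyz_to_lab(xyz, white=np.array([95.047, 100.0, 108.883])):
    # CIE 1976 L*a*b* conversion (D65 white point)
    r = xyz / white
    f = np.where(r > (6/29)**3, np.cbrt(r), r / (3 * (6/29)**2) + 4/29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def gamut_volume(primaries_xyz):
    """CIELAB volume of the hull spanned by all on/off combinations
    of the primaries (simple additive display model)."""
    n = len(primaries_xyz)
    combos = np.array([[int(b) for b in np.binary_repr(i, n)]
                       for i in range(2**n)], dtype=float)
    lab = xyz_to_lab(combos @ primaries_xyz)
    return ConvexHull(lab).volume
```

Because the vertex set of an n-primary gamut contains the vertex set of any subset of its primaries, adding a primary can never shrink the hull volume, which matches the abstract's point about multiprimary designs extending the gamut.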
KEYWORDS: Optical filters, Sensors, Image sensors, Digital filtering, RGB color model, Image filtering, Diodes, Signal to noise ratio, Reconstruction algorithms, Modulation transfer functions
We propose a modification to the standard Bayer color filter array (CFA) and photodiode structure for CMOS image sensors, which we call 2PFCTM (two pixels, full color). The blue and red filters of the Bayer pattern are replaced by a magenta filter. Under each magenta filter are two stacked, pinned photodiodes; the diode nearest the surface absorbs mostly blue light, and the deeper diode absorbs mostly red light. The magenta filter absorbs green light, improving color separation between the blue and red diodes. We first present a frequency-based demosaicing method, which takes advantage of the new 2PFC geometry. Due to the spatial arrangement of red, green, and blue pixels, luminance and chrominance are very well separated in Fourier space, allowing for computationally inexpensive linear filtering. In comparison with state-of-the-art demosaicing methods for the Bayer CFA, we show that our sensor and demosaicing method outperform the others in terms of color aliasing, peak signal-to-noise ratio, and zipper effect. As demosaicing alone does not determine image quality, we also analyze the whole system performance in terms of resolution and noise.
KEYWORDS: Visualization, Visual process modeling, RGB color model, Bismuth, Modulation transfer functions, Algorithm development, Optical transfer functions, Data modeling, Image quality, Monochromatic aberrations
A Visual Model (VM) is used to aid in the design of an Ultra-high Definition (UHD) upscaling algorithm that renders
High Definition legacy content on a UHD display. Development of such algorithms is costly partly because of the time
spent subjectively evaluating structural variations and parameter adjustments. The VM provides an
image map that gives feedback to the design engineer about visual differences between algorithm variations, or about
whether a costly algorithm improvement will be visible at expected viewing distances. Such visual feedback reduces the
need for subjective evaluation.
This paper presents the results of experimentally verifying the VM against subjective tests of visibility improvement
versus viewing distance for three upscaling algorithms. Observers evaluated image differences for upscaled versions of
high-resolution stills and HD (Blu-ray) images, viewing a reference and test image, and controlled a linear blending
weight to determine the image discrimination threshold. The required thresholds vs. viewing distance varied as
expected, with larger amounts of the test image required at greater distances. We verify the VM by comparing
predicted discrimination thresholds with the subjective data. After verification, VM visible difference maps are
presented to illustrate the practical use of the VM during design.
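The blending procedure used in the subjective test can be sketched as follows: observers adjust a linear weight between the reference and test images until the difference is just detectable. Here the observer is replaced by a hypothetical `visible` predicate and the threshold is found by bisection; this is an illustrative stand-in for the experimental staircase, not the authors' protocol.

```python
import numpy as np

def blend(ref, test, w):
    """Linear blend of reference and test images; w=0 gives the reference,
    w=1 gives the full test image."""
    return (1.0 - w) * ref + w * test

def threshold_weight(ref, test, visible, lo=0.0, hi=1.0, iters=20):
    """Bisect for the smallest blend weight at which `visible` (a stand-in
    for the human observer or the VM) reports a detectable difference."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if visible(blend(ref, test, mid), ref):
            hi = mid
        else:
            lo = mid
    return hi
```

Plotting this threshold weight against viewing distance gives exactly the kind of curve the abstract compares against the VM's predictions.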
A modification to the standard Bayer CFA and photodiode structure for CMOS image sensors is proposed, which we call
2PFCTM, meaning "Two Pixel, Full Color". The blue and red filters of the Bayer pattern are replaced by magenta filters.
Under each magenta filter are two stacked, pinned photodiodes; the diode nearest the surface absorbs mostly blue light
and the deeper diode absorbs mostly red light. The magenta filter absorbs green light, improving color separation
between the resulting blue and red diodes. The dopant implant defining the bottom of the red-absorbing region can be
made the same as that of the green diodes, simplifying fabrication. Since the spatial resolution of the red, green, and
blue channels is identical, color aliasing is greatly reduced. Luminance resolution can also be improved; the thinner
diodes lead to higher well capacity and thus better dynamic range, and fabrication costs can be similar to or less than
standard Bayer CMOS imagers. Also, the geometry of the layout lends itself naturally to frequency-based demosaicing.
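The depth separation of blue and red in the stacked diodes follows from wavelength-dependent absorption in silicon (Beer–Lambert law): short wavelengths are absorbed near the surface, long wavelengths penetrate deeper. The sketch below uses illustrative absorption coefficients and an assumed junction depth of 0.6 µm, not values from the paper.

```python
import numpy as np

def absorbed_fraction(alpha_per_um, top_um, bottom_um):
    """Fraction of incident photons absorbed between two depths,
    from Beer-Lambert attenuation I(z) = I0 * exp(-alpha * z)."""
    return np.exp(-alpha_per_um * top_um) - np.exp(-alpha_per_um * bottom_um)

# Rough absorption coefficients in silicon (1/um) -- illustrative values only
ALPHA_BLUE_450NM = 2.5
ALPHA_RED_650NM = 0.25

JUNCTION_UM = 0.6   # assumed depth separating the two stacked diodes
BOTTOM_UM = 3.0     # assumed bottom of the deep diode's collection region

blue_shallow = absorbed_fraction(ALPHA_BLUE_450NM, 0.0, JUNCTION_UM)
blue_deep = absorbed_fraction(ALPHA_BLUE_450NM, JUNCTION_UM, BOTTOM_UM)
red_shallow = absorbed_fraction(ALPHA_RED_650NM, 0.0, JUNCTION_UM)
red_deep = absorbed_fraction(ALPHA_RED_650NM, JUNCTION_UM, BOTTOM_UM)
```

With these numbers most blue photons stop in the shallow diode and most collected red photons reach the deep diode, which is the physical basis for the "blue on top, red below" channel separation; the magenta filter removes the green light that would otherwise land in both.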
KEYWORDS: Visual process modeling, Visualization, RGB color model, Data modeling, Image scaling, Visibility, Signal detection, Image filtering, Image quality, Fluctuations and noise
While the use of visual models for assessing all aspects of the imaging chain is steadily increasing, one hindrance is the complexity of these models. This complexity has an impact in two ways: not only does it take longer to run a more complex visual model, making it difficult to place into optimization loops, but the model also takes longer to code, test, and calibrate. As a result, a number of shortcut models have been proposed and used. Some of the shortcuts involve more efficient frequency transforms, such as using a Cartesian-separable wavelet, while other types of shortcuts omit the steps required to simulate certain visual mechanisms, such as masking. A key example of the latter is spatial CIELAB, which models only the opponent-color CSFs and does not model the spatial frequency channels. Watson's recent analysis of the ModelFest data showed that while a multi-channel model gave the best performance, versions dispensing with the complex frequency bank and using only frequency attenuation did nearly as well. Of course, the ModelFest data addressed detection of a signal on a uniform field, so no masking properties were probed. At the other end of the complexity scale is the model by D'Zmura, which includes not only radial and orientation channels but also the interactions between the channels in both luminance and color. This talk will dissect several types of practical distortions that require more advanced visual models. One of these is the need for orientation channels to predict edge jaggies due to aliasing. Other visual mechanisms in search of an exigent application that we will explore include cross luminance-chrominance masking and facilitation, local contrast, and cross-channel masking.