In this article we study the use of the cubic-phase pupil function for extending the depth of field of task-based imaging systems. In task-based design problems the resolution of interest varies with object distance because the magnification changes, which introduces a new challenge in the design process. We discuss how the optimal design criterion for task-based imaging systems differs fundamentally from that of visual imaging systems and formulate the optimization problem. We discuss how the cubic-phase pupil function changes the spectral signal-to-noise ratio (SNR) and modulation transfer function (MTF) over the depth-of-field range in order to fulfill our design requirements. We introduce an approximation to the problem of maximizing SNR and show that it is amenable to analytic treatment. We derive an explicit expression for the optimized cubic-phase pupil function parameters for a general problem of this class, thus establishing an upper bound for the extension of the depth of field using cubic-phase Wavefront Coding.
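To make the role of the cubic-phase pupil function concrete, the sketch below (a minimal illustration, not the paper's optimization code) computes the MTF of a square clear aperture with a cubic phase term under defocus; the cubic strength `alpha`, defocus parameter `psi`, and grid size are illustrative assumptions.

```python
# Minimal sketch: MTF of a cubic-phase pupil under defocus (illustrative values only).
import numpy as np

def cubic_phase_mtf(alpha, psi, n=256):
    """MTF of a square clear aperture with pupil phase alpha*(x^3+y^3) + psi*(x^2+y^2)."""
    x = np.linspace(-1.0, 1.0, n)
    X, Y = np.meshgrid(x, x)
    pupil = np.exp(1j * (alpha * (X**3 + Y**3) + psi * (X**2 + Y**2)))
    psf = np.abs(np.fft.fft2(pupil, s=(2 * n, 2 * n)))**2      # incoherent PSF (unshifted)
    otf = np.fft.fft2(psf)
    mtf = np.abs(otf) / np.abs(otf[0, 0])                       # normalize to DC
    return np.fft.fftshift(mtf)

# With alpha = 0 the MTF develops nulls as |psi| grows; a strong cubic term
# (e.g. alpha of order 20*pi) lowers the MTF but keeps it nearly defocus-invariant.
mtf_in_focus  = cubic_phase_mtf(alpha=0.0, psi=0.0)
mtf_defocused = cubic_phase_mtf(alpha=0.0, psi=10.0)
mtf_coded     = cubic_phase_mtf(alpha=20 * np.pi, psi=10.0)
```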
The human iris is an attractive biometric due to its high discrimination capability. However, capturing good quality images of human irises is challenging and requires considerable user cooperation. Iris capture systems with large depth of field, large field of view and excellent capacity for light capture can help considerably in such scenarios. In this paper we apply Wavefront Coding to increase the depth of field without increasing the optical F/# of an iris recognition system when the subject is at least 2 meters away. This computational imaging system is designed and optimized using the spectral-SNR as the fundamental metric. We present simulation and experimental results that show the benefits of this technology for biometric identification.
Iris recognition imaging is attracting considerable interest as a viable alternative for personal identification and verification in many defense and security applications. However, current iris recognition systems suffer from limited depth of field, which makes them more difficult for an untrained user to operate. Traditionally, the depth of field is increased by reducing the imaging system aperture, which adversely impacts the light-capturing power and thus the system signal-to-noise ratio (SNR). In this paper we discuss a computational imaging system, referred to as Wavefront Coded(R) imaging, for increasing the depth of field without sacrificing the SNR or the resolution of the imaging system. This system employs a specially designed Wavefront Coded lens customized for iris recognition. We present experimental results that show the benefits of this technology for biometric identification.
Computational imaging systems are modern systems that consist of generalized aspheric optics and image processing capability. These systems can be optimized to deliver performance well beyond that of systems consisting solely of traditional optics. Computational imaging technology can be used to advantage in iris recognition applications. A major difficulty in current iris recognition systems is a very shallow depth-of-field that limits system usability and increases system complexity. We first review some current iris recognition algorithms, and then describe computational imaging approaches to iris recognition using cubic-phase wavefront encoding. These new approaches can greatly increase the depth-of-field over that possible with traditional optics, while maintaining sufficient recognition accuracy. In these approaches the combination of optics, detectors, and image processing all contribute to the iris recognition accuracy and efficiency. We describe different optimization methods for designing the optics and the image processing algorithms, and provide laboratory and simulation results from applying these systems, including restoration of the intermediate phase-encoded images using both a direct Wiener filter and iterative conjugate gradient methods.
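As a concrete illustration of the direct Wiener-filter decoding step mentioned above, here is a minimal frequency-domain sketch; it assumes the system PSF is known and uses a constant noise-to-signal ratio `nsr` as a placeholder, rather than the filters actually designed for the systems in the paper.

```python
# Minimal sketch of Wiener-filter restoration of a wavefront-coded (intermediate) image.
import numpy as np

def wiener_restore(coded_image, psf, nsr=1e-2):
    """Decode a phase-encoded image with a frequency-domain Wiener filter (constant NSR model)."""
    # psf is assumed registered so its peak sits at the array origin
    # (apply np.fft.ifftshift to a centered PSF before calling).
    H = np.fft.fft2(psf, s=coded_image.shape)      # system transfer function
    G = np.fft.fft2(coded_image)
    W = np.conj(H) / (np.abs(H)**2 + nsr)          # Wiener filter
    return np.real(np.fft.ifft2(W * G))
```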
Automated iris recognition is a promising method for noninvasive verification of identity. Although it is noninvasive, the procedure requires considerable cooperation from the user. In typical acquisition systems, the subject must carefully position the head laterally to make sure that the captured iris falls within the field-of-view of the digital image acquisition system. Furthermore, the need for sufficient energy at the plane of the detector calls for a relatively fast optical system which results in a narrow depth-of-field. This latter issue requires the user to move the head back and forth until the iris is in good focus. In this paper, we address the depth-of-field problem by studying the effectiveness of specially designed aspheres that extend the depth-of-field of the image capture system. In this initial study, we concentrate on the cubic phase mask originally proposed by Dowski and Cathey. Laboratory experiments are used to produce representative captured irises with and without cubic asphere masks modifying the imaging system. The iris images are then presented to a well-known iris recognition algorithm proposed by Daugman. In some cases we present unrestored imagery and in other cases we attempt to restore the moderate blur introduced by the asphere. Our initial results show that the use of such aspheres does indeed relax the depth-of-field requirements even without restoration of the blurred images. Furthermore, we find that restorations that produce visually pleasing iris images often actually degrade the performance of the algorithm. Different restoration parameters are examined to determine their usefulness in relation to the recognition algorithm.
Imaging systems using aspheric imaging lenses with complementary computation can deliver performance unobtainable in conventional imaging systems. These new imaging systems, termed Wavefront coded imaging systems, use specialized optics to capture a coded image of the scene. Decoding the intermediate image provides the "human-usable" image expected of an imaging system. Computation for the decoding step can be made completely transparent to the user with today's technology. Real-time Wavefront coded systems are feasible and cost-effective. This "computational imaging" technology can be adapted to solve a wide range of imaging problems. Solutions include the ability to provide focus-free imaging, to increase the field of view, to increase the depth of read, to correct for aberrations (even in single lens systems), and to account for assembly and temperature induced misalignment. Wavefront coded imaging has been demonstrated across a wide range of applications, including microscopy, miniature cameras, machine vision systems, infrared imaging systems and telescopes.
Rare event applications are characterized by the event-of-interest being hidden in a large volume of routine data. The key to success in such situations is the development of a cascade of data elimination strategies, such that each stage enriches the probability of finding the event amidst the data retained for further processing. Automated detection of aberrant cells in cervical smear slides is an example of a rare event problem. Each slide can amount to 2.5 gigabytes of raw data and only 1 in 20 slides is abnormal. In this paper we examine the use of template matching, artificial neural networks, integrated optical density and morphological processing as algorithms for the first data elimination stage. Based on the experience gained, we develop a successful strategy which improves the overall event probability in the retained data from 0.01 initially to 0.87 after the second stage of processing.
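As an example of one of the first-stage detectors named above, the sketch below performs template matching by normalized cross-correlation; the cell template and detection threshold are placeholders, not values from the paper.

```python
# Minimal sketch: first-stage candidate detection by normalized cross-correlation.
import numpy as np
from skimage.feature import match_template

def template_detect(image, cell_template, threshold=0.8):
    """Flag pixels whose normalized cross-correlation with the template exceeds the threshold."""
    ncc = match_template(image, cell_template, pad_input=True)  # NCC map, same size as image
    return ncc > threshold                                      # candidate regions kept for stage two
```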
An optoelectronic detection system using two electrically addressed spatial light modulators in an optical correlator has been constructed to find regions of interest in cervical smear slides using the hit/miss transform algorithm. The purpose of the detector is to locate abnormal cells in the cervical smear and mark the region of interest for further classification by the second stage of the overall system. In addition, an image database of characteristic monolayer cervical smear images has been constructed for testing the system. The optoelectronic processing of cytological specimens can in theory provide both an improvement in the speed of scanning a slide for a region of interest and a decrease in current manual screening errors. Results of the optoelectronic correlator and corresponding computer simulations will be discussed, as well as further means of improving the system. Conclusions about further steps in the implementation of a complete medical diagnostic system, including classification of regions of interest and improvements for automation, will also be addressed.
In this paper we consider the formation of morphological templates using adaptive resonance theory. We examine the role of object variability and noise on the clustering of different-sized objects as a function of the vigilance parameter. We demonstrate that fuzzy adaptive resonance theory is robust in the presence of noise, but that for a poor choice of vigilance there is a proliferation of prototypical categories. We apply the technique to the detection of abnormal cells in Pap smears.
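A minimal fuzzy-ART sketch, included to illustrate the vigilance-driven category proliferation discussed above; the values of the vigilance `rho`, choice parameter `alpha`, and learning rate `beta` are illustrative, and this is not the paper's implementation.

```python
# Minimal sketch of fuzzy ART clustering with complement coding and a vigilance test.
import numpy as np

def fuzzy_art(samples, rho=0.75, alpha=1e-3, beta=1.0):
    """Cluster feature vectors scaled to [0, 1]; returns the learned category weight vectors."""
    categories = []
    for a in samples:
        I = np.concatenate([a, 1.0 - a])                         # complement coding
        # rank existing categories by the choice function |I ^ w| / (alpha + |w|)
        order = sorted(range(len(categories)),
                       key=lambda j: -np.minimum(I, categories[j]).sum()
                                     / (alpha + categories[j].sum()))
        for j in order:
            if np.minimum(I, categories[j]).sum() / I.sum() >= rho:   # vigilance test
                categories[j] = beta * np.minimum(I, categories[j]) + (1 - beta) * categories[j]
                break
        else:                                                    # no resonance: create a new category
            categories.append(I.copy())
    return categories

# Stricter vigilance (larger rho) or noisier inputs yield many more categories
# for the same data, which is the proliferation effect examined above.
```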
An optoelectronic system has been designed to pre-screen Pap-smear slides and detect suspicious cells using the hit/miss transform. Computer simulation of the algorithm, tested on 184 Pap-smear images, detected 95% of the suspicious regions as suspect while tagging just 5% of the normal regions as suspect. An optoelectronic implementation of the hit/miss transform using a 4f VanderLugt correlator architecture is proposed and demonstrated with experimental results.
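A minimal sketch of the binary hit/miss transform used as the detection algorithm above; the foreground and background structuring elements are illustrative stand-ins for the cell templates used in the paper.

```python
# Minimal sketch: binary hit/miss transform for region-of-interest detection.
import numpy as np
from scipy.ndimage import binary_hit_or_miss

def hit_miss_detect(binary_image, hit_se, miss_se):
    """Mark pixels whose neighborhood matches hit_se in the foreground and miss_se in the background."""
    return binary_hit_or_miss(binary_image, structure1=hit_se, structure2=miss_se)

# Example: a 3x3 foreground "blob" template with a surrounding background ring.
hit_se  = np.ones((3, 3), dtype=bool)
miss_se = np.pad(np.zeros((3, 3), dtype=bool), 1, constant_values=True)
```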
Automation of the Pap-smear cervical screening method is highly desirable as it relieves tedium for the human operators, reduces cost, and should increase accuracy and provide repeatability. We present here the design for a high-throughput optoelectronic system which forms the first stage of a two-stage system to automate Pap-smear screening. We use a mathematical morphological technique called the hit-or-miss transform to identify the suspicious areas on a Pap-smear slide. This algorithm is implemented using a VanderLugt architecture and a time-sequential ANDing smart pixel array.
Object structure is one of the most important features for many imaging applications. In many space applications, recording the spatial structure adequately is a challenge due to the wide range of illumination conditions encountered. Moreover, communication constraints often limit the amount of data that can be transmitted. Motivated by these concerns, we have developed a coding scheme which is robust to variations in illumination conditions, preserves high structural fidelity, and provides high compression ratios. The high correlation between the original and decoded images demonstrates the potential of this coding scheme for machine vision applications.
KEYWORDS: Optical transfer functions, Digital imaging, Signal to noise ratio, Imaging systems, Point spread functions, Super resolution, Cameras, Sensors, Imaging devices, Image acquisition
Despite the popularity of digital imaging devices (e.g., CCD array cameras), the problem of accurately characterizing the spatial frequency response of such systems has been largely neglected in the literature. This paper describes a simple method for accurately estimating the optical transfer function of digital image acquisition devices. The method is based on the traditional knife-edge technique but explicitly deals with fundamental sampled-system considerations: insufficient and anisotropic sampling. Results for both simulated and real imaging systems demonstrate the accuracy of the method.
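In the spirit of the knife-edge method summarized above, the following simplified sketch builds an oversampled edge-spread function from a slightly slanted edge, differentiates it, and Fourier transforms it to estimate the MTF; the binning factor and the assumption of a known small edge tilt are simplifications, not the paper's exact procedure.

```python
# Simplified knife-edge (slanted-edge) MTF estimate; illustrative, not the paper's algorithm.
import numpy as np

def knife_edge_mtf(edge_image, edge_angle_deg, oversample=4):
    """Estimate the 1-D MTF across a near-vertical knife edge with a known small tilt."""
    rows, cols = edge_image.shape
    slope = np.tan(np.radians(edge_angle_deg))
    # signed distance of each pixel from the tilted edge, in pixel-pitch units
    c = np.arange(cols) - cols / 2.0
    r = np.arange(rows)[:, None]
    dist = (c[None, :] - slope * r).ravel()
    # bin the projected samples into an oversampled edge-spread function (ESF)
    bins = np.round(dist * oversample).astype(int)
    bins -= bins.min()
    counts = np.maximum(np.bincount(bins), 1)
    esf = np.bincount(bins, weights=edge_image.ravel()) / counts
    lsf = np.gradient(esf) * np.hanning(esf.size)     # line-spread function, tapered
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]                               # normalized; frequencies in oversampled cycles/pixel
```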
The visual system is most sensitive to structure. Moreover, many other attributes of the scene are preserved in the retinal response to edges. In this paper we present a new coding process that is based on models suggested for retinal processing in human vision. This process extracts and codes only the features that are preserved in the response of these retinal-model filters to an edge (edge primitives). The decoded image, obtained by recovering the intensity levels between the outlined boundaries using only the edge primitives, attains high structural fidelity. We demonstrate that a wide variety of images can be represented by their edge primitives with high compression ratios, typically two to three orders of magnitude, depending on the target. This method is particularly advantageous when high structural fidelity must be combined with high data compression.
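As a toy illustration (not the paper's retinal-model codec) of the decoding idea above, the sketch below recovers intensities between outlined boundaries from values stored only at edge pixels, using iterative diffusion; the edge mask, stored values, and iteration count are all illustrative.

```python
# Toy sketch: recover the interior of an image from intensities known only at edge pixels.
import numpy as np

def reconstruct_from_edges(edge_mask, edge_values, n_iter=500):
    """Fill unknown pixels by diffusing the known edge intensities (Jacobi steps for Laplace's equation)."""
    img = np.where(edge_mask, edge_values, edge_values[edge_mask].mean())
    for _ in range(n_iter):
        smooth = 0.25 * (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
                         np.roll(img, 1, 1) + np.roll(img, -1, 1))
        img = np.where(edge_mask, edge_values, smooth)   # keep edge samples fixed
    return img
```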