In this work, we study the possibility of indexing color iris images. In the proposed approach, a clustering scheme is applied to a training set of iris images to determine cluster centroids that capture the variations in chromaticity of the iris texture. An input iris image is indexed by comparing its pixels against these centroids and determining the dominant clusters, i.e., those clusters to which the majority of its pixels are assigned. The cluster indices serve as an index code for the input iris image and are used during the search process, when an input probe has to be compared with a gallery of irides. Experiments using multiple color spaces convey the efficacy of the scheme on good-quality images, with hit rates close to 100% being achieved at low penetration rates.
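As a rough illustration of the indexing idea described above, the sketch below clusters training pixel chromaticities with plain k-means and derives an index code from an image's dominant clusters. The function names, the choice of k-means, and the two-dimensional chromaticity representation are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    # Plain k-means: the centroids capture the chromaticity variation
    # observed across the training set of iris images.
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(points[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = points[labels == j].mean(axis=0)
    return centroids

def index_code(pixels, centroids, top=3):
    # Assign each pixel of the input image to its nearest centroid and
    # keep the indices of the dominant clusters as the index code.
    d = np.linalg.norm(pixels[:, None] - centroids[None], axis=2)
    counts = np.bincount(d.argmin(axis=1), minlength=len(centroids))
    return tuple(sorted(np.argsort(counts)[::-1][:top]))
```

At search time, a probe would only be compared against gallery entries whose index codes overlap, which is what keeps the penetration rate low.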
We consider the problem of generating a biometric image from two different traits. Specifically, we focus on
generating an IrisPrint that inherits its structure from a fingerprint image and an iris image. To facilitate this,
the continuous phase of the fingerprint image, characterizing its ridge flow, is first extracted. Next, a scheme is
developed to extract “minutiae” from an iris image. Finally, an IrisPrint that resembles a fingerprint is created
by mixing the ridge flow of the fingerprint with the iris minutiae. Preliminary experiments suggest that the new
biometric image (i.e., IrisPrint) (a) can potentially be used for authentication by an existing fingerprint matcher,
and (b) can potentially conceal and preserve the privacy of the original fingerprint and iris images.
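A minimal sketch of what the final mixing step could look like, under one common phase-based fingerprint model (a print viewed as the cosine of a continuous phase plus spiral phase terms at minutiae). The function and its parameters are hypothetical; the paper's actual synthesis pipeline is more involved.

```python
import numpy as np

def irisprint(cont_phase, minutiae):
    """Render a print-like image by adding a phase spiral for each iris
    'minutia' to the fingerprint's continuous (ridge-flow) phase and
    taking the cosine.  Illustrative sketch only."""
    h, w = cont_phase.shape
    ys, xs = np.mgrid[0:h, 0:w]
    spiral = np.zeros_like(cont_phase)
    for (mx, my, pol) in minutiae:   # pol = +/-1 (e.g., ending vs. bifurcation)
        spiral += pol * np.arctan2(ys - my, xs - mx)
    return np.cos(cont_phase + spiral)
```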
Researchers in face recognition have been using Gabor filters for image representation due to their robustness to complex variations in expression and illumination. Numerous methods have been proposed to model the filter responses using either local or global descriptors. In this work, we propose a novel yet simple approach for encoding gradient information on Gabor-transformed images to represent the face, which can be used for identity, gender and ethnicity assessment. Extensive experiments on the standard face benchmark FERET (Visible versus Visible), as well as the heterogeneous face dataset HFB (Near-infrared versus Visible), suggest that the matching performance of the proposed descriptor is comparable to that of state-of-the-art descriptor-based approaches in face recognition applications. Furthermore, the same feature set is used in the framework of a Collaborative Representation Classification (CRC) scheme for deducing soft biometric traits such as gender and ethnicity from face images in the AR, Morph and CAS-PEAL databases.
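The general recipe of "gradient information on Gabor-transformed images" can be sketched as below: filter the image with a small Gabor bank, then histogram the gradient orientations of each response map. All names, bank sizes, and the histogram encoding are assumptions for illustration, not the paper's exact descriptor.

```python
import numpy as np

def gabor_kernel(ksize=15, theta=0.0, lam=6.0, sigma=3.0):
    # Real part of a Gabor filter at orientation theta.
    r = np.arange(ksize) - ksize // 2
    x, y = np.meshgrid(r, r)
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def gabor_gradient_descriptor(img, n_orient=4, n_bins=8):
    """Hypothetical sketch: convolve with a Gabor bank, then build a
    magnitude-weighted gradient-orientation histogram per response map
    and concatenate the histograms into one face descriptor."""
    desc = []
    for k in range(n_orient):
        g = gabor_kernel(theta=k * np.pi / n_orient)
        # FFT-based circular convolution keeps the sketch dependency-free.
        resp = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(g, img.shape)))
        gy, gx = np.gradient(resp)
        hist, _ = np.histogram(np.arctan2(gy, gx), bins=n_bins,
                               range=(-np.pi, np.pi), weights=np.hypot(gx, gy))
        desc.append(hist / (hist.sum() + 1e-12))
    return np.concatenate(desc)
```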
Recent research in iris recognition has established the impact of non-cosmetic soft contact lenses on the recognition performance of iris matchers. Researchers at Notre Dame demonstrated an increase in the False Reject Rate (FRR) when an iris without a contact lens was compared against the same iris with a transparent soft contact lens. Detecting the presence of a contact lens in ocular images can, therefore, be beneficial to iris recognition systems. This study proposes a method to automatically detect the presence of non-cosmetic soft contact lenses in ocular images of the eye acquired in the Near Infrared (NIR) spectrum. While cosmetic lenses are more easily discernible, the problem of detecting non-cosmetic lenses is substantially more difficult and poses a significant challenge to iris researchers. In this work, the lens boundary is detected by traversing a small annular region in the vicinity of the outer boundary of the segmented iris and locating candidate points corresponding to the lens perimeter. Candidate points are identified by examining intensity profiles in the radial direction within the annular region. The proposed detection method is evaluated on two databases: ICE 2005 and MBGC Iris. In the ICE 2005 database, a correct lens detection rate of 72% is achieved with an overall classification accuracy of 76%. In the MBGC Iris database, a correct lens detection rate of 70% is obtained with an overall classification accuracy of 66.8%. To the best of our knowledge, this is one of the earliest works attempting to detect the presence of non-cosmetic soft contact lenses in NIR ocular images. The results of this research suggest the possibility of detecting soft contact lenses in ocular images but highlight the need for further research in this area.
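The candidate-point step can be sketched as follows: sample the annular region just outside the segmented iris boundary into a (angles x radii) array and, along each radial intensity profile, keep the radius of the strongest edge. Thresholds and the edge criterion are assumptions; the paper's selection of candidate points is not reproduced here.

```python
import numpy as np

def lens_candidates(annulus, grad_thresh=10.0):
    """annulus: 2-D array (n_angles, n_radii) of NIR intensities sampled
    in a thin ring near the outer iris boundary.  Returns, per angle, the
    radial index of the strongest intensity edge (a candidate point on
    the lens perimeter), or -1 when no sufficiently strong edge exists."""
    grad = np.diff(annulus.astype(float), axis=1)   # radial intensity gradient
    idx = np.abs(grad).argmax(axis=1)
    strong = np.abs(grad[np.arange(len(idx)), idx]) >= grad_thresh
    return np.where(strong, idx, -1)
```

Candidate points consistent across many angles would then be fit with a circular (or elliptical) lens boundary.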
In the biometrics community, challenge datasets are often released to determine the robustness of state-of-the-art algorithms to conditions that can confound recognition accuracy. In the context of automated human gait recognition, evaluation has predominantly been conducted on video data acquired in the active visible spectral band, although recent literature has explored recognition in the passive thermal band. The advent of sophisticated sensors has piqued interest in performing gait recognition in other spectral bands such as short-wave infrared (SWIR), due to their use in military-based tactical applications and the possibility of operating in nighttime environments. Further, in many operational scenarios, the environmental variables are not controlled, thereby posing several challenges to traditional recognition schemes. In this work, we discuss the possibility of performing gait recognition in the SWIR spectrum by first assembling a dataset, referred to as the WVU Outdoor SWIR Gait (WOSG) Dataset, and then evaluating the performance of three gait recognition algorithms on it. The dataset consists of 155 subjects and represents gait information acquired under multiple walking paths in an uncontrolled, outdoor environment. Detailed experimental analysis suggests the benefits of distributing this new challenging dataset to the broader research community. In particular, the experiments highlighted: (a) the importance of SWIR imagery in acquiring data covertly for surveillance applications; (b) the difficulty in extracting human silhouettes in low-contrast SWIR imagery; (c) the impact of silhouette quality on overall recognition accuracy; (d) the possibility of matching gait sequences pertaining to different walking trajectories; and (e) the need for developing sophisticated gait recognition algorithms to handle data acquired in unconstrained environments.
A novel two-stage protection scheme for automatic iris recognition systems against masquerade attacks carried out with synthetically reconstructed iris images is presented. The method uses different characteristics of real iris images to differentiate them from the synthetic ones, thereby addressing important security flaws detected in state-of-the-art commercial systems. Experiments are carried out on the publicly available Biosecure Database and demonstrate the efficacy of the proposed security enhancing approach.
The problem of face identification in the Mid-Wave InfraRed (MWIR) spectrum is studied in order to understand the performance of intra-spectral (MWIR to MWIR) and cross-spectral (visible to MWIR) matching. The contributions of this work are two-fold. First, a database of 50 subjects is assembled and used to illustrate the challenges associated with the problem. Second, a set of experiments is performed in order to demonstrate the possibility of MWIR intra-spectral and cross-spectral matching. Experiments show that images captured in the MWIR band can be efficiently matched to MWIR images using existing techniques (originally not designed to address such a problem). These results are comparable to the baseline results, i.e., when comparing visible to visible face images. Experiments also show that cross-spectral matching (the heterogeneous problem, where gallery and probe sets have face images acquired in different spectral bands) is a very challenging problem. In order to perform cross-spectral matching, we use multiple texture descriptors and demonstrate that fusing these descriptors improves recognition performance. Experiments on a small database suggest that the problem of cross-spectral matching requires further investigation.
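Fusing the scores produced by multiple texture descriptors is often done with the simple sum rule after score normalization; the sketch below assumes that recipe for illustration, since the abstract does not specify the paper's exact fusion rule.

```python
import numpy as np

def fuse_descriptor_scores(score_sets, weights=None):
    """Min-max normalize the match scores produced by each texture
    descriptor, then combine them with a weighted sum (the sum rule).
    An assumed, commonly used recipe - not the paper's exact method."""
    n = len(score_sets)
    if weights is None:
        weights = [1.0 / n] * n
    fused = np.zeros(len(score_sets[0]))
    for s, w in zip(score_sets, weights):
        s = np.asarray(s, dtype=float)
        fused += w * (s - s.min()) / (s.max() - s.min() + 1e-12)
    return fused
```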
Although the human iris pattern is widely accepted as a stable biometric feature, recent research has found some evidence of an aging effect in iris recognition systems. In order to investigate changes in iris recognition performance due to the elapsed time between probe and gallery iris images, we examine the effect of elapsed time on iris recognition using 7,628 iris images from 46 subjects, with an average of ten visits, acquired over two years from a legacy database at Clarkson University. Taking into consideration the impact of quality factors such as local contrast, illumination, blur and noise on iris recognition performance, regression models are built with and without quality metrics to evaluate the degradation of iris recognition performance as a function of time lapse. Our experimental results demonstrate a decrease in iris recognition performance with increased elapsed time for two iris recognition systems (the modified Masek algorithm and the commercial VeriEye SDK). The results also reveal the significance of quality factors in explaining the variability in match scores. The regression analysis quantifies the decrease in match scores with increased elapsed time, suggesting the possibility of predicting iris recognition performance by learning the impact of time lapse factors.
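The regression setup can be sketched as an ordinary least-squares model of match score against elapsed time with quality covariates. The variable names and the single combined quality metric are illustrative assumptions; the paper fits models with and without several quality metrics.

```python
import numpy as np

def fit_score_model(elapsed_days, quality, scores):
    """Least-squares fit of  score ~ b0 + b1*elapsed + b2*quality.
    A negative b1 would indicate score degradation over elapsed time."""
    X = np.column_stack([np.ones_like(elapsed_days), elapsed_days, quality])
    beta, *_ = np.linalg.lstsq(X, scores, rcond=None)
    return beta
```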
Ocular recognition is a new area of biometric investigation targeted at overcoming the limitations of iris recognition
performance in the presence of non-ideal data. There are several advantages to extending the region beyond
the iris, yet there are also key issues that must be addressed, such as the size of the ocular region, the factors affecting
performance, and appropriate corpora to study these factors in isolation. In this paper, we explore and identify
some of these issues with the goal of better defining parameters for ocular recognition. An empirical study is
performed where iris recognition methods are contrasted with texture and point operators on existing iris and
face datasets. The experimental results show a dramatic recognition performance gain when additional features
are considered in the presence of poor quality iris data, offering strong evidence for extending interest beyond
the iris. The experiments also highlight the need for the direct collection of additional ocular imagery.
The need for an automated surveillance system is pronounced at night when the capability of the human eye
to detect anomalies is reduced. While there have been significant efforts in the classification of individuals
using human metrology and gait, the majority of research assumes a day-time environment. The aim of this
study is to move beyond traditional image acquisition modalities and explore the issues of object detection and
human identification at night. To address these issues, a spatiotemporal gait curve that captures the shape
dynamics of a moving human silhouette is employed. Initially proposed by Wang et al., this representation
of the gait is expanded to incorporate modules for individual classification, backpack detection, and silhouette
restoration. Evaluation of these algorithms is conducted on the CASIA Night Gait Database, which includes 10
video sequences for each of 153 unique subjects. The video sequences were captured using a low resolution thermal
camera. Matching performance of the proposed algorithms is evaluated using a nearest neighbor classifier. The
outcome of this work is an efficient algorithm for backpack detection and human identification, and a basis for
further study in silhouette enhancement.
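The contour-unwrapping idea behind the spatiotemporal gait curve of Wang et al. can be sketched as one column of the pattern per frame: the distance from the silhouette centroid to its outer boundary, sampled over a fixed number of directions. The binning scheme below is an assumed simplification of the original contour-tracing procedure.

```python
import numpy as np

def gait_curve(silhouette, n_angles=72):
    """One frame's column of the spatiotemporal gait pattern: distance
    from the silhouette centroid to the farthest silhouette pixel in
    each of n_angles angular directions."""
    ys, xs = np.nonzero(silhouette)
    cy, cx = ys.mean(), xs.mean()
    ang = np.arctan2(ys - cy, xs - cx)
    dist = np.hypot(ys - cy, xs - cx)
    bins = ((ang + np.pi) / (2 * np.pi) * n_angles).astype(int) % n_angles
    curve = np.zeros(n_angles)
    for b, d in zip(bins, dist):
        curve[b] = max(curve[b], d)   # outer boundary = farthest pixel per direction
    return curve
```

Stacking these curves over the frames of a walking sequence yields the 2-D pattern that a nearest-neighbor classifier can compare across subjects.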
We discuss the problem of preserving the privacy of a digital face image stored in a central database. In the
proposed scheme, a private face image is dithered into two host face images such that it can be revealed only
when both host images are simultaneously available; at the same time, the individual host images do not reveal
the identity of the original image. In order to accomplish this, we appeal to the field of Visual Cryptography.
Experimental results confirm the following: (a) the possibility of hiding a private face image in two unrelated
host face images; (b) the successful matching of face images that are reconstructed by superimposing the host
images; and (c) the inability of the host images, known as sheets, to reveal the identity of the secret face image.
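The underlying primitive can be illustrated with the classic (2,2) visual cryptography scheme on a binary image: each secret pixel expands to a 2x2 subpixel block, each share alone is random, and stacking the shares reveals the secret. The paper's scheme embeds the sheets in natural-looking host face images, which this basic sketch does not attempt.

```python
import numpy as np

def make_shares(secret, seed=0):
    """Basic (2,2) visual cryptography on a binary image (1 = black).
    For a white pixel both shares get the same random 2x2 pattern; for
    a black pixel they get complementary patterns."""
    rng = np.random.default_rng(seed)
    patterns = np.array([[1, 0, 0, 1], [0, 1, 1, 0]])  # two-black-subpixel blocks
    h, w = secret.shape
    s1 = np.zeros((2 * h, 2 * w), dtype=int)
    s2 = np.zeros_like(s1)
    for i in range(h):
        for j in range(w):
            p = patterns[rng.integers(2)]
            q = 1 - p if secret[i, j] else p
            s1[2*i:2*i+2, 2*j:2*j+2] = p.reshape(2, 2)
            s2[2*i:2*i+2, 2*j:2*j+2] = q.reshape(2, 2)
    return s1, s2

def stack(s1, s2):
    # Superimposing transparencies = logical OR of black subpixels.
    return s1 | s2
```

After stacking, black secret pixels become fully black 2x2 blocks while white pixels stay half-black, producing the visible contrast that reveals the secret.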
While fusion can be accomplished at multiple levels in a multibiometric system, score level fusion is commonly used as it
offers a good trade-off between fusion complexity and data availability. However, missing scores affect the implementation
of several biometric fusion rules. While there are several techniques for handling missing data, the imputation scheme -
which replaces missing values with predicted values - is preferred since this scheme can be followed by a standard fusion
scheme designed for complete data. This paper compares the performance of three imputation methods: Imputation
via Maximum Likelihood Estimation (MLE), Multiple Imputation (MI) and Random Draw Imputation through Gaussian
Mixture Model estimation (RD GMM). A novel method called Hot-deck GMM is also introduced and exhibits markedly
better performance than the other methods because of its ability to preserve the local structure of the score distribution.
Experiments on the MSU dataset indicate the robustness of the schemes in handling missing scores at various missing data
rates.
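The flavor of hot-deck imputation can be shown with a simplified nearest-donor variant: each incomplete score vector copies the missing matcher's score from its closest complete "donor" vector, which is what preserves the local structure of the score distribution. The paper's Hot-deck GMM method additionally conditions the donor search on a Gaussian mixture model, which this sketch omits.

```python
import numpy as np

def hotdeck_impute(scores):
    """Simplified hot-deck imputation for a (n_samples, n_matchers)
    score matrix with NaN marking missing scores: fill each missing
    entry from the nearest complete donor row, with distance computed
    over the observed matchers only."""
    out = scores.copy()
    complete = scores[~np.isnan(scores).any(axis=1)]
    for row in out:
        miss = np.isnan(row)
        if miss.any() and len(complete):
            d = np.linalg.norm(complete[:, ~miss] - row[~miss], axis=1)
            row[miss] = complete[d.argmin(), miss]
    return out
```

The completed matrix can then be fed to any standard fusion rule designed for complete data.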
Given a query fingerprint, the goal of indexing is to identify and retrieve a set of candidate fingerprints from a
large database in order to determine a possible match. This significantly improves the response time of fingerprint
recognition systems operating in the identification mode. In this work, we extend the indexing framework based
on minutiae triplets by utilizing ridge curve parameters in conjunction with minutiae information to enhance
indexing performance. Further, we demonstrate that the proposed technique facilitates the indexing of fingerprint
images acquired using different sensors. Experiments on the publicly available FVC database confirm the utility
of the proposed approach in indexing fingerprints.
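The baseline minutiae-triplet indexing framework can be sketched as follows: each triplet of minutiae is reduced to a key that is invariant to rotation and translation (here, quantized sorted side lengths), and the keys vote for gallery fingerprints. The ridge-curve parameters that the paper adds to this key are not modeled here.

```python
import numpy as np
from collections import defaultdict
from itertools import combinations

def triplet_key(p1, p2, p3, bin_size=20.0):
    # Sorted, quantized side lengths make the key invariant to
    # rotation, translation and point ordering.
    sides = sorted(np.hypot(*np.subtract(a, b))
                   for a, b in ((p1, p2), (p2, p3), (p1, p3)))
    return tuple(int(s // bin_size) for s in sides)

def build_index(gallery):
    """gallery: {finger_id: list of (x, y) minutiae}.  Every triplet's
    key votes for the finger it came from; at query time the probe's
    triplet keys are looked up and candidates ranked by vote count."""
    table = defaultdict(set)
    for fid, pts in gallery.items():
        for tri in combinations(pts, 3):
            table[triplet_key(*tri)].add(fid)
    return table
```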
Biometric sensor interoperability refers to the ability of a system to compensate for the variability introduced in the biometric data of an individual due to the deployment of different sensors. We demonstrate that a simple non-linear calibration scheme, based on Thin Plate Splines (TPS), is sufficient to facilitate sensor interoperability in the context of fingerprints. In the proposed technique, the variation between the images acquired using two
different sensors is modeled using non-linear distortions. Experiments indicate that the proposed calibration scheme can significantly improve inter-sensor matching performance.
Fingerprint mosaicing entails the reconciliation of information presented by two or more impressions of a finger in order to generate composite information. It can be accomplished by blending these impressions into a single mosaic, or by integrating the feature sets (viz., minutiae information) pertaining to these impressions. In this work, we use Thin-plate Splines (TPS) to model the relative transformation between two impressions of a finger
thereby accounting for the non-linear distortion present between them. The estimated deformation is used (a) to register the two images and blend them into a single entity before extracting minutiae from the resulting mosaic (image mosaicing); and (b) to register the minutiae point sets corresponding to the two images and
integrate them into a single master minutiae set (feature mosaicing). Experiments conducted on the FVC 2002 DB1 database indicate that both mosaicing schemes result in improved matching performance although feature mosaicing is observed to outperform image mosaicing.
Multibiometric systems utilize the evidence presented by multiple biometric sources (e.g., face and fingerprint, multiple fingers of a user, multiple matchers, etc.) in order to determine or verify the identity of an individual. Information from multiple sources can be consolidated at several distinct levels, including the feature extraction level, match score level and decision level. While fusion at the match score and decision levels has been extensively studied in the literature, fusion at the feature level is a relatively understudied problem. In this paper, we discuss fusion at the feature level in three different scenarios: (i) fusion of PCA and LDA coefficients of face; (ii) fusion of LDA coefficients corresponding to the R, G, B channels of a face image; (iii) fusion of face and hand modalities. Preliminary results are encouraging and help in highlighting the pros and cons of performing fusion at this level. The primary motivation of this work is to demonstrate the viability of such a fusion and to underscore the importance of pursuing further research in this direction.
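In its simplest form, feature-level fusion amounts to normalizing each feature set and concatenating the vectors; the sketch below assumes z-score normalization, with the feature-selection or dimensionality-reduction step that typically follows omitted for brevity.

```python
import numpy as np

def fuse_features(*feature_sets):
    """Feature-level fusion by z-score normalizing each feature set
    (e.g., the PCA and LDA coefficients of the same face) and
    concatenating the normalized vectors sample-wise."""
    normed = []
    for F in feature_sets:               # F: (n_samples, d_i)
        mu, sd = F.mean(axis=0), F.std(axis=0) + 1e-12
        normed.append((F - mu) / sd)
    return np.hstack(normed)
```

Normalizing before concatenation matters because the raw coefficient ranges of heterogeneous feature sets (PCA vs. LDA, face vs. hand) can differ by orders of magnitude and would otherwise dominate the fused vector.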
We show that minutiae information can reveal substantial details, such as the orientation field and the class of the associated fingerprint, that can potentially be used to reconstruct the original fingerprint image. The proposed technique utilizes minutiae triplet
information to estimate the orientation map of the parent fingerprint. The estimated orientation map is observed to be remarkably consistent with the underlying ridge flow. We next discuss a classification technique that utilizes minutiae information alone to infer the class of the fingerprint. Preliminary results indicate that the seemingly random minutiae distribution of a fingerprint can reveal important class information. Furthermore, contrary to what has been claimed by several minutiae-based fingerprint system vendors, we demonstrate that the minutiae template of a user may be used to reconstruct fingerprint images.
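The orientation-map step can be illustrated with a simple interpolation stand-in: average the minutiae directions in the doubled-angle domain (handling the mod-pi ambiguity of ridge orientations) with inverse-distance weights. The paper's triplet-based estimation is more elaborate; this is only a sketch of the principle.

```python
import numpy as np

def orientation_field(minutiae, shape, eps=1e-6):
    """Estimate a pixel-wise ridge orientation map from minutiae alone.
    minutiae: list of (x, y, theta) with theta in radians.  Orientations
    are averaged as doubled-angle unit vectors with inverse-square
    distance weights, then halved back to [-pi/2, pi/2]."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    num = np.zeros(shape, complex)
    den = np.zeros(shape)
    for (mx, my, th) in minutiae:
        w = 1.0 / (np.hypot(xs - mx, ys - my) ** 2 + eps)
        num += w * np.exp(2j * th)   # doubled angle: theta and theta+pi agree
        den += w
    return np.angle(num / den) / 2.0
```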