In this paper, we propose a novel method for robust super-resolution of face images that addresses the practical problems of traditional manifold analysis. Face super-resolution aims to recover a high-resolution face image from a given low-resolution face image by modeling the face image space across multiple resolutions. In particular, face super-resolution is useful for enhancing face images captured in surveillance footage. Face super-resolution should be preceded by an analysis of the characteristics of the face image distribution. In the literature, various manifold learning algorithms have shown that face images lie on a nonlinear manifold, so taking the manifold structure into account when modeling the face image space can improve the results of face super-resolution. However, some practical problems prevent manifold analysis from being applied to super-resolution. Most manifold learning methods cannot produce mapping functions for new test images that are absent from the training set. A further significant problem arises when applying manifold analysis to super-resolution: super-resolution seeks to recover a high-dimensional image from a low-dimensional one, whereas manifold learning methods perform the exact opposite for dimensionality reduction.
To overcome these limitations, we propose a novel face super-resolution method using Locality Preserving Projections (LPP). LPP has an advantage over other manifold learning methods in that it provides well-defined linear projections, which allow us to formulate explicit mappings between high-dimensional and low-dimensional data. Moreover, we show that the LPP coefficients of an unknown high-resolution image can be inferred from a given low-resolution image using a maximum a posteriori (MAP) estimator.
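As a rough illustration of this inference step, the following NumPy sketch assumes a linear degradation model (a blur-and-downsample matrix D), a reconstruction basis W_rec derived from the learned LPP projection (e.g., via its pseudo-inverse), a mean face mu, a zero-mean Gaussian prior on the coefficients with covariance Lambda estimated from the training coefficients, and Gaussian observation noise with variance sigma2. All names and the specific Gaussian modeling choices are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def map_lpp_coefficients(l, D, W_rec, mu, Lambda, sigma2):
    """MAP estimate of LPP coefficients c for an unknown HR face h = mu + W_rec @ c,
    observed as l = D @ h + Gaussian noise (variance sigma2), with a zero-mean
    Gaussian prior on c whose covariance Lambda is estimated from training data."""
    A = D @ W_rec                                   # effective forward operator on c
    precision = A.T @ A / sigma2 + np.linalg.inv(Lambda)
    rhs = A.T @ (l - D @ mu) / sigma2
    return np.linalg.solve(precision, rhs)          # posterior mode of c

def reconstruct_hr(c, W_rec, mu):
    """Back-project the inferred coefficients to a high-resolution face estimate."""
    return mu + W_rec @ c
```

Under these Gaussian assumptions the MAP estimate reduces to a single linear solve, so the high-resolution face follows directly from the inferred coefficients.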
The Face Recognition Grand Challenge (FRGC) dataset is one of the most challenging datasets in the face recognition community; within it, we focus on the hardest experiment, conducted under harsh uncontrolled conditions. In this paper we compare popular face recognition algorithms such as Direct Linear Discriminant Analysis (D-LDA) and Gram-Schmidt LDA against traditional eigenfaces and fisherfaces. We show, however, that all of these linear subspace methods fail to discriminate faces well due to large nonlinear distortions in the face images. We therefore present our proposed Class-dependence Feature Analysis (CFA) method, which we demonstrate to produce superior performance by representing nonlinear features well. We achieve this by extending the traditional CFA framework with kernel methods and propose an appropriate choice of kernel parameters, which significantly improves the overall face recognition performance over the competing algorithms. We present results of the proposed approach on the large-scale FRGC v2 database, which contains over 36,000 images, focusing on Experiment 4, the harshest scenario, with images captured under uncontrolled indoor and outdoor conditions and exhibiting significant illumination variations.
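To make the kernel extension concrete, the sketch below illustrates the general idea under simplifying assumptions: an RBF (Gaussian) kernel with width sigma, and a regularized least-squares design used as a stand-in for the correlation-filter design in CFA. The class-dependence feature vector of a probe image is then the vector of per-class filter responses in the kernel-induced space. The function names and parameters (sigma, reg) are hypothetical and do not reproduce the paper's exact formulation.

```python
import numpy as np

def rbf_kernel(A, B, sigma):
    """Gaussian (RBF) kernel matrix between two sets of row-vector images."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-sq / (2.0 * sigma**2))

def train_kernel_cfa(X, labels, sigma, reg=1e-3):
    """Per-class filters in the kernel-induced space via regularized least squares:
    each filter is trained to respond 1 to its own class and 0 to the others
    (a simplified surrogate for the correlation-filter design used by CFA)."""
    labels = np.asarray(labels)
    K = rbf_kernel(X, X, sigma)
    classes = np.unique(labels)
    T = (labels[:, None] == classes[None, :]).astype(float)   # one-vs-rest targets
    Alpha = np.linalg.solve(K + reg * np.eye(len(X)), T)       # (N, n_classes)
    return Alpha, classes

def cfa_features(x_probe, Alpha, X_train, sigma):
    """Class-dependence feature vector: one kernel filter response per class."""
    k = rbf_kernel(x_probe[None, :], X_train, sigma)           # (1, N)
    return (k @ Alpha).ravel()                                 # (n_classes,)
```

In this simplified view, the kernel width sigma plays the role of the kernel parameter whose choice the abstract highlights: it controls how sharply the per-class responses discriminate between nonlinearly distorted faces.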