In this work, we investigate 2D face biometrics to secure areas requiring a high security level. Different approaches to person authentication are compared, based on emerging deep learning methods (more precisely, transfer learning) as well as two classical machine learning techniques (Support Vector Machines and Random Forest). Preprocessing filtering steps are applied to the input images before feature extraction and selection. The goal is to compare these approaches in terms of processing time, storage size, and authentication accuracy, as a function of the number of input images used for learning and of the preprocessing applied. We focus on data-related aspects so that biometric information can be stored on a remote card with low storage capacity (10 KB), not only in a high-security context but also for privacy control: the proposed solutions guarantee users control over their own biometric data. The study highlights the impact of preprocessing on real-time computation, preserving relevant accuracy while reducing the amount of biometric data. Considering application constraints, the study concludes with a discussion of the tradeoff between available resources and required performance to determine the most appropriate method.
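A minimal sketch of the storage constraint above, assuming a hypothetical 128-dimensional face embedding produced by a pre-trained network (the abstract does not specify the representation): quantizing the template to 8 bits keeps one enrollment well under the 10 KB card budget, and authentication reduces to a cosine-similarity test against the stored template.

```python
import numpy as np

EMBED_DIM = 128                   # hypothetical embedding size (assumption)
CARD_CAPACITY_BYTES = 10 * 1024   # ~10 KB smart-card budget from the study

def quantize_template(embedding: np.ndarray) -> bytes:
    """Quantize a unit-norm float embedding to int8 so it fits on the card."""
    q = np.clip(np.round(embedding * 127), -127, 127).astype(np.int8)
    return q.tobytes()

def dequantize_template(blob: bytes) -> np.ndarray:
    v = np.frombuffer(blob, dtype=np.int8).astype(np.float32) / 127.0
    return v / (np.linalg.norm(v) + 1e-12)

def authenticate(probe: np.ndarray, stored: bytes, threshold: float = 0.8) -> bool:
    """Accept if cosine similarity between probe and stored template is high."""
    template = dequantize_template(stored)
    probe = probe / (np.linalg.norm(probe) + 1e-12)
    return float(probe @ template) >= threshold

rng = np.random.default_rng(0)
enrol = rng.normal(size=EMBED_DIM)
enrol /= np.linalg.norm(enrol)
blob = quantize_template(enrol)             # 128 bytes: far below the card budget
genuine = authenticate(enrol + 0.05 * rng.normal(size=EMBED_DIM), blob)
impostor = authenticate(rng.normal(size=EMBED_DIM), blob)
```

The threshold and quantization scheme here are illustrative; the paper's actual tradeoff study would fix them from the measured accuracy/storage curves.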
Despite the evolution of technologies, designing high-quality image acquisition systems remains a complex challenge. During the acquisition process, the recorded image does not fully represent the real visual scene: the recorded information can be partial due to dynamic range limitations and degraded by distortions of the acquisition system. Typically, these issues have several origins, such as lens blur or the limited resolution of the image sensor. In this paper, we propose a full image enhancement system that includes lens blur correction based on non-blind deconvolution, followed by spatial resolution enhancement based on a Super-Resolution technique. The lens correction was implemented in software, whereas the Super-Resolution was implemented both in software and in hardware (on an FPGA). Both processing steps were validated using well-known image quality metrics, highlighting improvements in the quality of the resulting images.
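As an illustration of the non-blind setting (the point spread function of the lens is assumed known), a classical Wiener deconvolution can be sketched in a few lines; the paper's actual correction method is not detailed in the abstract, so this is only a representative baseline.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=1e-2):
    """Non-blind Wiener deconvolution: the lens PSF is assumed known.

    k plays the role of the noise-to-signal ratio and regularizes
    frequencies where the blur kernel response is close to zero.
    """
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    F = np.conj(H) / (np.abs(H) ** 2 + k) * G   # Wiener filter in frequency domain
    return np.real(np.fft.ifft2(F))

# Toy check: blur a synthetic image with a known 3x3 box PSF, then restore it.
rng = np.random.default_rng(1)
sharp = rng.random((64, 64))
psf = np.ones((3, 3)) / 9.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(psf, s=sharp.shape)))
restored = wiener_deconvolve(blurred, psf, k=1e-4)
err_blurred = np.mean((blurred - sharp) ** 2)
err_restored = np.mean((restored - sharp) ** 2)
```

On this noiseless toy example the restoration error drops well below the blur error; with real sensor noise, k must be raised accordingly.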
We propose a supervised approach to detect falls in a home environment using an optimized descriptor suited to real-time operation. We introduce a realistic dataset of 222 videos, a new metric for evaluating fall detection performance in a video stream, and an automatically optimized set of spatio-temporal descriptors that feed a supervised classifier. We build the initial spatio-temporal descriptor, named STHF, from several combinations of transformations of geometrical features (height and width of the human body bounding box, the user's trajectory and orientation, projection histograms, and moments of orders 0, 1, and 2). We study combinations of usual feature transformations (Fourier transform, wavelet transform, first and second derivatives) and show experimentally that high performance can be achieved with support vector machine and AdaBoost classifiers. Automatic feature selection shows that the best tradeoff between classification performance and processing time is obtained by combining the original low-level features with their first derivative. We then evaluate the robustness of the fall detection to location changes, and propose a realistic and pragmatic protocol that improves performance by updating the training set with records of normal activities in the current location.
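The retained tradeoff (raw low-level features plus their first derivative) can be sketched as follows; the window size and the restriction to bounding-box height and width are assumptions for illustration, not the full STHF feature set.

```python
import numpy as np

def sthf_descriptor(heights, widths, window=16):
    """Sketch of the retained descriptor: raw geometrical features of the
    bounding box over a sliding window, concatenated with their first
    derivative (reported as the best speed/accuracy tradeoff)."""
    h = np.asarray(heights[-window:], dtype=float)
    w = np.asarray(widths[-window:], dtype=float)
    feats = np.concatenate([h, w])
    deriv = np.concatenate([np.diff(h, prepend=h[0]), np.diff(w, prepend=w[0])])
    return np.concatenate([feats, deriv])

# A fall typically appears as a sharp drop in bounding-box height; the
# derivative channel makes that drop explicit for the downstream classifier.
standing = sthf_descriptor([180] * 16, [60] * 16)
falling = sthf_descriptor([180] * 8 + [150, 120, 90, 70, 60, 55, 52, 50], [60] * 16)
```

For the standing sequence the derivative half of the descriptor is all zeros, while the falling sequence shows large negative height derivatives, which is exactly the signal an SVM or AdaBoost classifier can pick up.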
An architecture for fast video object recognition is proposed. It is based on an approximation of a feature extraction function, Zernike moments, and an approximation of a classification framework, Support Vector Machines (SVM). We review the principles of the moment-based method and of the approximation method, dithering. We evaluate the performance of two moment-based methods, Hu invariants and Zernike moments, and the implementation cost of the better one. We then review the principles of the classification method and present a combination algorithm that rejects ambiguities in the learning set using the SVM decision, before running the learning step of the hyperrectangles-based method. We present results obtained on a standard database, COIL-100, evaluated in terms of hardware cost as well as classification performance.
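The combination idea can be sketched end to end: train an SVM (here a tiny self-contained Pegasos-style linear SVM, standing in for whatever SVM the paper uses), discard the training samples it misclassifies, then learn one axis-aligned hyperrectangle per class from the cleaned set. The data and dimensions are illustrative.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Tiny Pegasos-style linear SVM (hinge loss, sub-gradient descent)."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1]); b = 0.0; t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            t += 1; eta = 1.0 / (lam * t)
            if y[i] * (X[i] @ w + b) < 1:
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
                b += eta * y[i]
            else:
                w = (1 - eta * lam) * w
    return w, b

def hyperrectangles(X, y, w, b):
    """Keep only samples the SVM classifies correctly (ambiguity rejection),
    then learn one axis-aligned bounding box per class: the decision rule
    becomes a pure containment test, cheap enough for hardware."""
    keep = np.sign(X @ w + b) == y
    Xk, yk = X[keep], y[keep]
    return {c: (Xk[yk == c].min(0), Xk[yk == c].max(0)) for c in (-1, 1)}

def classify(x, boxes):
    for c, (lo, hi) in boxes.items():
        if np.all(x >= lo) and np.all(x <= hi):
            return c
    return 0  # outside every box: rejected

rng = np.random.default_rng(2)
Xp = rng.normal(loc=[2, 2], scale=0.3, size=(50, 2))
Xn = rng.normal(loc=[-2, -2], scale=0.3, size=(50, 2))
X = np.vstack([Xp, Xn]); y = np.array([1] * 50 + [-1] * 50)
w, b = train_linear_svm(X, y)
boxes = hyperrectangles(X, y, w, b)
```

The SVM is only used offline, during learning; at recognition time the hyperrectangle containment test alone runs, which is what makes the approach attractive for hardware implementation.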
We present a classification work performed on industrial parts using artificial vision, a support vector machine (SVM), boosting, and a combination of classifiers. The object to be controlled is a coated heater used in television sets. Our project consists of detecting anomalies during production, as well as classifying them among 20 listed categories. Manufacturer specifications require a minimum of ten inspections per second without a decrease in the quality of the produced parts. This problem is addressed with a classification system relying on real-time machine vision. To fulfill both real-time and quality constraints, three classification algorithms and a tree-based classification method are compared. The first, hyperrectangle based, proves to be well adapted to real-time constraints; the second is based on the AdaBoost algorithm; and the third, based on SVM, has better generalization power. Finally, a decision tree that improves classification performance is presented.
We present a multiscale edge detection algorithm whose aim is to detect edges whatever their slope. Our work is based on a generalization of the Canny-Deriche filter, characterized by a more realistic edge model than the traditional step edge. The filter impulse response is used to generate a multiscale edge detection scheme. For merging the edge information, we use a geometrical classifier developed in our laboratory. Once trained, the resulting segmentation system requires no adjustment and depends on no parameter. The main original property of this algorithm is that it produces a binary edge image without any threshold setting. The quality of the results is inferior to that of classical multiscale merging approaches; nevertheless, this system, designed for real-time operation, performs satisfactorily on well-contrasted images and excellently on noisy images.
This paper presents a classification work performed on industrial parts using artificial vision, SVM, and a combination of classifiers. Prior to this study, defect detection was performed by human inspectors; unfortunately, the inspection procedure took far too long and the misclassification rate was too high. Our project consists of detecting anomalies under manufacturer production and cost constraints, as well as classifying the anomalies among twenty listed categories. The manufacturer's specifications require a minimum of ten inspections per second without a decrease in the quality of the produced parts. This problem can be solved with a classification system relying on real-time machine vision. To fulfill both real-time and quality constraints, two classification algorithms and a tree-based classification method were compared. The first, hyperrectangle based, proved to be well adapted to real-time constraints. The second, based on the Support Vector Machine (SVM), is more robust but also more complex and more demanding in computing time. Finally, naïve rules were defined to build a decision tree and combine it with one of the previous classification algorithms.
In this paper, we propose an improved implementation of the support vector machine decision rule, applied to real-time image segmentation. We achieve very high-speed decisions (approximately 10 ns per pixel), which can be useful for detecting anomalies on manufactured parts. We propose an original combination of classifiers allowing fast and robust classification applied to image segmentation. The SVM is used in a first step to pre-process the training set, rejecting any ambiguities. The hyperrectangles-based learning algorithm is then applied to the SVM-classified training set. We show that the hyperrectangle method matches the SVM in terms of performance, at a lower implementation cost using reconfigurable computing. We review the principles of the two classifiers, the hyperrectangles-based method and the SVM, and present our combination method applied to the image segmentation of an industrial part.
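What makes the hyperrectangle decision rule so cheap is visible in a sketch of the per-pixel test: 2d comparisons per pixel, no multiplications, which maps directly onto reconfigurable logic. The feature choice (three per-pixel channels) and the box bounds below are illustrative assumptions.

```python
import numpy as np

def segment(pixels, lo, hi):
    """Per-pixel hyperrectangle decision: 2*d comparisons and no
    multiplications, which is why the rule maps so well to FPGA logic
    (the paper reports decisions around 10 ns per pixel in hardware)."""
    inside = np.logical_and(pixels >= lo, pixels <= hi)
    return np.all(inside, axis=-1)  # True where the pixel belongs to the class

# Toy segmentation: 3 features per pixel (e.g. R, G, B), one "defect" box.
lo = np.array([100, 0, 0]); hi = np.array([255, 80, 80])  # reddish pixels (assumed)
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[1, 1] = (200, 40, 40)   # defect-like pixel
img[2, 2] = (40, 200, 40)   # background pixel
mask = segment(img, lo, hi)
```

In hardware each of the comparisons runs in parallel and the AND-reduction is a single gate tree, so the latency per pixel is essentially one comparator depth.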
KEYWORDS: Digital signal processing, Image processing, Logic, Cameras, 3D image processing, Field programmable gate arrays, Detection and tracking algorithms, Data processing, Calibration, Computer architecture
The problem of acquiring 3D data of a human face arises in face recognition, virtual reality, and many other applications. It can be solved using stereovision, a technique that consists of acquiring three-dimensional data from two cameras. The aim is to implement an algorithmic chain that reconstructs a three-dimensional space from two two-dimensional spaces: the two images coming from the two cameras. Several implementations have already been considered. We propose a new, simple, real-time implementation based on a multiprocessor approach (FPGA-DSP), allowing embedded processing. We then present our method, which provides a dense and reliable depth map of the face and can be implemented on an embedded architecture. A study of various architectures led us to a judicious choice yielding the desired result, with the real-time data processing implemented on an embedded architecture. We obtain a dense face disparity map, precise enough for the considered applications (multimedia, virtual worlds, biometrics), using a reliable method.
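The core of a dense disparity computation can be sketched with simple block matching; the abstract does not name the matching cost the authors use, so sum-of-absolute-differences (SAD) is assumed here because its regular, local structure is exactly what suits an FPGA-DSP pipeline.

```python
import numpy as np

def sad_disparity(left, right, max_disp=8, block=3):
    """Block-matching stereo sketch: for each pixel, pick the horizontal
    shift minimizing the sum of absolute differences (SAD) over a small
    window. Every pixel's search is independent and local, so the loop
    nest unrolls naturally into a hardware pipeline."""
    h, w = left.shape
    r = block // 2
    disp = np.zeros((h, w), dtype=int)
    for yy in range(r, h - r):
        for xx in range(r + max_disp, w - r):
            patch = left[yy - r:yy + r + 1, xx - r:xx + r + 1].astype(int)
            costs = [np.abs(patch - right[yy - r:yy + r + 1,
                                          xx - d - r:xx - d + r + 1].astype(int)).sum()
                     for d in range(max_disp)]
            disp[yy, xx] = int(np.argmin(costs))
    return disp

# Toy pair: the right image is the left image shifted 2 pixels to the left,
# so the true disparity is 2 wherever it is observable.
rng = np.random.default_rng(3)
left = rng.integers(0, 256, size=(16, 32))
right = np.roll(left, -2, axis=1)
disp = sad_disparity(left, right)
```

A real face-depth system would add rectification and a reliability check on the matching cost; this sketch only shows the disparity search itself.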
KEYWORDS: Electronic filtering, Defect detection, Signal to noise ratio, Image segmentation, Signal detection, Light sources and illumination, Machine vision, Gaussian filters, Digital filtering, Bandpass filters
Quality control by artificial vision is becoming more and more widespread in industry. Indeed, in many cases, industrial applications require inspection with high stability at high production rates. The purpose of this paper is to present a method to detect, in real time, defects located on the circumference of industrial parts with a circular shape. However, production steps can lead to oval shapes or to parts of different sizes, and both phenomena can cause defects to be missed: a constant mask cannot detect these defects correctly. The control of the circularity of these parts can be achieved in two steps.
Quality control by artificial vision is becoming more and more widespread in industry. Indeed, in many cases, industrial applications require inspection with high stability at high production rates. For texture control, some major problems arise: the difficulty of discriminating different textures, and segmentation, classification, and decision phases that still require too much computation time. This article presents a comparison between two non-parametric classification methods used for the real-time control of textured objects moving at a rate of 10 pieces per second. Four types of flaws have to be detected indifferently: smooth surfaces, bumps, hollow knocked surfaces, and lacks of material. These defects generate texture variations that have to be detected and sorted by our system, each flaw occurrence being registered in order to carry out a survey of the production cycle. We previously presented a search for an optimal lighting system, with which the acquired images were greatly improved, and a method for selecting the best segmentation features on these optimal images. The third step, presented here, is a comparison between two multi-class classification algorithms: Parzen's estimator and the so-called 'stressed polytopes' method. Both algorithms require a learning phase and are based on a non-parametric discrimination of the flaw classes. On the one hand, both are relatively inexpensive in computation time; on the other hand, they present different assets regarding the ease of the learning phase and the number of usable segmentation features. They also behave differently in partitioning the feature space, especially at the cross-class borders. Their comparison is made on the aforementioned points, which are relevant for evaluating discrimination efficiency.
Finally, we present the results of such a comparison on an industrial example. The control system, a PC-based machine, includes the computation of five classification features (carried out on the local neighborhood of each pixel), five distinct classes for the classification phase, and the decision phase. This led, for the best compromise, to a classification error ratio of 3.63%.
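A Parzen-window classifier of the kind compared above can be sketched in a few lines: estimate each class density with a Gaussian kernel over its training samples and pick the class with the highest estimated density. The 2D features, class layout, and bandwidth below are illustrative assumptions, not the paper's actual feature space.

```python
import numpy as np

def parzen_classify(x, class_samples, h=0.5):
    """Parzen-window classifier sketch: non-parametric, like the two
    methods compared in the paper. h is the kernel bandwidth."""
    best, best_p = None, -1.0
    for label, samples in class_samples.items():
        d2 = np.sum((samples - x) ** 2, axis=1)
        p = np.mean(np.exp(-d2 / (2 * h * h)))  # Gaussian-kernel density estimate
        if p > best_p:
            best, best_p = label, p
    return best

# Hypothetical 2D texture features for three of the flaw classes.
rng = np.random.default_rng(4)
classes = {
    "smooth": rng.normal([0, 0], 0.3, size=(40, 2)),
    "bump": rng.normal([3, 0], 0.3, size=(40, 2)),
    "hollow": rng.normal([0, 3], 0.3, size=(40, 2)),
}
```

The cost per decision is linear in the number of training samples, which is why the paper weighs it against the 'stressed polytopes' method at a rate of 10 pieces per second.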
This paper presents a method for selecting segmentation parameters, developed for an industrial study. The problem consists of detecting four types of defects on textured industrial parts: smooth surfaces, bumps, lacks of material, and hollow knocked surfaces. The lighting system used in this application is not described in this paper but was presented in a previous study on the characterization of lighting.
Quality control in industrial applications has greatly benefited from the development of tools like artificial vision. To obtain a good-quality image of the object under inspection, the first step is to use a good lighting system. This paper presents a reliable method for comparing several lighting setups with respect to their ability to bring out defects. The study was carried out on textured industrial parts on which four types of defects have to be detected: smooth surfaces, bumps, lacks of material, and hollow knocked surfaces. The aim is to determine the best lighting among various experimental setups. The method has two stages: the first is the definition, according to prior knowledge and the shape of the defects, of a pertinent attribute vector whose components are defect sensitive; in the second, the discrimination power of this vector is computed and compared under various illuminations using Parzen's kernel. This method ensures well-suited illumination in numerous defect detection applications and leads to an efficient combination of lighting system and segmentation parameters. Work is under way to generalize the method to the multidimensional case, in order to account for interactions between the components of the attribute vector.
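One plausible reading of the second stage is a Parzen-kernel overlap score between the defect and non-defect attribute distributions under each lighting: the lower the cross-class kernel similarity, the better that lighting separates the classes. The score below and the synthetic "lighting A / lighting B" data are assumptions for illustration.

```python
import numpy as np

def overlap_score(a, b, h=0.5):
    """Parzen-style overlap between two classes of attribute vectors:
    mean Gaussian-kernel similarity over all cross-class pairs.
    Lower means the attribute vector discriminates better."""
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return float(np.mean(np.exp(-d2 / (2 * h * h))))

rng = np.random.default_rng(5)
# Hypothetical attribute vectors for "good" vs "defect" under two lightings:
# lighting A separates the classes well, lighting B barely does.
good_A = rng.normal(0.0, 0.3, size=(50, 2)); defect_A = rng.normal(3.0, 0.3, size=(50, 2))
good_B = rng.normal(0.0, 0.3, size=(50, 2)); defect_B = rng.normal(0.5, 0.3, size=(50, 2))
score_A = overlap_score(good_A, defect_A)
score_B = overlap_score(good_B, defect_B)
```

Comparing such scores across candidate setups gives a quantitative criterion for choosing the lighting, which is the role the abstract assigns to the Parzen-kernel stage.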
KEYWORDS: Edge detection, Wavelets, Image segmentation, Image filtering, Digital filtering, Electronic filtering, Signal to noise ratio, Detection and tracking algorithms, Linear filtering, Optimization (mathematics)
We present in the following work a multiscale edge detection algorithm whose aim is to detect edges of any slope. Our work is based on a generalization of the Canny-Deriche filter, modeled by a more realistic edge than the traditional step edge. The filter impulse response is used to generate a frame of wavelets. For merging the wavelet coefficients, we use a geometrical classifier developed in our laboratory. Once trained, the resulting segmentation system requires no adjustment and no parameter. The main original property of this algorithm is that it produces a binary edge image without any threshold setting.
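The thresholdless principle can be illustrated with a simplified stand-in: multiscale gradient responses (box smoothing here, instead of the generalized Canny-Deriche wavelets) form a per-pixel feature vector, and a trained nearest-centroid classifier, a crude proxy for the paper's geometrical classifier, turns it directly into a binary edge map with no threshold to tune.

```python
import numpy as np

def multiscale_gradient(img, scales=(1, 2, 4)):
    """Per-pixel feature vector of horizontal gradient magnitude at several
    smoothing scales (a simple stand-in for the wavelet responses)."""
    feats = []
    for s in scales:
        k = np.ones(2 * s + 1) / (2 * s + 1)
        sm = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"),
                                 1, img.astype(float))
        feats.append(np.abs(np.gradient(sm, axis=1)))
    return np.stack(feats, axis=-1)

def train_centroids(feats, labels):
    """'Training phase': one centroid per class (0 = non-edge, 1 = edge)."""
    return {c: feats[labels == c].mean(axis=0) for c in (0, 1)}

def classify_edges(feats, cents):
    d0 = np.sum((feats - cents[0]) ** 2, axis=-1)
    d1 = np.sum((feats - cents[1]) ** 2, axis=-1)
    return (d1 < d0).astype(np.uint8)   # binary edge image, no threshold set

# Synthetic vertical step edge at column 8 provides training labels.
img = np.zeros((16, 16)); img[:, 8:] = 1.0
F = multiscale_gradient(img)
labels = np.zeros((16, 16), dtype=int); labels[:, 7:9] = 1
cents = train_centroids(F, labels)
edges = classify_edges(F, cents)
```

Once the centroids are learned, classification is a pure distance comparison: the binary decision emerges from training data rather than from a user-set threshold, which is the property the abstract highlights.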