WO1999052059A2 - Method and apparatus for performing robust recognition - Google Patents
Method and apparatus for performing robust recognition
- Publication number
- WO1999052059A2 (PCT/IB1999/000975)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- input
- subspace
- input item
- training data
- general characteristics
- Prior art date
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Definitions
- the present invention relates to a robust face recognition system and method, and, more particularly, to a robust face recognition system and method capable of compensating for differences between the general characteristics of input data and training data in a classification system.
- PCA is a standard technique used to approximate original data with lower dimensional feature vectors. More specifically, using PCA, the number of vector dimensions required to represent original data is reduced, thereby simplifying calculations.
- the basic approach of PCA recognition is to compute the eigenvectors of a covariance matrix corresponding to vectors representing the original data, and to classify the original data based on a linear combination of only the highest-order eigenvectors.
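The eigenvector computation described above can be sketched as follows. This is a generic NumPy illustration of PCA-based dimensionality reduction, not code from the patent, and the function names are hypothetical:

```python
import numpy as np

def pca_basis(X, M):
    """Top-M eigenvectors of the covariance matrix of the training
    vectors X (one vectorized image per row)."""
    mean = X.mean(axis=0)
    C = np.cov(X - mean, rowvar=False)       # covariance of the centered data
    vals, vecs = np.linalg.eigh(C)           # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:M]       # indices of the M largest eigenvalues
    return mean, vecs[:, order]              # basis: one eigenvector per column

def project(x, mean, basis):
    """Projection coefficients of an image vector onto the subspace."""
    return (x - mean) @ basis
```

An input vector can then be classified from its M projection coefficients rather than its full N-dimensional representation.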
- Although conventional application of PCA generally introduces error by considering fewer than all of the dimensions of the vector representing the original data, the error is generally small since the highest-order eigenvalues are used.
- Subspace LDA has been employed to improve upon conventional PCA and LDA based systems.
- Subspace LDA involves performing LDA using a space or subspace that is generated based upon the original input space, e.g., through PCA.
- a description of systems employing conventional PCA, LDA, and subspace LDA, and related mathematics, can be found in "Statistical Pattern Recognition" by K. Fukunaga, "Using Discriminant Eigenfeatures for Image Retrieval” by D.L. Swets and J. Weng, and “Mathematical Statistics” by S.S. Wilks, which references are herein incorporated by reference in their entirety.
- the conventional PCA, LDA, and subspace LDA systems are each susceptible to errors introduced when the general characteristics of the original data differ from the general characteristics of the training data to which it is matched. Namely, classification errors result from differences between the general characteristics of input data and training data, such as rotational orientation, translational orientation, scale, and illumination.
- the present invention includes an apparatus and method for classifying input data.
- One of the methods of the present invention includes the steps of reducing differences between general characteristics of an input item and training data used to classify the input item, and classifying the input item through comparison with the training data.
- the step of reducing differences between general characteristics of the input item and the training data includes manipulating general characteristics of an original subspace defined by the training data, projecting the input item into the manipulated subspace before classifying the input item, and determining projection coefficients that are used to project the input item into the manipulated subspace.
- the step of classifying the input item includes comparing the projected input item to the training data, and classifying the input item by comparing the projected input item and the training data based on differences between the projection coefficients of the input item and the projection coefficients of the training data that are defined by a projection of the training data into the original subspace. These differences are determined by mapping the projection coefficients of the input item and the projection coefficients of the training data into a classification space before comparison.
- Another method of the present invention includes classifying input data by representing the input data using an input space, manipulating the input space, projecting the input data into the manipulated input space, and classifying the input data based on projection coefficients used to project the input data into the manipulated input space.
- the input item and training data may correspond to images, sounds, colors or other data of varying dimension, where manipulation is performed with respect to one or more of the general characteristics of the original subspace, including rotational orientation, translational orientation, scale, and illumination.
- the space or subspace to be used for classification of the image is selected based on whether the input item corresponds more closely to the training data after being projected into the manipulated subspace or after being projected into the original space or subspace.
- An apparatus of the present invention includes a processor, collection of processors, or a program having one or more modules that are capable of performing the above-described functions. Further scope of applicability of the present invention will become apparent from the detailed description given hereinafter. However, it should be understood that the detailed description and specific examples, while indicating preferred embodiments of the invention, are given by way of example only, since various changes and modifications within the spirit and scope of the invention will become apparent to those skilled in the art from this detailed description. BRIEF DESCRIPTION OF THE ATTACHED DRAWINGS The present invention will become more fully understood from the detailed description given below and the accompanying drawings, which are given by way of illustration only and which are therefore not limiting of the present invention, and wherein:
- Figure 1 illustrates the components of the face recognition system of the present invention;
- Figure 2 is a flowchart showing an example of the steps performed by the processor of the present invention;
- Figures 3A-3B illustrate a system and method for correcting a two-dimensional rotation of the input face image with respect to the images used to form the training data;
- Figures 4A-4B illustrate a system and method for correcting misalignment of the input face image with respect to the images used to form the training data;
- Figures 5A-5B illustrate a system and method for adjusting the scale of the input face image to more closely correspond with the scale of the images used to form the training data;
- Figures 6A-6B illustrate methods for correcting differences in illumination between the input face image and the images used to form the training data, or to compensate for shadowing in the input face image;
- Figures 7A-7B are a flowchart describing a method for integrating plural manipulations of the face subspace and/or input face image; and
- Figure 8 is a block diagram showing the stages performed in one of many possible implementations of an integration between plural manipulations of the face subspace and/or input face image.
- the face recognition system of the present invention includes several modules that provide robustness against variances in the input face image .
- the modules operate to compensate for inconsistencies between general characteristics of the input face image and the training data before classification against pre-stored face images.
- the modules are capable of correcting inconsistencies in one or more general characteristics between the input image data and the training images, such as differences in illumination, scale, alignment and two-dimensional orientation/rotation. These modules can be used independently or in combination, depending upon the particular application.
- Figure 1 illustrates components of the face recognition system of the present invention, including an image input device (101), an image storage device (102), and a processor (103).
- Image input device 101 is a camera, scanner or some other input device capable of supplying an input face image.
- Image storage device 102 is a hard drive, optical disk, RAM, ROM, or other memory device capable of storing predetermined training data, such as a predetermined set of training images, projection coefficients or data corresponding thereto.
- Processor 103 is a digital signal processor, a microprocessor, or other device capable of performing the manipulations and comparisons discussed hereinafter.
- Figure 2 is a flowchart showing an example of steps performed by the processor 103 of Figure 1.
- processor 103 generates an input vector corresponding to an input face image received from image input device 101 (step 201), performs pre-processing to manipulate the face subspace if necessary (step 202), projects the input vector into the manipulated face subspace (step 203), maps the input vector or projection coefficients from the manipulated face subspace to the classification space (step 204), and compares the input vector with vectors corresponding to training data in the classification space (step 205). More specifically, in step 201, the processor generates an input vector corresponding to the input face image.
- One method of generating such an input vector is to concatenate the rows of pixels forming the input face image, where the dimension of the input vector is defined by the number of pixels forming the input face image.
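As a toy illustration of this row-concatenation step (the image values here are arbitrary, not from the patent):

```python
import numpy as np

# Hypothetical 4x3 grayscale image; the input vector is simply the rows
# laid end to end, so its dimension equals the number of pixels.
image = np.arange(12, dtype=float).reshape(4, 3)
input_vector = image.flatten()           # row-major concatenation
assert input_vector.shape == (4 * 3,)    # dimension = pixel count
```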
- the processor performs one or more manipulations of a face subspace, if necessary. It is necessary to perform manipulation(s) when the input face image and the training images have inconsistent general characteristics such as rotational orientation, scale, illumination and translational position. If an input face image and the training images have inconsistent general characteristics, projection of each onto a common subspace will produce different projection coefficients, even if the input face image and the training image are of the same person/item. These differences will likely lead to classification errors, since classification is ordinarily based on a comparison between the coefficients necessary to project the input face image onto the face subspace, after projecting those coefficients into the classification space. However, these classification errors can be reduced or prevented by manipulating the face subspace when the general characteristics of the input face image are inconsistent with the training images. Specifically, the face subspace is defined based on characteristics of the training images that show differences between those different training images.
- the particular dimensions of the face subspace are generally determined through the use of principal component analysis (PCA) or artificial neural networks (ANN). These and other techniques may be used to select a reduced group of dimensions that are representative of the trained or input face image, enabling a decrease in computational burden.
- the relationship between the input face image and the manipulated subspace will become consistent with the relationship between the training images and the original subspace.
- the projection coefficients α̃ required to project the input face image onto the manipulated subspace will be consistent with the projection coefficients α required to project the training images onto the original subspace. Since the projection coefficients are rendered consistent by this manipulation, classification error will be avoided.
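This consistency property can be illustrated with a toy linear model in which the distortion is an orthogonal transform R applied both to the input vector and to the subspace basis. This is an illustrative sketch, not the patent's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
basis = np.linalg.qr(rng.normal(size=(8, 3)))[0]   # orthonormal subspace basis
x = rng.normal(size=8)                             # normalized training-style vector

R = np.linalg.qr(rng.normal(size=(8, 8)))[0]       # stand-in for a rotation/shift
x_distorted = R @ x                                # distorted input image
basis_manip = R @ basis                            # identically manipulated subspace

a_orig  = basis.T @ x                              # coefficients in original subspace
a_manip = basis_manip.T @ x_distorted              # coefficients in manipulated subspace
assert np.allclose(a_orig, a_manip)                # consistent, as the method requires
```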
- manipulation of the face subspace in step 202 is appropriate when the input face image is not perfectly normalized, imperfect normalization occurring when that input face image is rotated in two dimensions, misaligned, changed in scale or illuminated differently relative to the images used to form the training data.
- the input face image is also imperfectly normalized when selective portions of that image are illuminated differently due to shadowing.
- Manipulations performed by the processor include changes in rotation as described with respect to Figures 3A-3B, changes in alignment as described with respect to Figures 4A-4B, changes in scale as described with respect to Figures 5A-5B, changes with respect to illumination as described with respect to Figures 6A-6B, and other changes not explicitly illustrated that are useful in bringing the input image into conformity with the general characteristics of the training images.
- the input face image I can be described using components within the face subspace as follows: Ĩ = Σ_{i=1}^{M} α_i Φ_i (2), where
- N is the dimension of the original image space,
- M is the dimension of the subspace, and
- α represents the series of projection coefficients used to project the input face image into the face subspace (Φ_i being the basis vectors of the subspace).
- when the face subspace is manipulated, equation (2) can be represented as follows: Ĩ = Σ_{i=1}^{M} α̃_i Φ̃_i, where
- α̃ is obtained from α with the same mapping which transforms I to Ĩ.
- This mapping includes geometric mapping, intensity mapping, and geometrical-intensity mapping such as filtering.
- the face subspace may be manipulated to have general characteristics Φ̃ that are inconsistent with the general characteristics Φ of the original face subspace to the same extent that the general characteristics of the distorted input image data Ĩ are inconsistent with the general characteristics of the training image data I.
- the projection coefficients α̃ of the distorted input image data Ĩ can then be compared with the projection coefficients α of the training image data without distortion or error.
- the processor projects the input vector into the manipulated face subspace.
- the coefficients required to project a distorted input face image onto the manipulated subspace are the same as the coefficients required to project a corresponding normal training image onto the original face subspace. As such, no error is introduced into the projection coefficients through the projection of the input face image in step 203.
- the input face image is classified by comparing the projection coefficients of the input face image to the projection coefficients of the training images.
- the input face image is effectively projected onto a classification space C, which is a space with dimensions that are the same as or less than those of the original space. That is, the projection coefficients α produced when projecting the input face image from the image space into the face subspace are projected into the classification space C.
- When the face subspace has been manipulated, the projection coefficients α̃ used to project the input face image onto that manipulated face subspace are mapped, in step 204, onto the classification space C.
- This mapping may be performed using linear discriminant analysis (LDA), whereby the coefficients may be separated to enable a more sensitive comparison.
- the projection coefficients α of the input face image are compared to the projection coefficients α_t of the different training images within the classification space C. More specifically, once projected into the classification space C, the projected projection coefficients of each are compared to determine classification of the input face image. Similarity measurements, such as a distance measure like the Euclidean distance, are determined for comparison of the projection coefficients within the classification space.
- One or more comparison rules such as the Nearest-Neighbor Rule, are then used to make comparisons based on the distance determined.
- If the projection coefficients α of the input face image are sufficiently similar to those of a training image, the input face image is classified according to that training image. However, if the projection coefficients α of the input face image are not sufficiently similar to any of the projection coefficients α_t of the training images, the operation may be repeated, or the input face image may be rejected.
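A minimal sketch of this nearest-neighbor comparison with rejection; the rejection distance is an assumed tuning parameter, not a value from the text:

```python
import numpy as np

def classify(coeffs, train_coeffs, labels, reject_dist):
    """Nearest-neighbor rule over projection coefficients in the
    classification space, with rejection when no training item is
    sufficiently similar. reject_dist is an assumed threshold."""
    dists = np.linalg.norm(train_coeffs - coeffs, axis=1)  # Euclidean distances
    best = int(np.argmin(dists))
    if dists[best] > reject_dist:
        return None                                        # reject the input
    return labels[best]
```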
- steps 203-205 are performed in parallel with respect to both the original face subspace and one or more manipulated subspaces. If the projection coefficients α used to project the input image data into the original face subspace correlate most closely with the projection coefficients α_t of the training images, the original face subspace is used for classification; otherwise, the manipulated subspace yielding the closest correlation is used.
- Figures 3A-6B illustrate examples of systems and methods performed by the processor during pre-processing.
- Figures 3A-3B illustrate a system and method for correcting a two-dimensional rotation of the input face image with respect to the images used to form the training data.
- Figure 3A shows a system in which a face image is input to processor 103 by image input device 101, and at least one training image is input to processor 103 from image storage device 102.
- Processor 103 is shown having multiple modules for projecting the images into original and rotating subspaces, performing linear discriminant mapping, and comparing the mapped coefficients using Euclidean measurements and the nearest neighbor rule.
- Figure 3B shows the method for correcting the two-dimensional rotation of the input face image with respect to the training images.
- Step 301 involves rotating the face subspace by one or more predetermined angles.
- Step 302 involves projecting the input face image onto each of the rotated subspaces generated in step 301, with projection coefficients α defining the projection of the input face image into each of the rotated subspaces.
- Step 303 involves mapping the different sets of projection coefficients α_i to a classification space C. This process may be performed using a linear discriminant transform W based on linear discriminant analysis (LDA), or a like method.
- Step 304 involves comparing the projection coefficients α of the input face image to the projection coefficients α_t of the trained images in the classification space C.
- Step 305 involves classifying the input face image based on the comparisons made in step 304.
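Steps 302-304 can be sketched as a search over candidate rotated subspaces; the basis matrices, discriminant transform W, and training coefficients below are assumed inputs, not structures specified by the patent:

```python
import numpy as np

def best_rotation_match(x, bases, W, train_coeffs):
    """For each candidate rotated subspace basis (column-orthonormal
    matrix), project the input vector, map the coefficients through
    the discriminant transform W, and return the index of the best
    subspace together with the distance to the closest training item."""
    best = (None, np.inf)
    for i, B in enumerate(bases):
        c = W @ (B.T @ x)                                 # steps 302-303
        d = np.linalg.norm(train_coeffs - c, axis=1).min()  # step 304
        if d < best[1]:
            best = (i, d)
    return best
```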
- the input face image may be projected into several rotated subspaces.
- the projection coefficients corresponding to each of those different rotated subspaces would then be mapped into classification space C via the same discriminant mapping W as applied to the pre-stored normal training images.
- the projection coefficients α of the input face image for each of the rotated subspaces are then compared to the projection coefficients α_t of the training images in the classification space C. Classification follows based on these comparisons and rules such as the nearest-neighbor rule.
- Figures 4A-4B illustrate a system and method for correcting misalignment of the input face image with respect to the images used to form the training data.
- the system and method of Figures 4A-4B are similar to the system and method of Figures 3A-3B, except that the original subspace is translated rather than rotated.
- Figures 5A-5B illustrate a system and method for adjusting the scale of the input face image to more closely correspond with the scale of the images used to form the training data.
- the system and method of Figures 5A-5B are similar to the system and method of Figures 3A-3B, except that the original subspace is changed in scale rather than rotated.
- Figure 6A illustrates a method for detecting and compensating shadows appearing in the input face image. Shadows are recognized where an area of the input face image is significantly darker than it would be under normal illumination conditions.
- In step 601, areas within the input face image are searched for shadows, excluding areas that, based on geometric location information, are likely to be covered by hair.
- the search for shadows involves first and second order statistics and geometric location information. For instance, shadows may be detected by comparing a threshold against the illumination, and changes in illumination, of areas in the input face image. As such, areas of an image with relatively low intensity and low variance are candidates for shadow. More specifically, assuming the orientation of the face image is known, a shadow may be detected if the area of an image corresponding to one side is significantly darker than an area of the image corresponding to the other side.
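A minimal sketch of the side-to-side darkness comparison described above, assuming a known upright face orientation; the threshold ratio is an assumed parameter, not a value from the patent:

```python
import numpy as np

def detect_side_shadow(face, ratio=0.6):
    """Flag a shadow when one side of the (assumed upright) face image
    is significantly darker than the other. 'ratio' is an assumed
    threshold on the mean-intensity comparison."""
    h, w = face.shape
    left, right = face[:, : w // 2], face[:, w - w // 2 :]
    mean_left, mean_right = left.mean(), right.mean()
    if mean_left < ratio * mean_right:
        return "left"                      # left side is in shadow
    if mean_right < ratio * mean_left:
        return "right"                     # right side is in shadow
    return None                            # no strong side-to-side imbalance
```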
- the shadows are compensated by replacing or modifying the detected shadow area with the illumination of a corresponding area of the image not having a shadow.
- the present invention replaces shadow areas detected on one side of the face image with corresponding non-shadow areas from the other side of the face image.
- the processed data is then transmitted for classification or further manipulation of the image or subspace.
- Figure 6B illustrates a method for correcting differences in illumination between the input face image and the images used to form the training data.
- In step 611, for purposes of detection, not training, pixels positioned outside the facial area of the input face image are reset to zero using a face-shaped mask.
- In step 612, local and global textures are determined and adjusted based on a texture preserving filter having the characteristics noted in equation (4) below:
- h(n,m) = 1/((2M+1)(2L+1)) for -M ≤ n ≤ M, -L ≤ m ≤ L (4), where
- h(n,m) is the 2D impulse response in the time domain, and
- (2M+1)(2L+1) is the window size.
- the intensity variation I(x,y) in a local neighborhood is assumed to comprise two components: I_l(x,y), corresponding to local texture, and I_g(x,y), corresponding to smooth global variation.
- a balanced global intensity variation may be obtained for classification by averaging the intensities of two portions of the image, e.g., I_g^left(x,y) and I_g^right(x,y).
- Because the texture preserving filter of Equation (4) includes a moving average filter, it essentially balances the global and smooth intensity variation and preserves the local textural variation. In addition, this filter will not change the illumination of the input face image under normal, balanced illumination.
- the texture preserving filter of Equation (4) represents a simple filter used in two dimensions, but other filters can be used in two dimensions to balance the global and/or smooth intensity variations of the input face image, particularly for more complex variations such as a sharp specular reflection. Also, if the data has fewer than two dimensions (e.g., sound) or more than two dimensions (e.g., color or a sequence of images), the filter can be modified to apply in a corresponding number of dimensions.
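A simple moving-average filter of the kind Equation (4) describes can be sketched as follows; the reflect-padding edge handling is an implementation choice, not from the patent:

```python
import numpy as np

def moving_average_2d(img, L, M):
    """(2M+1)-row by (2L+1)-column moving-average filter: an estimate
    of the smooth global intensity variation I_g. Subtracting the
    result from the image leaves the local texture I_l."""
    padded = np.pad(img, ((M, M), (L, L)), mode="reflect")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(-M, M + 1):                  # sum over the window
        for dx in range(-L, L + 1):
            out += padded[M + dy : M + dy + img.shape[0],
                          L + dx : L + dx + img.shape[1]]
    return out / ((2 * M + 1) * (2 * L + 1))    # normalize by window size
```

Under normal, balanced illumination a constant-intensity region passes through unchanged, matching the behavior noted in the text.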
- In step 613, after the global illumination is corrected, the projection coefficients of the processed image are further pre-processed or projected into classification space C for classification.
- While Figures 3A-6B describe multiple discrete methods of manipulating the face subspace and/or input face image, a combination of the manipulations may be used to address multiple sources of normalization error.
- Figures 7A-7B show a general method for integrating a combination of manipulations, such as those described with reference to Figures 3A-6B.
- Step 701 involves projecting an input face image into one or more manipulated subspaces as well as the original subspace.
- Step 702 maps the projection coefficients α corresponding to each of the manipulated subspaces into a classification space C, and compares those projection coefficients with the projection coefficients α_t of the training images similarly mapped.
- Step 703 involves calculating a correlation between the projection coefficients α for each manipulated subspace and the projection coefficients of the training images.
- Step 704 requires the determination of the highest correlation among those calculated in step 703.
- In steps 705A-705B, the input face image is automatically classified if the highest correlation exceeds a threshold (e.g., 0.99).
- Otherwise, the highest correlation is used to select the manipulated subspace or the original subspace corresponding to that highest correlation.
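The decision in steps 704-705B can be sketched as follows; the correlations over the original and manipulated subspaces are assumed precomputed, and the 0.99 acceptance threshold follows the example in the text:

```python
import numpy as np

def integrate(correlations, accept=0.99):
    """Pick the subspace with the highest correlation to the training
    data; classify immediately if it exceeds the acceptance threshold,
    otherwise return the index of the subspace to refine further."""
    best = int(np.argmax(correlations))
    if correlations[best] > accept:
        return ("classify", best)   # steps 705A-705B: automatic classification
    return ("refine", best)         # select this subspace for further manipulation
```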
- one or more additional manipulations may be performed on the selected subspace to correct additional inconsistencies between the general characteristics of the input image data and the training data.
- shadow detection and correction and/or illumination compensation may be performed in accordance with the methods described in Figures 6A and 6B.
- the resulting projection coefficients α are mapped into a classification space C for comparison with the projection coefficients α_t of the training images, and, in step 710, classification of the input face image is performed.
- Figure 8 shows a block diagram describing steps of a system capable of performing the manipulations and corrections described in the process of Figures 7A-7B.
- an input face image I is simultaneously projected into an original subspace 801, a rotated subspace 802, a scaled subspace 803, and a translated subspace 804.
- Projection coefficients α resulting from each projection are mapped to a linear discriminant classifier 805 for comparison with the projection coefficients α_t used to project the trained images onto the original face subspace.
- Classification is attempted based on the coefficients that correlate most closely with the coefficients of a training image. If the highest correspondence between the input image and any of the training images exceeds a predetermined threshold, the results of the classification are used for identification and the process is concluded.
- Otherwise, the projection coefficients are passed through the switch 806 to the front-view detection module 807.
- the projection coefficients α of the selected subspace are compared against projection coefficients corresponding to a three-dimensional rotation. If the input face image is deemed to be rotated in three dimensions based on this comparison, either of two conditions exists depending upon the correlation between the selected image and the training image. Specifically, if the correlation between the selected input face image and the training image exceeds a second, lower predetermined threshold (e.g., 0.85), the results of the attempted classification performed in module 805 are used for identification, and the process is concluded. Alternatively, if the correlation does not exceed the second, lower predetermined threshold, the attempted classification of the face image is rejected.
- the selected subspace and/or corresponding projection coefficients α are passed to the shadow detection module 808 and light balancing module 809.
- the shadow detecting and light balancing modules operate as described previously with respect to Figures 6A and 6B.
- the projection coefficients α are mapped into the classification space C by subspace LDA classifier 810 for comparison with the projection coefficients α_t of training images, and classification is ultimately performed.
- the present invention provides a system and method for compensating inconsistencies between the general characteristics of an input face image and training images, thus enabling more accurate classification of input face images.
- General characteristics that may be compensated using the present invention include, but are not limited to, rotational orientation, scale, translational orientation, and illumination.
- While the present invention is described as manipulating the face subspace to compensate for inconsistencies in the general characteristics of the input image data and the training data, it is also possible to manipulate the input image data to compensate for such inconsistencies.
- While the present invention has been described with respect to the classification of facial images, it is also applicable to classification systems for other types of image and non-image data, particularly sound.
- the input data and the training data are likely to differ with regard to different general characteristics. For instance, with sound, manipulations may be required for tone, pitch, and volume.
- the input device and storage device will obviously handle different data types. Still further, if necessary, the dimensionality of at least the input space, the subspace, the classification space and the texture preserving filter are changed to reflect the dimensionality of the input data.
Abstract
Description
Claims
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US5523198A | 1998-04-06 | 1998-04-06 | |
US09/055,231 | 1998-04-06 |
Publications (2)
Publication Number | Publication Date |
---|---|
WO1999052059A2 true WO1999052059A2 (en) | 1999-10-14 |
WO1999052059A3 WO1999052059A3 (en) | 2000-08-31 |
Family
ID=21996540
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/IB1999/000975 WO1999052059A2 (en) | 1998-04-06 | 1999-04-06 | Method and apparatus for performing robust recognition |
Country Status (2)
Country | Link |
---|---|
KR (1) | KR20010013501A (en) |
WO (1) | WO1999052059A2 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11928582B1 (en) * | 2018-12-31 | 2024-03-12 | Cadence Design Systems, Inc. | System, media, and method for deep learning |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20040042500A (en) * | 2002-11-14 | 2004-05-20 | 엘지전자 주식회사 | Face detection based on pca-lda |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5386103A (en) * | 1993-07-06 | 1995-01-31 | Neurnetics Ltd. | Identification and verification system |
US5497430A (en) * | 1994-11-07 | 1996-03-05 | Physical Optics Corporation | Method and apparatus for image recognition using invariant feature signals |
US5699449A (en) * | 1994-11-14 | 1997-12-16 | The University Of Connecticut | Method and apparatus for implementation of neural networks for face recognition |
US6038337A (en) * | 1996-03-29 | 2000-03-14 | Nec Research Institute, Inc. | Method and apparatus for object recognition |
-
1999
- 1999-04-06 WO PCT/IB1999/000975 patent/WO1999052059A2/en not_active Application Discontinuation
- 1999-04-06 KR KR1019997011508A patent/KR20010013501A/en not_active Application Discontinuation
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5386103A (en) * | 1993-07-06 | 1995-01-31 | Neurnetics Ltd. | Identification and verification system |
US5497430A (en) * | 1994-11-07 | 1996-03-05 | Physical Optics Corporation | Method and apparatus for image recognition using invariant feature signals |
US5699449A (en) * | 1994-11-14 | 1997-12-16 | The University Of Connecticut | Method and apparatus for implementation of neural networks for face recognition |
US6038337A (en) * | 1996-03-29 | 2000-03-14 | Nec Research Institute, Inc. | Method and apparatus for object recognition |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11928582B1 (en) * | 2018-12-31 | 2024-03-12 | Cadence Design Systems, Inc. | System, media, and method for deep learning |
Also Published As
Publication number | Publication date |
---|---|
WO1999052059A3 (en) | 2000-08-31 |
KR20010013501A (en) | 2001-02-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP4529172B2 (en) | Method and apparatus for detecting red eye region in digital image | |
JP4903854B2 (en) | Object detection method in digital image | |
JP4443722B2 (en) | Image recognition apparatus and method | |
US8254644B2 (en) | Method, apparatus, and program for detecting facial characteristic points | |
US7092554B2 (en) | Method for detecting eye and mouth positions in a digital image | |
US7925093B2 (en) | Image recognition apparatus | |
US6151403A (en) | Method for automatic detection of human eyes in digital images | |
US8861845B2 (en) | Detecting and correcting redeye in an image | |
EP0864134B1 (en) | Vector correlation system for automatically locating patterns in an image | |
CA2218793C (en) | Multi-modal system for locating objects in images | |
US7995805B2 (en) | Image matching apparatus, image matching method, computer program and computer-readable storage medium | |
JP4604439B2 (en) | Image processing apparatus, image processing method, and recording medium | |
US6934406B1 (en) | Image processing apparatus, image processing method, and recording medium recorded with image processing program to process image taking into consideration difference in image pickup condition using AAM | |
EP2590140A1 (en) | Facial authentication system, facial authentication method, and facial authentication program | |
US20040252882A1 (en) | Object recognition using binary image quantization and Hough kernels | |
WO1996042040A2 (en) | Network-based system and method for detection of faces and the like | |
IL172480A (en) | Method for automatic detection and classification of objects and patterns in low resolution environments | |
US7570815B2 (en) | Comparing patterns | |
Ng et al. | An effective segmentation method for iris recognition system | |
JPH0573663A (en) | Recognition method for picture of three-dimensional object | |
WO1999052059A2 (en) | Method and apparatus for performing robust recognition | |
Nagao et al. | Using photometric invariants for 3D object recognition | |
Betta et al. | Metrological characterization of 3D biometric face recognition systems in actual operating conditions | |
Short | Illumination invariance for face verification | |
JP2001022924A (en) | Pattern collating device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A2 Designated state(s): CN JP KR |
|
AL | Designated countries for regional patents |
Kind code of ref document: A2 Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1019997011508 Country of ref document: KR |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
AK | Designated states |
Kind code of ref document: A3 Designated state(s): CN JP KR |
|
AL | Designated countries for regional patents |
Kind code of ref document: A3 Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE |
|
122 | Ep: pct application non-entry in european phase | ||
WWP | Wipo information: published in national office |
Ref document number: 1019997011508 Country of ref document: KR |
|
WWW | Wipo information: withdrawn in national office |
Ref document number: 1019997011508 Country of ref document: KR |