WO2007103698A2 - Invariant radial iris segmentation - Google Patents

Invariant radial iris segmentation

Info

Publication number
WO2007103698A2
WO2007103698A2 (application PCT/US2007/063019)
Authority
WO
WIPO (PCT)
Prior art keywords
peaks
iris
pupil
peak
subject
Prior art date
Application number
PCT/US2007/063019
Other languages
French (fr)
Other versions
WO2007103698A3 (en)
Inventor
Rida M. Hamza
Original Assignee
Honeywell International Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US11/372,854 external-priority patent/US8442276B2/en
Priority claimed from US11/382,373 external-priority patent/US8064647B2/en
Application filed by Honeywell International Inc. filed Critical Honeywell International Inc.
Priority to KR1020087022043A priority Critical patent/KR101423153B1/en
Priority to JP2008558461A priority patent/JP4805359B2/en
Priority to GB0815933A priority patent/GB2450027B/en
Publication of WO2007103698A2 publication Critical patent/WO2007103698A2/en
Publication of WO2007103698A3 publication Critical patent/WO2007103698A3/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/117 Identification of persons
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/193 Preprocessing; Feature extraction

Definitions

  • the invention is directed towards biometric recognition, specifically to an improved approach to radial iris segmentation.
  • Biometrics is the study of automated methods for uniquely recognizing humans based upon one or more intrinsic physical or behavioral traits.
  • biometric authentication refers to technologies that measure and analyze human physical characteristics for authentication purposes. Examples of physical characteristics include fingerprints, eye retinas and irises, facial patterns and hand measurements.
  • a leading concern of existing biometric systems is that individual features that identify humans from others can be easily missed due to the lack of accurate acquisition of the biometric data, or due to deviations in operational conditions. Iris recognition has been seen as a low error, high success method of retrieving biometric data. However, iris scanning and image processing have been costly and time consuming. Fingerprinting, facial patterns and hand measurements have afforded cheaper, quicker solutions. [0006] During the past few years, iris recognition has matured sufficiently to allow it to compete economically with other biometric methods. However, inconsistency in the acquisition conditions of iris images has led to rejecting valid subjects or validating imposters, especially when the scan is done under uncontrolled environmental conditions.
  • iris recognition has proven to be very effective. This is true because iris recognition systems rely on more distinct features than other biometric techniques such as facial patterns and hand measurements and therefore provide a reliable solution by offering a much more discriminating biometric data set.
  • Iris segmentation is the process of locating and isolating the iris from the other parts of the eye. Iris segmentation is essential to the system's use. Computing iris features requires a high quality segmentation process that focuses on the subject's iris and properly extracts its borders. Such an acquisition process is sensitive to the acquisition conditions and has proven to be a very challenging problem. Current systems try to maximize the segmentation accuracy by constraining the operation conditions. Constraints may be placed on the lighting levels, position of the scanned eye, and environmental temperature. These constraints can lead to a more accurate iris acquisition, but are not practical in all real time operations.
  • a new feature extraction technique is presented along with a new encoding scheme resulting in an improved biometric algorithm.
  • This new extraction technique is based on a simplified polar segmentation (POSE).
  • the new encoding scheme utilizes the new extraction technique to extract actual local iris features using a process with low computational load.
  • the encoding scheme does not rely on accurate segmentation of the outer bounds of the iris region, which is essential to prior art techniques. Rather, it relies on the identification of peaks and valleys in the iris (i.e., the noticeable points of change in color intensity in the iris).
  • the encoding scheme does not rely on the exact location of the occurrence of peaks detected in the iris, but rather relies on the magnitude of detected peaks relative to a referenced first peak. Since this algorithm does not rely on the exact location of pattern peaks/valleys, it does not require accurate segmentation of the outer boundary of the iris, which in turn eliminates the need for a normalization process.
  • the overall function of the present invention can be summarized as follows.
  • the iris is preprocessed and then localized using an enhanced segmentation process based on a POSE approach, herein referred to as invariant radial POSE segmentation.
  • all obscurant parts (i.e., pupil, eyelid, eyelashes, sclera and other non-essential parts of the eye) are dropped from the analysis if the obscuration reaches the inner border of the iris.
  • Lighting correction and contrast improvement are processed to compensate for differences in image lighting and reflective conditions.
  • the captured iris image is unwrapped into several radial segments and each segment is analyzed to generate a one dimensional dataset representing the peak and/or valley data for that segment.
  • the peak and/or valley data is one dimensional in the sense that peaks and/or valleys are ordered in accordance with their position along a straight line directed radially outward from the center of the iris.
  • the iris image is unwrapped into a one-dimensional polar representation of the iris signature, in which the data for only a single peak per radial segment is stored.
  • the magnitude of the outermost peak from the pupil-iris border per segment is stored.
  • the magnitude of the largest peak in the segment is stored.
  • the data for a plurality of peaks and/or valleys is stored per radial segment.
  • each peak and/or valley is recorded as a one bit value indicating its magnitude relative to another peak and/or valley in the segment, such as the immediately preceding peak/valley along the one dimensional direction.
  • the data for all of the radial segments is concatenated into a template representing the data for the entire iris scan. That template can be compared to stored templates to find a match.
  • Figure 1 illustrates a scanned iris image based on existing techniques.
  • Figure 2a illustrates a scanned iris image utilizing the principles of the present invention.
  • Figure 2b illustrates the scanned iris image of figure 2a mapped into a one dimensional iris map.
  • Figure 3 illustrates a flow chart showing one embodiment of the present invention.
  • Figure 4a illustrates a mapping of the iris segmentation process according to the principles of the present invention.
  • Figure 4b illustrates an enhanced mapping of the iris scan according to principles of the present invention.
  • Figure 5a illustrates a first encoding scheme according to principles of the present invention.
  • Figure 5b illustrates a second encoding scheme according to principles of the present invention.
  • a leading concern of existing biometric systems is that individual features which identify humans from others can be easily missed due to the lack of accurate data acquisition or due to deviations in operational conditions.
  • iris recognition has matured to a point that allows it to compete with more common biometric means, such as fingerprinting.
  • inconsistencies in the acquisition conditions of iris images often lead to rejecting valid subjects or validating imposters, especially under uncontrolled operational environments, such as environments where the lighting is not closely controlled.
  • iris recognition has proven to be very effective. This is so because iris recognition systems rely on more distinct features than other common biometric means, providing a reliable solution by offering a more discriminating biometric.
  • Fig. 1 shows a scanned eye image with the borders identified according to conventional prior art segmentation techniques.
  • iris 105 is defined by outer iris border 110.
  • outer iris border 110 is obstructed by the eyelid at 107 and a true border cannot be determined.
  • the system must estimate the missing portion of the outer iris border 110.
  • Computing iris features requires a high-quality segmentation process that focuses on the subject's iris and properly extracts its borders. Such a process is sensitive to the acquisition conditions and has proven to be a challenging problem (especially for uncooperative subjects that are captured at a distance). By constraining operational conditions, such as carefully controlling lighting and the position of a subject's eye, current systems attempt to resolve segmentation problems, but these approaches are not always practical.
  • Figure 2A shows an eye image scanned similarly to the one shown in Figure 1.
  • POSE: simplified polar segmentation
  • This enhanced POSE technique, or invariant radial POSE, focuses on detecting the peaks and valleys of the iris, i.e., the significant discontinuities in color intensity between the pupil and the sclera within defined radial segments of the iris.
  • a peak is a point where color intensity on either side of that point (in the selected direction) is less than the color intensity at that point (and the discontinuity exceeds some predetermined threshold so as to prevent every little discontinuity from being registered as a recorded peak).
  • a valley is a point where color intensity on either side of that point in the selected direction is greater than the color intensity at that point (with the same qualifications).
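These peak and valley definitions can be sketched in code. This is only one plausible reading of the rule: the function name, the threshold value, and the exact form of the discontinuity test are illustrative assumptions, not taken from the patent.

```python
def find_peaks_valleys(signal, threshold=0.1):
    """Locate peaks and valleys in a 1-D color-intensity profile.

    A peak is an interior point whose value exceeds both neighbors by
    more than `threshold`; a valley is the mirror case. The threshold
    suppresses minor discontinuities, as the patent describes.
    Returns (peak_indices, valley_indices).
    """
    peaks, valleys = [], []
    for i in range(1, len(signal) - 1):
        left, mid, right = signal[i - 1], signal[i], signal[i + 1]
        if mid - left > threshold and mid - right > threshold:
            peaks.append(i)
        elif left - mid > threshold and right - mid > threshold:
            valleys.append(i)
    return peaks, valleys
```

A real implementation would likely operate on the filtered derivative signal rather than raw intensity, but the qualification logic is the same.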
  • This technique is referred to as being one dimensional because, rather than collecting two dimensional image data per radial segment as in the prior art, the collected iris data per radial segment has only one signal dimension. This process eliminates the need to: estimate an obstructed outer boundary of the iris; segment the outer bound of the iris; and calculate exact parameters of circles, ellipses, or any other shapes needed to estimate a missing portion of the outer boundary. [0027] Iris 205 is scanned utilizing the invariant radial POSE process.
  • the invariant radial POSE process locates and identifies the peaks and valleys present in the scanned iris and creates an iris map.
  • Figure 2A helps illustrate one form of iris map that can represent the peak and/or valley data in an iris scan.
  • the data for only one peak is stored per radial segment.
  • To construct an iris map in accordance with this embodiment of the invention, first the iris is segmented into a set number of radial segments, for example 200 segments. Thus, each segment represents a 1.8 degree slice of a complete 360 degree scan of the iris. After each of the 200 segments is analyzed, the data for one characteristic peak in the segment is stored.
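The angular segmentation step can be sketched as sampling one radial intensity profile per segment. Everything here is an illustrative assumption (nearest-neighbour sampling, the centre-angle choice, and all names); the patent does not prescribe a sampling method.

```python
import math

def radial_profiles(image, cx, cy, r_max, n_segments=200, n_samples=64):
    """Sample one radial intensity profile per angular segment.

    `image` is a 2-D list of intensities indexed as image[y][x], and
    (cx, cy) is the pupil centre. Each profile follows the centre
    angle of its (360 / n_segments)-degree slice outward to `r_max`,
    using nearest-neighbour sampling as a simple illustrative choice.
    """
    profiles = []
    for s in range(n_segments):
        # Centre angle of segment s, in radians.
        theta = 2.0 * math.pi * (s + 0.5) / n_segments
        profile = []
        for k in range(n_samples):
            r = r_max * k / (n_samples - 1)
            x = int(round(cx + r * math.cos(theta)))
            y = int(round(cy + r * math.sin(theta)))
            profile.append(image[y][x])
        profiles.append(profile)
    return profiles
```

With the patent's example of 200 segments, each profile covers a 1.8 degree slice of the iris.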
  • the peak selected for representation in each radial segment is the peak 210 that is outermost from the pupil-iris border.
  • the selected peak may be the greatest peak (other than the peak at the pupil-iris border), the sharpest peak, or the innermost peak. If the criterion is the outermost peak, it is preferable to use the outermost peak within a predefined distance of the pupil-iris border since, as one gets closer to the iris-sclera border, the peaks and valleys tend to become less distinct and, therefore, less reliable as a criterion for identifying subjects.
  • the data corresponding to valleys instead of peaks may be recorded.
  • the recorded data need not necessarily even be a peak or valley, but may be any other readily identifiable color or contrast characteristic.
  • the distance from the center of the pupil of whichever peak or valley (or other characteristic) is selected for representation is stored.
  • the radial distance is reported as a value relative to the radial distance of a reference peak from the center of the pupil. In this manner, no normalization procedure of the iris scan is required to compensate for changes to the iris due to environmental conditions (e.g., pupil dilation, ambient light).
  • the reference peak is the peak at the pupil-iris border in that segment, which usually, if not always, will be the greatest peak in the segment.
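A sketch of this per-segment selection rule follows. The ratio form of the relative value, the `max_dist` cutoff parameter, and all names are assumptions; the patent states only that the stored distance is relative to the reference peak at the pupil-iris border.

```python
def segment_signature(peak_rs, border_r, max_dist):
    """Pick the outermost peak within `max_dist` of the pupil-iris
    border and report its position relative to the reference peak.

    `peak_rs` lists the radial distances (from the pupil centre) of
    the detected peaks in one segment; `border_r` is the distance of
    the reference peak at the pupil-iris border. Returns None when no
    peak qualifies. Expressing the result as a ratio to `border_r` is
    an illustrative way to make it a relative value.
    """
    candidates = [r for r in peak_rs if border_r < r <= border_r + max_dist]
    if not candidates:
        return None
    return max(candidates) / border_r
```

Restricting candidates to within `max_dist` of the border reflects the preference stated above for peaks away from the less reliable iris-sclera region.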
  • Figure 2B shows the scanned iris mapped into a one dimensional iris map.
  • the iris is segmented into a predetermined number of radial segments, for example 200 segments, each segment representing 1.8 degrees of a complete 360 degree scan of the iris.
  • a reference peak is selected in each segment, the reference peak being the peak at the pupil-iris border in the analyzed radial segment (which usually, if not always, will be the greatest peak in the segment).
  • the iris is unwrapped to create the graph shown in Figure 2B, with each point 215 representing the aforementioned relative radial distance of the corresponding peak for each of the radial segments.
  • the conversion of the peaks and valleys data into the graph shown in Figure 2B may be an "unwrapping" of the iris about the normal of the pupil-iris border (i.e., perpendicular to the border).
  • the pupil-iris border is essentially a circular border.
  • the pupil-iris border can be treated as a string; unwrapping that string into a straight line yields the representation of Figure 2B, with the reference peak from each radial segment represented as a discrete point 215.
  • pupil dilation may shift the entire curve 215 upwards or downwards, but this one dimensional iris representation will be unchanged with respect to the relative location of the reference peaks in each angular segment. While pupil dilation and other factors may affect the absolute locations of the peaks or valleys (i.e., their actual distances from the pupil border), they will not affect the locations of the peaks and valleys relative to the reference peaks (or valleys).
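The invariance claim can be checked numerically: scaling every radial position by a common dilation factor leaves the positions relative to the reference peak unchanged. This is a toy check under the assumption that the relative value is a ratio, not the patent's own procedure.

```python
def relative_positions(peak_rs, reference_r):
    """Radial peak positions expressed as ratios to the reference
    peak's position. A uniform dilation factor d scales both the
    numerator and the denominator, so (d*r) / (d*ref) == r / ref and
    the signature is unchanged.
    """
    return [r / reference_r for r in peak_rs]
```

For example, dilating a pupil so that the reference peak moves from radius 10.0 to 12.5 moves peaks at 12.0 and 18.0 to 15.0 and 22.5, yet both scans yield the same relative signature.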
  • Figure 4A helps illustrate the formation of an alternative and more robust representation of the scanned iris image data in which the data for multiple peaks, rather than just one characteristic peak, is recorded per radial segment.
  • the center of the pupil is indicated by cross 405.
  • the horizontal or x-axis represents the radial distance from the pupil-iris border (i.e., perpendicular to the pupil-iris border), and the vertical or y-axis represents the derivative of the color intensity.
  • the peak at the pupil-iris border is indicated at 411. All other peaks and valleys in the segment are represented graphically relative to the reference peak so that no data normalization will be necessary.
  • each radial segment usually will be several pixels wide at the pupil border 410, and becomes wider as the distance from the pupil-iris border increases. Therefore, in order to generate the one dimensional data represented in the graph of Fig. 4A, the color intensity derivative data represented by the y-axis should be averaged or interpolated over the width of the segment. This representation of the interpolated data is shown in line 415, in which each significant data peak is marked by reference numeral 420.
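The averaging step can be sketched as follows. The forward-difference derivative and the assumption that the segment's pixel rows share one radial sampling grid are illustrative simplifications.

```python
def averaged_radial_derivative(rows):
    """Average the color-intensity derivative over a segment's width.

    `rows` holds one radial intensity profile per pixel row spanned by
    the segment, all sampled at the same radial positions. The rows
    are first averaged across the segment's width, then a simple
    forward difference yields the one-dimensional derivative signal
    of Fig. 4A.
    """
    n = len(rows[0])
    averaged = [sum(row[k] for row in rows) / len(rows) for k in range(n)]
    return [averaged[k + 1] - averaged[k] for k in range(n - 1)]
```

In practice the segment widens with radius, so the number of contributing pixels per radial position would grow; a fixed row count keeps the sketch simple.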
  • Figure 4B helps illustrate an even further embodiment.
  • Graph 425 shows a graphical representation of the iris, such as the one illustrated in Figure 4A.
  • each individual peak is isolated and recorded with respect to the reference peak.
  • enhancement curve 430 is removed from the one dimensional iris representation.
  • Enhancement curve 430 is the component of the graph that can be removed without affecting the magnitude of each peak relative to the next peak, resulting in a normalized data set focusing solely on the relative peak magnitudes.
  • the enhancement curve can be calculated as the approximate component (DC component) of the decomposition of the graph of Figure 4A.
  • DC component: the approximate component
  • graph 435 results, where each peak is represented as a point 437 on the graph.
  • graph 425 is now normalized based on peak occurrence.
  • the peak data will be encoded very efficiently by encoding each peak relative to an adjacent peak using as few as one or two bits per peak. Accordingly, the removed enhancement curve simplifies the processing while preserving all needed information.
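One way to sketch the enhancement-curve removal is to subtract a slowly varying baseline. The patent computes the curve as the approximate (DC) component of a decomposition; the centred moving average used here is an illustrative stand-in, and the window size is an assumption.

```python
def remove_enhancement_curve(signal, window=9):
    """Subtract a slowly varying baseline (the 'enhancement curve')
    from the 1-D iris signal, leaving only the peak structure.

    The baseline is a centred moving average, clipped at the signal
    ends. Because the same baseline is removed everywhere, the
    magnitude of each peak relative to the next peak is preserved.
    """
    half = window // 2
    detrended = []
    for i, v in enumerate(signal):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        baseline = sum(signal[lo:hi]) / (hi - lo)
        detrended.append(v - baseline)
    return detrended
```

A wavelet or other multiresolution decomposition would give a closer match to the "approximate component" language; the moving average simply illustrates the idea.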
  • Figure 3 illustrates a flow chart showing an embodiment of the present invention.
  • Step 305 a preprocessing step takes place.
  • the preprocessing may be essentially conventional.
  • texture enhancements are performed on the scanned image. Obscurant parts of the image, such as pupils, eyelids, eyelashes, sclera and other non-essential parts of the eye are dropped out of the analysis.
  • the system preprocesses the image using a local radial texture pattern (LRTP).
  • LRTP: local radial texture pattern
  • the image is preprocessed using a local radial texture pattern similar to, but revised from, that proposed in Y. Du, R. Ives, D. Etter, T. Welch, C-I. Chang, "A one-dimensional approach for iris identification", EE Dept, US Naval Academy, Annapolis, MD, 2004.
  • I (x, y) the color intensity of the pixel located at the two dimensional coordinate x, y;
  • the curve that determines the neighboring points of the pixel x, y;
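The patent's exact LRTP formula is not reproduced in this extract, so the following is a loudly hypothetical placeholder built only from the two symbols defined above: it contrasts the pixel intensity I(x, y) against the mean intensity of the points on the surrounding curve.

```python
def lrtp(intensity, neighbor_intensities):
    """A hedged sketch of a local radial texture pattern value.

    `intensity` is I(x, y) for one pixel and `neighbor_intensities`
    holds the intensities of the points on the curve that determines
    the pixel's neighbors. Contrasting the pixel against its local
    mean is only one plausible form of such a texture measure; the
    actual revised LRTP of the patent may differ.
    """
    return intensity - sum(neighbor_intensities) / len(neighbor_intensities)
```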
  • Step 310 the Invariant Radial POSE segmentation process is performed. This approach differs from traditional techniques as it does not require iris segmentation at the outer border of the iris, i.e., the iris-sclera border.
  • the process first roughly determines the iris center in the original image, and then refines the center estimate and extracts the edges of the pupil.
  • a technique for locating the center of the pupil is disclosed in the aforementioned U.S. Patent Application No. 11/043,366, which is incorporated by reference and need not be discussed further.
  • Techniques for locating the pupil- iris border also are disclosed in the aforementioned patent application and need not be discussed further.
  • the segmentation process begins.
  • the radial scan of the iris is done in radial segments, e.g., 200 segments of 1.8 degrees each.
  • Step 315 the actual feature extraction occurs based on the segmented image obtained in Step 310.
  • the feature extraction process can be performed, for example, in accordance with any of the three embodiments previously described in connection with Figures 2A and B, 4A, and 4B, respectively, which detect changes in the graphical representation of the iris while not relying on the absolute location of the changes' occurrence. Particularly, the absolute locations change as a function of the natural dilation and contraction of the human iris when exposed to variations in environmental light conditions. Therefore, the feature extraction process relies on detecting the peak and valley relative variations in magnitude and their relative locations rather than focusing on their absolute magnitudes or locations.
  • a key advantage of this approach is that it does not require a normalization procedure of the iris scan in order to compensate for changes to the iris due to environmental conditions. A normalization procedure of the iris scan is crucial to prior art iris recognition techniques.
  • Step 320 the resulting peak data represented in graph 435 is encoded into an encoded template so that it can later be efficiently compared with stored templates of iris data for known persons.
  • Two encoding alternatives are discussed below in connection with Figures 5A and 5B, respectively. These two are shown only for example and are not meant to limit the scope of the present invention.
  • FIGS 5A and 5B help illustrate the encoding of the peak/valley data set for one radial segment of a scanned iris in accordance with two embodiments of the invention, respectively.
  • each template will comprise a plurality of such data sets, the number of such sets in a template being equal to the number of radial segments. Thus, for instance, if each segment is 1.8°, each template will comprise 200 such data sets.
  • Figure 5A illustrates a first encoding scheme which focuses on relative peak amplitude versus the amplitude of the immediately previous peak.
  • Figure 5A illustrates encoding of the peak data for a single radial segment and shows a data set for that segment.
  • Each data set comprises I×K bits, where K is the number of peaks per radial segment for which data is to be recorded and I is the number of bits used to encode each peak.
  • successive bits represent peaks progressively farther radially outward from the pupil-iris border (i.e., along the x axis in Figures 2B, 4A, and 4B, which represents distance from the pupil-iris border). If the magnitude of a peak is greater than the magnitude of the previous peak in a graph such as graph 435, the bits representing that peak are set to 11. Otherwise, the bits are set to a second value, e.g., 00.
  • the second I bits are essentially guaranteed to be 00 since, in this example, the reference peak is essentially guaranteed to have the greatest magnitude in the segment, and will, thus, always be larger than the next peak. Therefore, in this encoding scheme, the first four bits of each data set are irrelevant to matching and will not be considered during matching since they will always be identical, namely 1100.
  • the end of the data set is filled with one or more bit sets of a third value, e.g., 10 or 01, that will eventually be masked in the matching step 325.
  • the radial segment has more than K peaks, only the K peaks closest to the pupil-iris border are encoded.
  • the sequence representing the peak/valley information for this segment of the iris is 1100110011001010.
  • the first two bits represent the magnitude of the reference peak 501 and are always 11
  • the second two bits represent the magnitude of the first peak 503 in the segment and are essentially guaranteed to be 00 because it will always be smaller than the reference peak
  • the fifth and sixth bits are 11 because the next peak 505 is greater than the preceding peak 503
  • the seventh and eighth bits are 00 because the next peak 507 is less than the immediately preceding peak 505
  • the ninth and tenth bits are 11 because the next peak 509 is greater than the preceding peak 507
  • the eleventh and twelfth bits are 00 because the next peak 511 is less than the immediately preceding peak 509
  • the last four bits are 1010 corresponding to two sets of unknowns because this segment has only five peaks (and the reference peak is the sixth peak represented in the data set).
  • the sequence representing the peak/valley information for this segment of the iris is 1100000011101010 since the first two bits represent the magnitude of the reference peak 501 and are always 11, the next two bits represent the magnitude of the first peak in the segment 513 and are 00 because it is smaller than the reference peak, the next two bits are 00 because the next peak 515 is less than the preceding peak 513, the next two bits are 00 because the next peak 517 is less than the immediately preceding peak 515, the next two bits are 11 because the next peak 519 is greater than the preceding peak 517, and the last six bits are 101010 because this segment has only five peaks (including the reference peak).
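The two worked examples above can be reproduced with a short routine. The bit rules (reference peak always 11, later peaks 11 if greater than the preceding peak and 00 otherwise, unknown slots padded with the masked value 10, excess peaks dropped) follow the scheme as described; the function name, input form, and default slot count are assumptions.

```python
def encode_segment(magnitudes, slots=8):
    """Encode one radial segment under the first (Fig. 5A) scheme.

    `magnitudes` lists peak magnitudes ordered outward from the
    pupil-iris border, with the reference peak first. Each peak is
    written as a 2-bit pattern; `slots` is the fixed number of peak
    positions per data set (8 slots gives the 16-bit sets shown).
    """
    mags = magnitudes[:slots]  # keep only the peaks closest to the border
    bits = ["11"]              # the reference peak is always 11
    for prev, cur in zip(mags, mags[1:]):
        bits.append("11" if cur > prev else "00")
    bits.extend(["10"] * (slots - len(mags)))  # pad unknowns with 10
    return "".join(bits)
```

Feeding in magnitudes matching the shapes of Figures 5A's two segments reproduces the quoted sequences 1100110011001010 and 1100000011101010.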
  • FIG. 5B illustrates a second exemplary encoding scheme according to principles of the present invention.
  • This second encoding scheme also is based on a 2-bit quantization of the magnitude of the filtered peaks, but the peak magnitudes are quantized into three magnitude levels, namely, Low (L), High (H), and Medium (M).
  • Low level magnitude L is assigned 2-bit pattern 00
  • High level magnitude H is assigned 2-bit pattern 11.
  • the levels are structured in a way that will allow only one bit of error-tolerance to move from one quantization level to the adjacent one. Per this constraint, the scheme has two combinations to represent the medium level, i.e., 01 and 10.
  • the bits corresponding to unknown peaks may be identified by any reasonable means, such as appending a flag to the end of the data set indicating the number of bits that correspond to unknown peaks.
  • the levels may be encoded with three bit quantization in order to provide additional bit combinations for representing unknowns. Even further, only one value, e.g., 10, can be assigned for the Medium level, which will leave the two-bit combination 01 for representing unknowns. The unknown bits will be masked during matching, as discussed below. Likewise, if the number of peaks in a radial segment exceeds the number of peaks needed to fill the data set, then the peaks farthest from the pupil are dropped.
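The three-level quantization of this second scheme can be sketched as below. The cut points are assumptions (the patent does not fix them), and Medium is returned here as 10, one of the two codes the one-bit error-tolerance structure allows.

```python
def quantize_peak(magnitude, low_cut, high_cut):
    """Quantize a filtered peak magnitude into the three levels of the
    second (Fig. 5B) scheme: Low = 00, Medium = 01 or 10, High = 11.

    `low_cut` and `high_cut` are illustrative threshold parameters;
    choosing them is left to the implementation. The codes are ordered
    so adjacent levels differ by a single bit.
    """
    if magnitude < low_cut:
        return "00"
    if magnitude > high_cut:
        return "11"
    return "10"
```

Under this coding, a one-bit error moves a peak at most one quantization level, which is the tolerance property the scheme is structured around.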
  • Step 325 a template is constructed by concatenating all of the data sets corresponding to all of the radial segments in the iris scan.
  • Step 330 The process determines whether a scanned iris template matches a stored iris template by comparing the similarity between the corresponding bit-templates.
  • a weighted Hamming distance can be used as a metric for recognition to execute the bit-wise comparisons.
  • the comparison algorithm can incorporate a noise mask to mask out the unknown bits so that only significant bits are used in calculating the information measure distance (e.g. Hamming distance).
  • the algorithm reports a value based on the comparison. A higher value reflects fewer similarities in the templates. Therefore, the lowest value is considered to be the best matching score of two templates.
  • a weighting mechanism can be used in connection with the above mentioned matching.
  • the bits representing the peaks closest to the pupillary region (the pupil borders) are the most reliable/distinct data points and may be weighted higher as they represent more accurate data. All unknown bits, whether present in the template to be matched or in the stored templates, are weighted zero in the matching. This may be done using any reasonable technique.
  • the bit positions corresponding to unknown bits of one of the two templates are always filled in with bits that match the corresponding bits of the other template.
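The masked, weighted comparison of step 330 can be sketched as follows. The normalization by total active weight and the uniform default weights are assumptions; the patent specifies only that unknown bits are masked (weighted zero), that bits near the pupillary region may be weighted higher, and that lower scores indicate better matches.

```python
def masked_hamming(template_a, template_b, mask, weights=None):
    """Weighted Hamming distance between two bit-templates.

    `template_a` and `template_b` are equal-length bit strings; `mask`
    flags each position as significant (1) or unknown (0), so unknown
    bits contribute nothing. Optional per-bit `weights` let the more
    reliable bits near the pupil border count more. Returns a value in
    [0, 1]; lower means a better match.
    """
    if weights is None:
        weights = [1.0] * len(template_a)
    num = sum(w for a, b, m, w in zip(template_a, template_b, mask, weights)
              if m and a != b)
    den = sum(w for m, w in zip(mask, weights) if m)
    return num / den if den else 0.0
```

In use, the mask would be built from the third-value (e.g. 10) padding bits of both templates before comparison.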

Abstract

A method and computer product are presented for identifying a subject by biometric analysis of an eye. First, an image of the iris of a subject to be identified is acquired. Texture enhancements may be done to the image as desired, but are not necessary. Next, the iris image is radially segmented into a selected number of radial segments, for example 200 segments, each segment representing 1.8° of the iris scan. After segmenting, each radial segment is analyzed, and the peaks and valleys of color intensity are detected in the iris radial segment. These detected peaks and valleys are mathematically transformed into a data set used to construct a template. The template represents the subject's scanned and analyzed iris, being constructed of each transformed data set from each of the radial segments. After construction, this template may be stored in a database, or used for matching purposes if the subject is already registered in the database.

Description

INVARIANT RADIAL IRIS SEGMENTATION
Related Applications
[0001] This application is related to U.S. Non-Provisional Patent Application Serial No. 11/043,366, entitled "A 1D Polar Based Segmentation Approach," filed January 26, 2005. The disclosure of the related document is hereby fully incorporated by reference.
Statement of Government Interest
[0002] This invention was made with government support under Contract
No. F10801 EE5.2. The Government has certain rights in the invention.
Field of the Invention
[0003] The invention is directed towards biometric recognition, specifically to an improved approach to radial iris segmentation.
Background of the Invention
[0004] Biometrics is the study of automated methods for uniquely recognizing humans based upon one or more intrinsic physical or behavioral traits. In information technology, biometric authentication refers to technologies that measure and analyze human physical characteristics for authentication purposes. Examples of physical characteristics include fingerprints, eye retinas and irises, facial patterns and hand measurements.
[0005] A leading concern of existing biometric systems is that individual features that identify humans from others can be easily missed due to the lack of accurate acquisition of the biometric data, or due to deviations in operational conditions. Iris recognition has been seen as a low error, high success method of retrieving biometric data. However, iris scanning and image processing have been costly and time consuming. Fingerprinting, facial patterns and hand measurements have afforded cheaper, quicker solutions. [0006] During the past few years, iris recognition has matured sufficiently to allow it to compete economically with other biometric methods. However, inconsistency in the acquisition conditions of iris images has led to rejecting valid subjects or validating imposters, especially when the scan is done under uncontrolled environmental conditions.
[0007] In contrast, under controlled conditions, iris recognition has proven to be very effective. This is true because iris recognition systems rely on more distinct features than other biometric techniques such as facial patterns and hand measurements and therefore provide a reliable solution by offering a much more discriminating biometric data set.
[0008] Although prototype systems and techniques had been proposed in the early 1980s, it was not until research in the 1990s that autonomous iris recognition systems were developed. The concepts discovered in this research have since been implemented in field devices. The overall approach is based on the conversion of a raw iris image into a numerical code that can be easily manipulated. The robustness of this approach and the following alternative approaches rely heavily on accurate iris segmentation. Iris segmentation is the process of locating and isolating the iris from the other parts of the eye. Iris segmentation is essential to the system's use. Computing iris features requires a high quality segmentation process that focuses on the subject's iris and properly extracts its borders. Such an acquisition process is sensitive to the acquisition conditions and has proven to be a very challenging problem. Current systems try to maximize the segmentation accuracy by constraining the operation conditions. Constraints may be placed on the lighting levels, position of the scanned eye, and environmental temperature. These constraints can lead to a more accurate iris acquisition, but are not practical in all real time operations.
[0009] Significant progress has been made to mitigate this problem; however, these developments were mostly built around the original methodology, namely, circular/elliptical contour segmentation that has proven to be problematic under uncontrolled conditions. Other work introduces concepts which compete with the above discussed methodology, but still suffer similar issues with segmentation robustness under uncontrolled conditions.
[0010] Thus, it would be desirable to have an iris recognition technique that is well suited for iris-at-a-distance applications, i.e., a system operating under unconstrained conditions, which still provides an accurate, real-time result based on the collected biometric data.
Summary of the Invention
[0011] In accordance with the principles of the present invention, a new feature extraction technique is presented along with a new encoding scheme resulting in an improved biometric algorithm. This new extraction technique is based on a simplified polar segmentation (POSE). The new encoding scheme utilizes the new extraction technique to extract actual local iris features using a process with low computational load.
[0012] The encoding scheme does not rely on accurate segmentation of the outer bounds of the iris region, which is essential to prior art techniques. Rather, it relies on the identification of peaks and valleys in the iris (i.e., the noticeable points of change in color intensity in the iris). Advantageously, regardless of a chosen filter, the encoding scheme does not rely on the exact location of the occurrence of peaks detected in the iris, but rather relies on the magnitude of detected peaks relative to a referenced first peak. Since this algorithm does not rely on the exact location of pattern peaks/valleys, it does not require accurate segmentation of the outer boundary of the iris, which in turn eliminates the need for a normalization process.
[0013] The overall function of the present invention can be summarized as follows. First, the iris is preprocessed and then localized using an enhanced segmentation process based on a POSE approach, herein referred to as invariant radial POSE segmentation. During the segmentation process, all obscurant parts (i.e. pupil, eyelid, eyelashes, sclera and other non-essential parts of the eye) are dropped out of the analysis if the obscuration reaches the inner border of the iris. Lighting correction and contrast improvement are processed to compensate for differences in image lighting and reflective conditions. The captured iris image is unwrapped into several radial segments and each segment is analyzed to generate a one dimensional dataset representing the peak and/or valley data for that segment. The peak and/or valley data is one dimensional in the sense that peaks and/or valleys are ordered in accordance with their position along a straight line directed radially outward from the center of the iris. In one embodiment, the iris image is unwrapped into a one-dimensional polar representation of the iris signature, in which the data for only a single peak per radial segment is stored. In one implementation, the magnitude of the outermost peak from the pupil-iris border per segment is stored. In another implementation, the magnitude of the largest peak in the segment is stored. In another embodiment, the data for a plurality of peaks and/or valleys is stored per radial segment. In this embodiment, each peak and/or valley is recorded as a one bit value indicating its magnitude relative to another peak and/or valley in the segment, such as the immediately preceding peak/valley along the one dimensional direction. The data for all of the radial segments is concatenated into a template representing the data for the entire iris scan. That template can be compared to stored templates to find a match.
Brief Description of the Drawings
[0014] Figure 1 illustrates a scanned iris image based on existing techniques.
[0015] Figure 2a illustrates a scanned iris image utilizing the principles of the present invention.
[0016] Figure 2b illustrates the scanned iris image of figure 2a mapped into a one dimensional iris map.
[0017] Figure 3 illustrates a flow chart showing one embodiment of the present invention.
[0018] Figure 4a illustrates a mapping of the iris segmentation process according to the principles of the present invention.
[0019] Figure 4b illustrates an enhanced mapping of the iris scan according to principles of the present invention.
[0020] Figure 5a illustrates a first encoding scheme according to principles of the present invention.
[0021] Figure 5b illustrates a second encoding scheme according to principles of the present invention.
Detailed Description of the Invention
[0022] A leading concern of existing biometric systems is that individual features which identify humans from others can be easily missed due to the lack of accurate data acquisition or due to deviations in operational conditions. During the past few years, iris recognition has matured to a point that allows it to compete with more common biometric means, such as fingerprinting. However, inconsistencies in the acquisition conditions of iris images often lead to rejecting valid subjects or validating imposters, especially under uncontrolled operational environments, such as environments where the lighting is not closely controlled. In contrast, under controlled conditions, iris recognition has proven to be very effective. This is so because iris recognition systems rely on more distinct features than other common biometric means, providing a reliable solution by offering a more discriminating biometric.
[0023] Fig. 1 shows a scanned eye image with the borders identified according to conventional prior art segmentation techniques. Here, iris 105 is defined by outer iris border 110. However, outer iris border 110 is obstructed by the eyelid at 107 and a true border cannot be determined. The system must estimate the missing portion of the outer iris border 110. Computing iris features requires a high-quality segmentation process that focuses on the subject's iris and properly extracts its borders. Such a process is sensitive to the acquisition conditions and has proven to be a challenging problem (especially for uncooperative subjects captured at a distance). By constraining operational conditions, such as carefully controlling lighting and the position of a subject's eye, current systems attempt to resolve segmentation problems, but these approaches are not always practical.
[0024] The major downfall of these prior art techniques is that the system focuses on the outer border of the iris to normalize the iris scaling to allow for uniform matching. Due to many factors, including eyelids and eyelashes which may obscure the outer iris border, and lightly colored irises that may be difficult to distinguish from the sclera, the outer border may be impossible to accurately map, resulting in an incorrect segmentation of the subject's iris, which, in turn, negatively impacts the rest of the biometric recognition process. In addition, when applied to uncontrolled conditions, these segmentation techniques result in many errors. Such conditions may include subjects captured at various ranges from the acquisition device or subjects who may not have their eye directly aligned with the imaging equipment.
[0025] Figure 2A shows an eye image scanned similarly to that of Figure 1. In this figure, principles of the present invention are applied. This approach is based on a simplified polar segmentation (POSE) and a newer encoding scheme that does not rely on accurate segmentation of the outer boundary of the iris region. A detailed explanation of POSE can be found in the previously mentioned, related U.S. Non-Provisional Patent Application Serial No. 11/043,366, entitled "A 1D Polar Based Segmentation Approach". The present invention utilizes an enhanced POSE technique. This enhanced POSE technique, or invariant radial POSE, focuses on detecting the peaks and valleys of the iris, i.e., the significant discontinuities in color intensity between the pupil and the sclera within defined radial segments of the iris. In other words, a peak is a point where the color intensity on either side of that point (in the selected direction) is less than the color intensity at that point (and the discontinuity exceeds some predetermined threshold so as to prevent every minor discontinuity from being registered as a recorded peak). Likewise, a valley is a point where the color intensity on either side of that point in the selected direction is greater than the color intensity at that point (with the same qualifications).
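The peak/valley definition above amounts to a thresholded local-extremum test over a one-dimensional intensity profile. The following is a minimal sketch of such a detector; the function name and the way the threshold is applied are illustrative assumptions, not taken from the patent.

```python
def find_peaks_and_valleys(signal, threshold):
    """Return indices of peaks (local maxima) and valleys (local minima)
    whose difference from BOTH neighbors exceeds `threshold`, per the
    definition in paragraph [0025]."""
    peaks, valleys = [], []
    for i in range(1, len(signal) - 1):
        left, mid, right = signal[i - 1], signal[i], signal[i + 1]
        # Peak: greater than both neighbors by more than the threshold
        if mid - left > threshold and mid - right > threshold:
            peaks.append(i)
        # Valley: less than both neighbors by more than the threshold
        elif left - mid > threshold and right - mid > threshold:
            valleys.append(i)
    return peaks, valleys
```

Small discontinuities (the bump of height 1 in the usage below) fall under the threshold and are not registered, as the paragraph requires.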
[0026] This technique is referred to as one dimensional because, rather than collecting two dimensional image data per radial segment as in the prior art, the collected iris data per radial segment has only one signal dimension. This process eliminates the need to estimate an obstructed outer boundary of the iris, to segment the outer bound of the iris, or to calculate the exact parameters of circles, ellipses, or any other shapes needed to estimate a missing portion of the outer boundary.

[0027] Iris 205 is scanned utilizing the invariant radial POSE process.
Rather than concentrating on the outer border of the iris as the process in Figure 1 does, the invariant radial POSE process locates and identifies the peaks and valleys present in the scanned iris and creates an iris map. Figure 2A helps illustrate one form of iris map that can represent the peak and/or valley data in an iris scan. In Figure 2A, the data for only one peak is stored per radial segment. To construct an iris map in accordance with this embodiment of the invention, first the iris is segmented into a set number of radial segments, for example 200 segments. Thus, each segment represents a 1.8 degree slice of a complete 360 degree scan of the iris. After each of the 200 segments is analyzed, the data for one characteristic peak in the segment is stored. In the embodiment illustrated in Figures 2A and 2B, the peak selected for representation in each radial segment is the peak 210 that is outermost from the pupil-iris border. In alternative embodiments, the selected peak may be the greatest peak (other than the peak at the pupil-iris border), the sharpest peak, or the innermost peak. If the criterion is the outermost peak, it is preferable to use the outermost peak within a predefined distance of the pupil-iris border since, as one gets closer to the iris-sclera border, the peaks and valleys tend to become less distinct and, therefore, less reliable as a criterion for identifying subjects.
[0028] Alternately, the data corresponding to valleys instead of peaks may be recorded. In fact, the recorded data need not necessarily even be a peak or valley, but may be any other readily identifiable color or contrast characteristic. The distance from the center of the pupil of whichever peak or valley (or other characteristic) is selected for representation is stored. In a preferred embodiment, the radial distance is reported as a relative value relative to the radial distance of a reference peak from the center of the pupil. In this manner, it does not require a normalization procedure of the iris scan in order to compensate for changes to the iris due to environmental conditions (e.g., pupil dilation, ambient light). In a preferred embodiment of the invention, the reference peak is the peak at the pupil-iris border in that segment, which usually, if not always, will be the greatest peak in the segment.
[0029] Figure 2B shows the scanned iris mapped into a one dimensional iris map. To construct this iris map, first the iris is segmented into a predetermined number of radial segments, for example 200 segments, each segment representing 1.8 degrees of a complete 360 degree scan of the iris. After each of the 200 segments is analyzed, a reference peak is selected in each segment, the reference peak being the peak at the pupil-iris border in the analyzed radial segment (which usually, if not always, will be the greatest peak in the segment). The iris is unwrapped to create the graph shown in Figure 2B, with each point 215 representing the aforementioned relative radial distance of the corresponding peak for each of the radial segments.
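The map construction just described can be sketched as follows. The code assumes that the per-segment peak radii (distances from the pupil center) and each segment's reference-peak radius at the pupil-iris border have already been extracted; the data layout and function name are illustrative, not from the patent.

```python
def build_iris_map(segment_peak_radii, reference_radii):
    """For each radial segment, store the radial position of the outermost
    peak RELATIVE to that segment's reference peak at the pupil-iris
    border, so no normalization for pupil dilation is needed.

    segment_peak_radii: one list per segment of peak distances from the
                        pupil center, innermost first.
    reference_radii:    per-segment radius of the pupil-iris border peak.
    """
    iris_map = []
    for peaks, ref in zip(segment_peak_radii, reference_radii):
        if not peaks:
            iris_map.append(None)  # segment obscured: no usable peak
            continue
        outermost = max(peaks)
        iris_map.append(outermost - ref)  # relative radial distance
    return iris_map
```

Plotting the returned values against segment index yields the "unwrapped" curve of points 215 in Figure 2B.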
[0030] For purposes of visualization, one may consider the conversion of the peak and valley data into the graph shown in Figure 2B to be an "unwrapping" of the iris about the normal of the pupil-iris border (i.e., perpendicular to the border). For example, the pupil-iris border is essentially a circular border. Imagine that the border is a string; unwrap that string into a straight line, with the reference peak from each radial segment represented as a discrete point 215, as shown in Figure 2B.
[0031] The preceding explanation is merely for the purposes of illustration in helping a person unskilled in the related arts appreciate the process viscerally. Those of skill in the related arts will understand that the conversion of the peaks and valleys information into a one dimensional dataset is actually a rather simple mathematical transformation.
[0032] Conditions such as lighting and temperature (which affect pupil dilation or contraction) may shift the entire curve 215 upwards or downwards, but this one dimensional iris representation will remain unchanged with respect to the relative locations of the reference peaks in each angular segment. While pupil dilation and other factors may affect the absolute locations of the peaks or valleys (i.e., their actual distances from the pupil border), they will not affect the locations of the peaks and valleys in the iris relative to the reference peaks (or valleys).
[0033] Figure 4A helps illustrate the formation of an alternative and more robust representation of the scanned iris image data in which the data for multiple peaks, rather than just one characteristic peak, is recorded per radial segment. The center of the pupil is indicated by cross 405. The horizontal or x-axis represents the radial distance from the pupil-iris border (i.e., perpendicular to the pupil-iris border), and the vertical or y-axis represents the derivative of the color intensity. The peak at the pupil-iris border is indicated at 411. All other peaks and valleys in the segment are represented graphically relative to the reference peak so that no data normalization will be necessary.
[0034] Note that each radial segment usually will be several pixels wide at the pupil border 410, and becomes wider as the distance from the pupil-iris border increases. Therefore, in order to generate the one dimensional data represented in the graph of Fig. 4A, the color intensity derivative data represented by the y-axis should be averaged or interpolated over the width of the segment. This representation of the interpolated data is shown in line 415, in which each significant data peak is marked by reference numeral 420.
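The averaging across a segment's width can be sketched as a simple column-wise mean. The data layout assumed here (one list of pixel derivative values per radial distance, holding however many pixels span the segment's width at that distance) is an illustrative assumption.

```python
def average_across_segment(segment_columns):
    """Collapse a radial segment's 2-D derivative data into a 1-D profile.

    segment_columns: one list per radial distance step, containing the
    pixel derivative values across the segment's width at that distance
    (inner lists grow longer farther from the pupil, per [0034])."""
    return [sum(row) / len(row) for row in segment_columns]
```

The resulting 1-D profile corresponds to line 415 in Figure 4A, on which peak detection is then performed.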
[0035] Figure 4B helps illustrate a further embodiment. Graph 425 shows a graphical representation of the iris, such as the one illustrated in Figure 4A. As with the Figure 4A embodiment, in the feature extraction step, preferably, each individual peak is isolated and recorded with respect to the reference peak. However, in addition, in order to focus solely on the peaks, enhancement curve 430 is removed from the one dimensional iris representation. Enhancement curve 430 is the component of the graph that can be removed without affecting the magnitude of each peak relative to the next peak, resulting in a normalized data set focusing solely on the relative magnitudes of the peaks. Using standard wavelet analysis well known to one of ordinary skill in the art, the enhancement curve can be calculated as the approximation component (DC component) of the decomposition of the graph of Figure 4A. Once the enhancement curve is removed, a segmented graph 435 results, where each peak is represented as a point 437 on the graph. With the removal of the enhancement curve, graph 425 is now normalized based on peak occurrence. As will be discussed in more detail below, in at least one embodiment of the invention, the peak data will be encoded very efficiently by encoding each peak relative to an adjacent peak using as few as one or two bits per peak. Accordingly, the removal of the enhancement curve simplifies the processing while preserving all needed information.
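As a rough sketch of the enhancement-curve removal, the slowly varying approximation (DC) component can be stood in for by a centered moving average rather than a full wavelet decomposition; the window size and this substitution are illustrative assumptions, not the patent's stated method.

```python
def remove_enhancement_curve(profile, window=5):
    """Estimate the slowly varying 'enhancement curve' of a 1-D profile
    with a centered moving average (a stand-in for the wavelet
    approximation component of [0035]) and subtract it, so that only
    the peak structure remains."""
    half = window // 2
    detrended = []
    for i, value in enumerate(profile):
        lo = max(0, i - half)
        hi = min(len(profile), i + half + 1)
        local_mean = sum(profile[lo:hi]) / (hi - lo)  # enhancement estimate
        detrended.append(value - local_mean)
    return detrended
```

A flat baseline maps to zero everywhere, while a sharp peak keeps most of its height relative to its surroundings, which is the property the encoding schemes below rely on.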
[0036] Figure 3 illustrates a flow chart showing an embodiment of the present invention.
[0037] In Step 305, a preprocessing step takes place. The preprocessing may be essentially conventional. In this step, texture enhancements are performed on the scanned image. Obscurant parts of the image, such as the pupil, eyelids, eyelashes, sclera and other non-essential parts of the eye, are dropped out of the analysis. In order to reduce the side effects of outside illumination, gray scale variations and other artifacts (e.g., colored contact lenses), the system preprocesses the image using a local radial texture pattern (LRTP). However, it should be noted that the texture enhancements are not essential to the operation of the system.
[0038] The image is preprocessed using local radial texture pattern similar to, but revised over that proposed in Y. Du, R. Ives, D. Etter, T. Welch, C-I. Chang, "A one-dimensional approach for iris identification", EE Dept, US Naval Academy, Annapolis, MD, 2004.
LRTP(x, y) = I(x, y) - (1/A) * Σ_{(x', y') ∈ ω} I(x', y')
where
I (x, y) = the color intensity of the pixel located at the two dimensional coordinate x, y;
ω = the curve that determines the neighboring points of the pixel x, y; and
A = the area (number of pixels) of ω.

[0039] This LRTP approach differs from that method in that it avoids the discontinuities due to the block analysis adopted in the aforementioned reference, while preserving an approximation of the true mean value by using the window mean instead. The mean of each window of small blocks constitutes a coarse estimate of the background illumination, and thus it is subtracted from the actual intensity values as shown in the equation above.
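The LRTP equation above can be sketched directly in code. The square window used here stands in for the neighborhood curve ω, and the window size is an illustrative assumption.

```python
def lrtp(image, half_window=2):
    """Local radial texture pattern: subtract the local window mean
    (a coarse background-illumination estimate) from each pixel,
    i.e. LRTP(x, y) = I(x, y) - (1/A) * sum over the window of I.

    image: 2-D list of gray-scale intensities."""
    rows, cols = len(image), len(image[0])
    out = [[0.0] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            y0, y1 = max(0, y - half_window), min(rows, y + half_window + 1)
            x0, x1 = max(0, x - half_window), min(cols, x + half_window + 1)
            area = (y1 - y0) * (x1 - x0)  # A in the equation
            window_sum = sum(image[yy][xx] for yy in range(y0, y1)
                             for xx in range(x0, x1))
            out[y][x] = image[y][x] - window_sum / area
    return out
```

Because the window slides pixel by pixel rather than jumping block by block, the output has no block-boundary discontinuities, which is the distinction paragraph [0039] draws against the referenced method.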
[0040] In Step 310, the invariant radial POSE segmentation process is performed. This approach differs from traditional techniques in that it does not require iris segmentation at the outer border of the iris, i.e., the iris-sclera border.
[0041] Particularly, the process first roughly determines the iris center in the original image, and then refines the center estimate and extracts the edges of the pupil. A technique for locating the center of the pupil is disclosed in the aforementioned U.S. Patent Application No. 11/043,366, incorporated herein by reference, and need not be discussed further. Techniques for locating the pupil-iris border also are disclosed in the aforementioned patent application and need not be discussed further.
[0042] Once the pupil edge has been found, the segmentation process begins. The radial scan of the iris is done in radial segments, e.g., 200 segments of 1.8 degrees each.
[0043] After the segmentation and scanning in Step 310, the process proceeds to Step 315. In Step 315, the actual feature extraction occurs based on the segmented image obtained in Step 310. The feature extraction process can be performed, for example, in accordance with any of the three embodiments previously described in connection with Figures 2A and B, 4A, and 4B, respectively, which detect changes in the graphical representation of the iris while not relying on the absolute location of the changes' occurrence. Particularly, the absolute locations change as a function of the natural dilation and contraction of the human iris when exposed to variations in environmental light conditions. Therefore, the feature extraction process relies on detecting the peak and valley relative variations in magnitude and their relative locations rather than focusing on their absolute magnitudes or locations. A key advantage of this approach is that it does not require a normalization procedure of the iris scan in order to compensate for changes to the iris due to environmental conditions. A normalization procedure of the iris scan is crucial to prior art iris recognition techniques.
[0044] Next, in Step 320, the resulting peak data represented in graph 435 is encoded into an encoded template so that it can later be efficiently compared with stored templates of iris data for known persons. Two encoding alternatives are discussed below in connection with Figures 5A and 5B, respectively. These two are shown only for example and are not meant to limit the scope of the present invention.
[0045] Figures 5A and 5B help illustrate the encoding of the peak/valley data set for one radial segment of a scanned iris in accordance with two embodiments of the invention, respectively. As will be discussed in further detail below, each template will comprise a plurality of such data sets, the number of such sets in a template being equal to the number of radial segments. Thus, for instance, if each segment is 1.8°, each template will comprise 200 such data sets.
[0046] Figure 5A illustrates a first encoding scheme which focuses on relative peak amplitude versus the amplitude of the immediately previous peak. Figure 5A illustrates encoding of the peak data for a single radial segment and shows a data set for that segment. Each data set comprises IxK bits, where K is a number of peaks per radial segment for which we wish to record data and I is the number of bits used to encode each peak. K may be any reasonable number and should be selected to be close to the typical number of peaks expected in a radial segment. In Figure 5A, K = 8.
[0047] In this encoding scheme, the first I bits of every data set represent the selected reference peak (the pupil-iris border) and are always set to a first value, e.g., 11, where I = 2. As one moves from left to right within the data set, the bits represent peaks that are farther radially outward from the pupil-iris border (i.e., along the x-axis in Figures 2B, 4A, and 4B, which represents distance from the pupil-iris border). If the magnitude of a peak is greater than the magnitude of the previous peak in a graph such as graph 435, the bits representing that peak are set to 11. Otherwise, the bits are set to a second value, e.g., 00. Therefore, the second I bits are essentially guaranteed to be 00 since, in this example, the reference peak is essentially guaranteed to have the greatest magnitude in the segment, and will, thus, always be larger than the next peak. Therefore, in this encoding scheme, the first four bits of each data set are irrelevant to and will not be considered during matching since they will always be identical, namely 1100. In cases where the radial segment does not have at least K peaks, the end of the data set is filled with one or more bit sets of a third value, e.g., 10 or 01, that will eventually be masked in the matching step 330. In the case where the radial segment has more than K peaks, only the K peaks closest to the pupil-iris border are encoded.
[0048] Thus, referring to the iris segment shown in the first, left-hand graph of Figure 5A, the sequence representing the peak/valley information for this segment of the iris is 1100110011001010. Particularly, the first two bits represent the magnitude of the reference peak 501 and are always 11; the second two bits represent the magnitude of the first peak 503 in the segment and are essentially guaranteed to be 00 because it will always be smaller than the reference peak; the fifth and sixth bits are 11 because the next peak 505 is greater than the preceding peak 503; the seventh and eighth bits are 00 because the next peak 507 is less than the immediately preceding peak 505; the ninth and tenth bits are 11 because the next peak 509 is greater than the preceding peak 507; the eleventh and twelfth bits are 00 because the next peak 511 is less than the immediately preceding peak 509; and the last four bits are 1010, corresponding to two sets of unknowns, because this segment has only five peaks (and the reference peak is the sixth peak represented in the data set).
[0049] As another example, referring to the iris segment shown in the second, right-hand graph of Figure 5A, the sequence representing the peak/valley information for this segment of the iris is 1100000011101010, since the first two bits represent the magnitude of the reference peak 501 and are always 11; the next two bits represent the magnitude of the first peak in the segment 513 and are 00 because it is smaller than the reference peak; the next two bits are 00 because the next peak 515 is less than the preceding peak 513; the next two bits are 00 because the next peak 517 is less than the immediately preceding peak 515; the next two bits are 11 because the next peak 519 is greater than the preceding peak 517; and the last six bits are 101010 because this segment has only five peaks (including the reference peak).
[0050] Figure 5B illustrates a second exemplary encoding scheme according to principles of the present invention. This second encoding scheme also is based on a 2-bit quantization of the magnitude of the filtered peaks, but the peak magnitudes are quantized into three magnitude levels, namely, Low (L), High (H), and Medium (M). The Low level magnitude L is assigned the 2-bit pattern 00, and the High level magnitude H is assigned the 2-bit pattern 11. To account for quantization errors, the levels are structured so that moving from one quantization level to an adjacent level changes only a single bit. Per this constraint, the scheme has two combinations to represent the medium level, i.e., Ml = 10 and Mr = 01. Ml represents a case where the valley to the left of the corresponding peak is lower than the valley to the right of the peak, and Mr represents a peak where the valley to the right is lower than the valley to the left. Peak 520, for example, would be labeled Ml because the valley to its left is lower than the valley to its right. Peak 521, on the other hand, is an example of a peak that would be labeled Mr because the valley to its left is higher than the valley to its right. However, the two resulting values for the Medium level magnitude are treated as equivalent in the matching process. Again, filler bits are used to complete the bit-vector to reach a predefined vector length, e.g., 10 bits, if there are not enough peaks to fill the data set. Although not shown in the figure, the bits corresponding to unknown peaks may be identified by any reasonable means, such as appending a flag to the end of the data set indicating the number of bits that correspond to unknown peaks. Alternately, the levels may be encoded with a three bit quantization in order to provide additional bit combinations for representing unknowns. Even further, only one value, e.g., 10, can be assigned for the Medium level, which will leave the two-bit combination 01 for representing unknowns.
The unknown bits will be masked during matching, as discussed below. Likewise, if the number of peaks in a radial segment exceeds the number of peaks needed to fill the data set, then the peaks farthest from the pupil are dropped.
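A sketch of the Figure 5B quantization for a single peak follows. The Low/High magnitude thresholds are illustrative assumptions, since the patent does not fix their values; only the bit assignments (00, 11, Ml = 10, Mr = 01) come from paragraph [0050].

```python
def quantize_peak(magnitude, left_valley, right_valley, low_t, high_t):
    """Quantize one filtered peak per the Figure 5B scheme.

    Low -> 00, High -> 11; Medium splits on which adjacent valley is
    lower: Ml = 10 (left valley lower) and Mr = 01 (right valley lower).
    Both Medium codes differ from 00 and from 11 by exactly one bit,
    giving the one-bit tolerance between adjacent quantization levels."""
    if magnitude < low_t:
        return "00"                 # Low
    if magnitude > high_t:
        return "11"                 # High
    return "10" if left_valley < right_valley else "01"  # Ml / Mr
```

During matching, 10 and 01 would be treated as the same Medium level, as paragraph [0050] specifies.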
[0051] Next, in Step 325, a template is constructed by concatenating all of the data sets corresponding to all of the radial segments in the iris scan. Thus, for example, if there are 200 radial segments and the number of bits used for each data set in the encoding scheme to represent the detected peaks is 16 bits, all encoded binary strings are concatenated into a template of 16 x 200 = 3,200 bits.
[0052] Once the data is encoded, the process continues to Step 330. The process determines whether a scanned iris template matches a stored iris template by comparing the similarity between the corresponding bit-templates. A weighted Hamming distance can be used as a metric for recognition to execute the bit-wise comparisons. The comparison algorithm can incorporate a noise mask to mask out the unknown bits so that only significant bits are used in calculating the information measure distance (e.g. Hamming distance). The algorithm reports a value based on the comparison. A higher value reflects fewer similarities in the templates. Therefore, the lowest value is considered to be the best matching score of two templates.
[0053] To account for rotational inconsistencies and imaging misalignment, when the information measure of two templates is calculated, one template is shifted left and right bit-wise (along the angular axis) and a number of information measure distance values are calculated from successive shifts. This bit-wise shifting in the angular direction corresponds to rotation of the original iris region by an angular resolution unit. From the calculated information measure distances, only the lowest value is considered to be the best matching score of two templates.
[0054] A weighting mechanism can be used in connection with the above mentioned matching. The bits representing the peaks closest to the pupillary region (the pupil borders) are the most reliable/distinct data points and may be weighted higher as they represent more accurate data. All unknown bits, whether present in the template to be matched or in the stored templates, are weighted zero in the matching. This may be done using any reasonable technique. In one embodiment, when two templates are being compared, the bit positions corresponding to unknown bits of one of the two templates are always filled in with bits that match the corresponding bits of the other template.
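The masked, shift-tolerant matching of paragraphs [0052]-[0054] can be sketched as follows. Here unknown bits are weighted zero via a binary mask, and the shift range, mask format, and function names are illustrative assumptions (the sketch omits the optional higher weighting of peaks near the pupillary region).

```python
def masked_hamming(a, b, mask):
    """Fraction of unmasked positions at which bit strings a and b differ;
    mask bit '1' marks a significant position, '0' an unknown one."""
    used = [(x, y) for x, y, m in zip(a, b, mask) if m == "1"]
    if not used:
        return 1.0  # nothing comparable: treat as a complete mismatch
    return sum(x != y for x, y in used) / len(used)

def best_match_score(template, stored, mask, bits_per_segment, max_shift=2):
    """Compare two templates while rotating one by whole radial segments
    (bit-wise shifts along the angular axis); the lowest Hamming
    distance over all shifts is the matching score."""
    n = len(template)
    best = 1.0
    for s in range(-max_shift, max_shift + 1):
        shift = (s * bits_per_segment) % n
        rotated = template[shift:] + template[:shift]
        best = min(best, masked_hamming(rotated, stored, mask))
    return best
```

A rotated copy of a template still scores a perfect match, because one of the trial shifts undoes the rotation; lower scores indicate better matches, per paragraph [0052].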
[0055] While the above described embodiments rely on the detection and analysis of a peak in the iris, this is merely shown as an example. Other embodiments can rely on the detection of valleys in the iris, or any other noticeable feature in the iris.
[0056] It should be clear to persons familiar with the related arts that the process, procedures and/or steps of the invention described herein can be performed by a programmed computing device running software designed to cause the computing device to perform the processes, procedures and/or steps described herein. These processes, procedures and/or steps also could be performed by other forms of circuitry including, but not limited to, application- specific integrated circuits, logic circuits, and state machines.
[0057] Having thus described a particular embodiment of the invention, various alterations, modifications, and improvements will readily occur to those skilled in the art. Such alterations, modifications, and improvements as are made obvious by this disclosure are intended to be part of this description though not expressly stated herein, and are intended to be within the spirit and scope of the invention. Accordingly, the foregoing description is by way of example only, and not limiting. The invention is limited only as defined in the following claims and equivalents thereto.

Claims

1. A method of identifying a subject by biometric analysis of the iris of an eye, the method comprising the steps of:
(1) acquiring an image of an iris of the subject;
(2) radially segmenting the iris image into a plurality of radial segments;
(3) for each radial segment, determining data for a predetermined one dimensional feature within said segment relative to a reference value of said feature within said image; and
(4) constructing a template for said subject comprising said data set for each of said radial segments.
2. The method of claim 1 wherein step (2) comprises the steps of:
(2.1) determining a center of a pupil of said subject; and
(2.2) determining a pupil-iris border in said image; and
(2.3) radially segmenting said iris into a plurality of radial segments of equal angular size.
3. The method of claim 2 wherein said feature is peaks of color intensity and wherein said data comprises a distance from said pupil-iris border of a one of said peaks that is the farthest from said pupil-iris border within a predetermined distance of said pupil-iris border.
4. The method of claim 2, wherein said one-dimensional data comprises a distance from said pupil-iris border of a largest one of said peaks in said radial segment.
5. The method of claim 2 wherein said feature comprises peaks of color intensity and wherein said one dimensional data comprises the relative magnitudes of said peaks and their relative locations along a direction radially outward from said center of said pupil.
6. The method of claim 5 wherein said relative magnitudes are determined by interpolating data across a width of each said radial segment.
7. The method of claim 5 wherein said magnitudes are determined by averaging data across a width of each radial segment.
8. The method of claim 5 wherein said reference value is a value of a one of said peaks corresponding to color intensity at said pupil-iris border.
9. The method of claim 8 wherein step (3) includes the step of removing a decomposition curve from said detected peaks and valleys.
10. The method of claim 5 further comprising the step of: (5) encoding said data into a data set.
11. The method of claim 10 wherein, in step (5), a first predetermined number of bits are used to represent data of each peak in said radial segment and each said data set comprises a second predetermined number of bits corresponding to a predetermined number of peaks that can be encoded in said data set, and: if the number of peaks in said radial segment is less than said predetermined number of peaks that can be encoded, filling said data set with bits indicating an unknown peak; and if the number of peaks in said radial segment is greater than said predetermined number of peaks that can be encoded, encoding a subset of said peaks in said radial segment.
13. The method of claim 12 wherein said subset of peaks comprises said peaks in said radial segment that are closest to a pupil of said subject.
14. The method of claim 12 wherein said subset of peaks comprises the largest peaks detected in said radial segment.
15. The method of claim 10 further comprising the step of:
(6) comparing said subject's template to at least one stored template to determine if said subject's template matches said at least one stored template.
16. The method of claim 15 wherein step (6) further comprises the step of weighting each encoded data set such that bits corresponding to peaks closer to said subject's pupil are more heavily weighted than bits farther from said subject's pupil.
17. The method of claim 10 wherein step (6) comprises: determining an order of said peaks in said radial segment; assigning a first value to each peak having a magnitude that is greater than a preceding peak in said order; assigning a second value to each peak having a magnitude that is lesser than a preceding peak in said order; and placing said values in said data set in accordance with said order.
18. The method of claim 17, wherein said reference value is a value of a one of said peaks corresponding to color intensity at said pupil-iris border and wherein said peaks are ordered in accordance with their distance from the subject's pupil.
19. The method of claim 10, wherein said encoding comprises encoding each peak in said dataset as a two bit sequence wherein a first two bit sequence represents a peak with a high magnitude, a second two bit sequence represents a peak with a low magnitude, and third and fourth two bit sequences both represent a peak with a medium magnitude.
20. The method of claim 19, wherein said encoding is performed with one bit error tolerance.
21. The method of claim 1 wherein step (1) comprises preprocessing said image by performing texture enhancement on said image and dropping out parts of said image that obscure said iris.
22. A computer program product embodied on a computer readable medium for identifying a subject by biometric analysis of the iris of an eye, the product comprising: first computer executable instructions for acquiring an image of an iris of the subject; second computer executable instructions for radially segmenting the iris image into a plurality of radial segments; third computer executable instructions for determining data for a predetermined one dimensional feature within said segment relative to a reference value of said feature within said image; and fourth computer executable instructions for constructing and storing a template for said subject comprising said data set for each of said radial segments.
23. The product of claim 22 wherein said second computer executable instructions comprise: instructions for determining a center of a pupil of said subject; and instructions for determining a pupil-iris border in said image; and instructions for radially segmenting said iris into a plurality of radial segments of equal angular size.
24. The product of claim 23 wherein said feature is peaks of color intensity and wherein said data comprises a distance from said pupil-iris border of a one of said peaks that is the farthest from said pupil-iris border within a predetermined distance of said pupil-iris border.
25. The product of claim 23 wherein said one-dimensional data comprises a distance from said pupil-iris border of a largest one of said peaks in said radial segment.
26. The product of claim 23 wherein said feature comprises peaks of color intensity and wherein said one dimensional data comprises the relative magnitudes of said peaks and their relative locations along a direction radially outward from said center of said pupil.
27. The product of claim 26 wherein said relative magnitudes are determined by interpolating data across a width of each said radial segment.
28. The product of claim 26 wherein said magnitudes are determined by averaging data across a width of each radial segment.
29. The product of claim 26 wherein said reference value is a value of a one of said peaks corresponding to color intensity at said pupil-iris border.
30. The product of claim 29 wherein said third computer executable instructions include instructions for removing a decomposition curve from said detected peaks and valleys.
31. The product of claim 30 further comprising: fifth computer executable instructions for encoding said data sets.
32. The product of claim 31 wherein, in said fifth computer executable instructions, a first predetermined number of bits are used to represent data of each peak in said radial segment and each said data set comprises a second predetermined number of bits corresponding to a predetermined number of peaks that can be encoded in said data set, and: if the number of peaks in said radial segment is less than said predetermined number of peaks that can be encoded, filling said data set with bits indicating an unknown peak; and if the number of peaks in said radial segment is greater than said predetermined number of peaks that can be encoded, encoding a subset of said peaks in said radial segment.
33. The product of claim 32 wherein said subset of peaks comprises the peaks in said radial segment that are closest to a pupil of said subject.
34. The product of claim 32 wherein said subset of peaks comprises the largest peaks detected in said radial segment.
35. The product of claim 32 further comprising: sixth computer executable instructions for comparing said subject's template to at least one stored template to determine if said subject's template matches said at least one stored template.
36. The product of claim 35 wherein said sixth computer executable instructions further comprise instructions for weighting each encoded data set such that bits corresponding to peaks closer to said subject's pupil are more heavily weighted than bits farther from said subject's pupil.
37. The product of claim 32 wherein said sixth computer executable instructions comprise: instructions for determining an order of said peaks in said radial segment; instructions for assigning a first value to each peak having a magnitude that is greater than a preceding peak in said order; instructions for assigning a second value to each peak having a magnitude that is lesser than a preceding peak in said order; and instructions for placing said values in said data set in accordance with said order.
38. The product of claim 37, wherein said reference value is a value of a one of said peaks corresponding to color intensity at said pupil-iris border and wherein said peaks are ordered in accordance with their distance from the subject's pupil.
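The peak-based encoding and matching recited in claims 10 through 20 can be sketched in a few lines of Python. This is an illustrative sketch only, not the patented implementation: the magnitude thresholds, the four-peak capacity, the weight vector, and the names `encode_segment` and `match` are assumptions introduced for demonstration.

```python
# Two-bit peak codes in the spirit of claims 19-20: (1, 1) = high peak,
# (0, 0) = low peak, and both middle codes (0, 1)/(1, 0) denote a medium
# peak, which yields the one-bit error tolerance of claim 20. None marks
# an unknown/absent peak (the padding of claim 11).
HIGH, LOW, MED_A, MED_B = (1, 1), (0, 0), (0, 1), (1, 0)

def encode_segment(peak_magnitudes, max_peaks=4):
    """Encode one radial segment's peaks, ordered by distance from the
    pupil (claim 18); magnitudes are relative to the pupil-iris border."""
    bits = []
    for m in peak_magnitudes[:max_peaks]:   # keep peaks closest to the pupil (claim 13)
        if m > 0.66:                        # hypothetical thresholds
            bits.append(HIGH)
        elif m < 0.33:
            bits.append(LOW)
        else:
            bits.append(MED_A)              # either medium code is acceptable
    bits += [None] * (max_peaks - len(bits))  # pad with "unknown peak" markers (claim 11)
    return bits

def match(template_a, template_b, weights):
    """Weighted template comparison in the spirit of claim 16: peaks
    nearer the pupil carry larger weights."""
    score = total = 0.0
    for seg_a, seg_b in zip(template_a, template_b):
        for pa, pb, w in zip(seg_a, seg_b, weights):
            if pa is None or pb is None:    # unknown peaks do not vote
                continue
            total += w
            # the two medium codes are interchangeable: one-bit tolerance
            if pa == pb or {pa, pb} == {MED_A, MED_B}:
                score += w
    return score / total if total else 0.0
```

Under this sketch, two templates whose segments carry a high, a medium, and a low peak in the same radial order score 1.0 even though the raw magnitudes differ, which is the translation-invariance the relative encoding is after.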

Priority Applications (3)

Application Number Priority Date Filing Date Title
KR1020087022043A KR101423153B1 (en) 2006-03-03 2007-03-01 Invariant radial iris segmentation
JP2008558461A JP4805359B2 (en) 2006-03-03 2007-03-01 Invariant radial iris segmentation
GB0815933A GB2450027B (en) 2006-03-03 2007-03-01 Invariant radial iris segmentation

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US77877006P 2006-03-03 2006-03-03
US60/778,770 2006-03-03
US11/372,854 US8442276B2 (en) 2006-03-03 2006-03-10 Invariant radial iris segmentation
US11/372,854 2006-03-10
US11/382,373 2006-05-09
US11/382,373 US8064647B2 (en) 2006-03-03 2006-05-09 System for iris detection tracking and recognition at a distance

Publications (2)

Publication Number Publication Date
WO2007103698A2 true WO2007103698A2 (en) 2007-09-13
WO2007103698A3 WO2007103698A3 (en) 2007-11-22

Family

ID=38353648

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2007/063019 WO2007103698A2 (en) 2006-03-03 2007-03-01 Invariant radial iris segmentation

Country Status (4)

Country Link
JP (1) JP4805359B2 (en)
KR (1) KR101423153B1 (en)
GB (1) GB2450027B (en)
WO (1) WO2007103698A2 (en)


Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8873810B2 (en) 2009-03-02 2014-10-28 Honeywell International Inc. Feature-based method and system for blur estimation in eye images
GB2468380B (en) * 2009-03-02 2011-05-04 Honeywell Int Inc A feature-based method and system for blur estimation in eye images
US8948467B2 (en) * 2010-08-06 2015-02-03 Honeywell International Inc. Ocular and iris processing system and method
KR101601564B1 (en) * 2014-12-30 2016-03-09 가톨릭대학교 산학협력단 Face detection method using circle blocking of face and apparatus thereof

Citations (2)

Publication number Priority date Publication date Assignee Title
WO2000062239A1 (en) * 1999-04-09 2000-10-19 Iritech Inc. Iris identification system and method of identifying a person through iris recognition
US20050207614A1 (en) * 2004-03-22 2005-09-22 Microsoft Corporation Iris-based biometric identification

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
JP2001195594A (en) * 1999-04-09 2001-07-19 Iritech Inc Iris identifying system and method of identifying person by iris recognition
WO2004090814A1 (en) * 2003-04-02 2004-10-21 Matsushita Electric Industrial Co. Ltd. Image processing method, image processor, photographing apparatus, image output unit and iris verify unit

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
WO2000062239A1 (en) * 1999-04-09 2000-10-19 Iritech Inc. Iris identification system and method of identifying a person through iris recognition
US20050207614A1 (en) * 2004-03-22 2005-09-22 Microsoft Corporation Iris-based biometric identification

Non-Patent Citations (1)

Title
Ma, L. et al., "Local intensity variation analysis for iris recognition," Pattern Recognition, Elsevier, Kidlington, GB, vol. 37, no. 6, June 2004, pages 1287-1298, XP004505327, ISSN: 0031-3203 *

Cited By (4)

Publication number Priority date Publication date Assignee Title
FR2979727A1 (en) * 2011-09-06 2013-03-08 Morpho IDENTIFICATION BY RECOGNITION OF IRIS
WO2013034654A1 (en) * 2011-09-06 2013-03-14 Morpho Identification by iris recognition
CN103843009A (en) * 2011-09-06 2014-06-04 茂福公司 Identification by iris recognition
US9183440B2 (en) 2011-09-06 2015-11-10 Morpho Identification by iris recognition

Also Published As

Publication number Publication date
GB0815933D0 (en) 2008-10-08
KR101423153B1 (en) 2014-07-25
WO2007103698A3 (en) 2007-11-22
GB2450027A (en) 2008-12-10
KR20080100256A (en) 2008-11-14
GB2450027B (en) 2011-05-18
JP4805359B2 (en) 2011-11-02
JP2009529195A (en) 2009-08-13


Legal Events

121 Ep: the epo has been informed by wipo that ep was designated in this application
ENP Entry into the national phase (ref document number: 0815933; country of ref document: GB; kind code of ref document: A; free format text: PCT FILING DATE = 20070301)
WWE Wipo information: entry into national phase (ref document number: 0815933.7; country of ref document: GB)
WWE Wipo information: entry into national phase (ref document number: 2008558461; country of ref document: JP)
NENP Non-entry into the national phase (ref country code: DE)
WWE Wipo information: entry into national phase (ref document number: 1020087022043; country of ref document: KR)
122 Ep: pct application non-entry in european phase (ref document number: 07757674; country of ref document: EP; kind code of ref document: A2)