WO2002007068A1 - An authentication device for forming an image of at least a partial area of an eye retina

An authentication device for forming an image of at least a partial area of an eye retina

Info

Publication number
WO2002007068A1
Authority
WO
WIPO (PCT)
Application number
PCT/BE2001/000118
Other languages
French (fr)
Inventor
D. Beghuin
P. Chevalier
D. Devenyn
J.-M. Wislez
Original Assignee
Creative Photonics N.V.
Application filed by Creative Photonics N.V.
Priority to AU2001275610A
Publication of WO2002007068A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/18 Eye characteristics, e.g. of the iris
    • G06V 40/19 Sensors therefor

Definitions

Figure 4 illustrates a first embodiment of such positioning means. The latter comprise a series of second light sources 30-1, 30-2, provided for producing second light beams 31 of visible light. The second light beams are preferably formed by collimated beams which are obtained by using small-sized field stops 33 and collimating optics 36. The second light sources may also comprise an aperture stop 32, provided to adjust the diameter of the collimated bundle. The field stop 33 is also suitable for controlling the size or shape of a target, such as for example a cross, which is incorporated into the second light source and has to be observed by the user. The second light sources 30 are preferably positioned in a circle having its centre on the optical axis 18. Due to this circular set-up a cone-shaped second light beam is formed by the different second light sources. The second light beams 31 intersect at a position 35, which coincides with the first point 12 on which the first light beam is focused. At this position 35 all the light of the second beams is concentrated in a disc having a size of at most the minimal eye pupil opening. Only if the user positions his eye substantially at this position 35 will he see a ring comprising all the beams produced by the second light sources 30 and their respective targets. The user thus has to move his head and his eye until he sees all second light beams simultaneously. Only then will his eye position substantially correspond with that of the first point 12 and will his retina be adequately illuminated by the first source 1.
The second light beams need not necessarily be formed by discrete light sources. A continuous ring of light, as illustrated in figure 5, can also be used. A light emitting ring 40, preferably formed by a Light Emitting Polymer (LEP) or a light guide, is applied around the optical axis 18. The user sees the LEP through an aperture 42 and preferably one or more baffles 41 in order to limit the eye positions from which the ring can be seen. Collimating optics 43 can be added in front of the ring of light assembly to offer a sharper view of the ring to the user.
Figure 6 shows an alternative embodiment of the eye positioning means. In this embodiment a 2D image slide 50 is incorporated into the second light beam. A lens 36, placed before this slide 50, collimates the second light beam, which has a limited spatial extent due to the aperture stop 54 applied adjacent the second light source 30. The light collimated by lens 36 passes through the slide carrying the image to be displayed, in order to pick up the latter, and reaches the second beam splitter. The position of the lens 36 and the aperture stop 54 need to be carefully chosen in order to produce an image of the aperture stop 54, via the eyepiece 11, at the entrance pupil 13. The user then needs to adjust his position so that he can see all beams 56 originating from image 50, which intersect at position 35, corresponding to the first point. At too small or too large distances, only the central part of the displayed image is visible. When the user's eye is displaced in a lateral direction, one image side will disappear.
Figure 7 shows a further embodiment where one or more 2D images are presented to the user. The second light source 30 illuminates the 2D image slide 50 via a diffuser 51. The second light source is for example formed by an LED or an LEP. The slide 50 need not be transparent and illuminated by a separate light source if it is luminescent itself. Slide collimating optics 36 are placed subsequent to the slide and are followed by an aperture stop 32, when considered in the direction of the outgoing second light beam. The aperture stop 32 enables the light from a given part of the slide to follow a predetermined path through the optics. The second light beam 31 is then incident on a second or further beam splitter 49 in order to be injected into the illumination and imaging channel. The second light beams intersect at the position 52, which substantially coincides with the first point. The full image can only be seen by the user when all the rays of the second light beams enter the eye pupil 13, thus when the latter is correctly positioned at this position 52. The image plane 53 of the slide 50 should coincide with the intermediate image plane 19. In such a manner, when the user sees a sharp image, his retina is sharply projected on the image sensor 8. In order for the intersection point 52 to be at the correct position, the aperture stop 32 of the image optics should match the intended pupil position 13, and thus the aperture stop 23 in the imaging channel.
The combination of the eye positioning means with the imaging optics can be done in several ways: without coupling, with partial coupling, or with a more intimate coupling. If the positioning and imaging systems are totally independent from each other, full-featured positioning optics can be mounted around the imaging optics. This is shown in figure 4. Figure 5 likewise shows that there is space for the imaging and illumination subsystems in the centre of the positioning subsystem, if the optional collimating optics are not included or are hollow in the centre. For multiple targets or a "ring of light" it is also possible to have a partial integration of the subsystems. This is shown for the "ring of light" in figure 5, where the collimating optics 43 are in fact the eyepiece 11 of the device. In order for the positioning target (discrete targets or "ring of light") to be observed sharply through the eyepiece, the positioning target needs to be in the intermediate image plane (see 19 on figure 2) of the imaging system, where an intermediate image of the retina is formed by the eyepiece. It is important to have the targets mounted at fields outside the field of interest for the retina imaging, in order to avoid shadows formed by the target on the retina image.
Since the first light source is an infrared source, the visible light of the positioning subsystem can be kept away from the image sensor. This can be done with a filter at some stage between the eye positioning optics and the imaging sensor, which filter only accepts near infrared light, or with the beam splitter 49 being wavelength selective, reflecting visible light but transmitting infrared light. Configurations with an infrared reflecting and visible transmitting beam splitter can of course also be envisaged. The use of a wavelength selective beam splitter yields a better transmission in the imaging optics channel than the alternatives with regular beam splitters and filters.
Figure 8 shows an alternative embodiment of the positioning means where use is made of a microlens array. The second light source 65 is placed in the focal plane of a microlens array 66. The second light source 65 can be formed by a diffuse slide illuminated by a source 67. The slide 50 is placed close to the microlens array. A repetitive pattern 68 is introduced in the source, which pattern has a pitch 69 equal to that of the microlenses. The second light source can for example be realised as a red plane with a matrix of green dots 68 aligned with respect to the microlenses. The dot size is chosen in such a way that the image of one dot, as imaged by one microlens and the eyepiece 11, has the eye pupil dimension.
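As a numerical illustration of this dot-size condition (the dot, relayed by one microlens and the eyepiece, should roughly fill the pupil at the first point 12), here is a minimal sketch. The focal lengths and pupil diameter are assumptions chosen for the example, not values given in the patent.

```python
import math

# Illustrative sketch of the dot-size condition; all numbers are assumptions.
f_microlens_mm = 2.0      # assumed focal length of one microlens of array 66
f_eyepiece_mm = 30.0      # assumed focal length of the eyepiece 11
pupil_diameter_mm = 1.5   # smallest expected eye pupil

# A dot in the focal plane of a microlens is collimated by that microlens and
# refocused by the eyepiece at the pupil plane, so the relay magnification is
# approximately f_eyepiece / f_microlens.
magnification = f_eyepiece_mm / f_microlens_mm

# Dot size on the slide needed so that its image roughly fills the pupil.
dot_size_mm = pupil_diameter_mm / magnification
print(f"relay magnification ~ {magnification:.1f}x")
print(f"required dot size on the slide ~ {dot_size_mm * 1000:.0f} um")
```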
Another alternative for increasing the feedback to the user for a lateral displacement of his eye is to use wedges, as illustrated in figure 9. Wedges 70 with a small deflection angle are positioned close to the central part of the slide 50, which is illuminated by collimated second light beams 55. This collimated light is made by a collimating light source, for instance a light source 30 with a field stop 54 illuminating a collimator lens 36. The placement of a wedge serves to displace the image of the source 71 away from the nominal position 56. The size of the source is chosen small enough, and the angle deviation produced by the wedges is chosen in such a way, that the three aligned images of the stop (not deviated, through the left wedge and through the right wedge) can all enter the eye pupil together. Each wedge placed close to plane 50 deviates light in a particular direction. Four wedges can be used to produce the above described effect in four different directions. The number of wedges is not restricted, and even a continuous cone could be formed.
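To make the wedge geometry concrete, the following sketch checks that the undeviated and wedge-deviated source images can all enter the pupil together. The refractive index, wedge angle, eyepiece focal length and pupil size are illustrative assumptions, not values from the patent.

```python
import math

# Rough check of the wedge condition described above; all numbers are assumptions.
n_glass = 1.5             # assumed refractive index of the wedge material
wedge_angle_deg = 0.5     # assumed wedge apex angle
f_eyepiece_mm = 30.0      # assumed focal length of the eyepiece 11
source_image_mm = 0.3     # assumed size of the source image at the pupil
pupil_diameter_mm = 1.5   # smallest expected eye pupil

# Thin-wedge (small-angle) deviation: delta ~ (n - 1) * apex angle.
deviation_rad = (n_glass - 1.0) * math.radians(wedge_angle_deg)

# An angular deviation of the collimated beam becomes a lateral shift of the
# source image in the pupil plane of roughly f_eyepiece * tan(delta).
shift_mm = f_eyepiece_mm * math.tan(deviation_rad)

# The outermost deviated image must still fall inside the pupil.
fits = shift_mm + source_image_mm / 2 <= pupil_diameter_mm / 2
print(f"deviation ~ {math.degrees(deviation_rad):.2f} deg, "
      f"lateral shift at the pupil ~ {shift_mm:.2f} mm, "
      f"all images inside the pupil: {fits}")
```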
Alternatively, a few slides can be stacked, each bearing a target. These targets have to be aligned by the user by placing his eye in the correct lateral position. The intensity of the eye positioning targets can be made variable, dependent on the ambient light level; this can enhance the user comfort when using the device. The slide could also be formed by an imaging micro-display displaying still or moving video images.
Figure 10 illustrates an example of a retina 16. The latter comprises a central part, the fovea 82, used by a person to observe details, i.e. when a person stares at a given point, the image of that point will be imaged on the fovea. The retina further comprises a vein pattern 83 around the white spot 81 at the place where the optical nerve is connected to the eye. This spot is located at about 15.5° (84) from the fovea 82, considered in a substantially horizontal direction. For authentication using retina imaging it is important to choose a particular part of the retina area, which part will then be projected on the image sensor. For the selection of this retina part, the user is asked to stare at a given target, which will be imaged on the fovea. In such a manner the eye orientation will be fixed.
The efficiency of the eye fixation is increased by generating the fixation targets in a pulsation mode by using a pulsed light source, at frequencies less than 50 Hz and preferably between 4 and 12 Hz. The eye fixation target can be combined with one of the eye positioning means or can be independent. If the fixation target is on the optical axis of the imaging channel, the fovea spot and its surroundings will be imaged on the sensor. A disadvantage of using the area around the fovea, however, is that the blood veins there are much narrower than, for example, around the optical nerve, and thus much more difficult to observe. Therefore, fixation targets offset from the imaging axis 18 can be used. If the fixation target is at about 15.5° right (left) of the optical axis in the horizontal plane, the optical nerve will be in view when the user uses his left (right) eye.
When using the optical nerve, the device needs to know whether the user presents his left or right eye, in order to be able to offer the appropriate fixation target (otherwise the system would look at the wrong side of the fovea). A solution thereto is to use external proximity detectors on the device to "see" the position of the user's head and to deduce whether the left or right eye is offered. The detectors work for example with capacitive, ultrasonic, pyro-electric or opto-electronic sensors. Two or more detectors are placed symmetrically with respect to the vertical plane passing through the eyepiece. When the user has positioned his eye in front of the eyepiece, one sensor will be close to the face, while the other will be more distant; this asymmetry reveals which eye is presented, as sketched below. The detectors can also be used to activate the device from a stand-by mode.
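A minimal sketch of the left/right decision from two proximity readings follows. The sign convention (which sensor ends up closer for which eye) depends on the mechanical layout and would have to be calibrated on a real device; the mapping used here is an assumption for illustration only.

```python
# Hypothetical decision logic for two proximity detectors placed symmetrically
# left and right of the eyepiece.

def presented_eye(left_distance_mm: float, right_distance_mm: float,
                  min_asymmetry_mm: float = 10.0) -> str:
    """Return 'left', 'right' or 'unknown' from the two sensor readings."""
    asymmetry = left_distance_mm - right_distance_mm
    if abs(asymmetry) < min_asymmetry_mm:
        # Head roughly centred, or nobody in front of the device.
        return "unknown"
    # Assumed convention: when the user presents his right eye, the rest of his
    # head extends towards the device's left-hand sensor, which then reads the
    # shorter distance.
    return "right" if left_distance_mm < right_distance_mm else "left"

# Example: left sensor sees the face at 40 mm, right sensor at 120 mm.
print(presented_eye(40.0, 120.0))   # -> 'right' under the assumed convention
```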
Figure 11 shows a detailed embodiment of an autonomous fixation target 96 which is intended to be positioned at a given angle from the optical axis and which aims a collimated third light beam of visible light, produced by a third light source, directly at the eye. The latter comprises an LED 97 generating a bright illumination of the field stop 95 placed beyond the LED 97. The light crossing the field stop is collected by the lens 93, beyond which the aperture stop 92 is positioned.
Figure 12 illustrates an implementation of the fixation target using the eyepiece 11 of the illumination and imaging channel. The third light source is placed in the aerial image plane 19 of the retina. The fixation targets are visible light sources 98 with an aperture 95 in order to limit their spatial extent as seen by the user. The eyepiece 11 has to be designed for large field angles because the targets are viewed through it, i.e. at angles close to 15.5°. A disadvantage of this set-up is that the eye fixation targets can block light from the imaging or illumination channel.
Figure 13 illustrates an implementation of the fixation target using a beam splitter 72. This could be the first beam splitter 4 introduced for coupling the illumination and the imaging channel, or the second beam splitter 49 introduced for coupling the positioning and the imaging channel. The use of the second beam splitter for projection of the targets provides the possibility to project targets on or around the optical axis 18 without generating shadows on the retina image; the complete field in plane 19 can therefore be used for illumination and imaging purposes. The targets and field stop for eye fixation are disposed, as in figure 12, on the aerial retinal image plane or a plane matching the retinal image plane. If the first beam splitter is used for coupling the fixation targets to the illumination channel, and wavelength filtering is performed to enable only infrared light on the image sensor, the first beam splitter cannot be placed behind the second beam splitter, as seen from the user's side. This would inject visible light in that part of the device where only infrared light should be; consequently this light would not reach the eyepiece and the imaging channel would be disturbed. The eye fixation target is now in the centre of the positioning target, on the optical axis. It can be included as a feature in the slide of the positioning subsystem, or can be a light source in the image plane 19 on the optical axis 18.
Figure 14 illustrates a device with central viewing which can be used with either eye of the user. Off-axis imaging subsystems are mounted, each equipped with an illumination subsystem, exactly as was shown in figure 2. The eyepiece 11 is common to all optical subsystems. In figure 14 only the imaging optical path is shown in the upper half, while the lower half only shows the illumination optical path; of course, both are to be used in the same subsystem in order to have the device operational. The same approach of off-axis imaging and illumination can also be used with only one imaging/illumination subsystem. Care has to be taken that, when imaging other parts of the retina than the fovea, the part of the retina that is imaged is sensitive to rotation of the eye around the optical axis of the system (the system might then for instance look above or below the optical nerve). For this reason, the orientation of the head should be fixed. This can be done by ergonomic features of the housing, by a second dummy eyepiece or by using two parallel systems (with on-axis targets and off-axis illumination and imaging optics, as shown in figure 14).
The image sensor 8 is connected to image processing means (not shown) which are generally formed by a microprocessor and a memory. The image of the illuminated retina area is, after being recorded by the image sensor, transmitted to the processing means in order to be grabbed and to form a picture thereof. That picture is generally used for authentication purposes, which signifies that a comparison with stored patterns is required.
Figure 15 illustrates schematically, by means of a flow chart, the different operations performed for analysing an image of a retina part and generating biometric templates. The processing is started (100) once an image is formed on the image plane. The analogue image formed on the image plane is grabbed (101) by the processor and converted into a digital picture (102), for example by an A/D converter. The picture is then processed (103), whereby several operations could be performed, such as for example a check whether the picture comprises sufficient information for extracting the data necessary for authentication purposes, a verification that retina data is indeed available, or a picture sharpness or illumination intensity verification. The check could also include a verification that a sufficient part of the region of interest of the retina is imaged, that no artefacts are present in the picture, etc. If it is established that the picture does not comprise useful data (103 N), an error message is generated (104) and supplied to the operator. This error message may include a feedback message in order to adapt the image grabbing. After generation of an error message the process is restarted.
The different checks are for example realised by using grey scaling techniques. If the picture is accepted by the processor (103 Y), it is further improved (105), for example by using digital filters in order to reduce the noise, increase the contrast, sharpen the picture, eliminate artefacts, etc. It is also possible to combine different pictures and form an average picture. Besides highlighting the distinctive features, the processing could also suppress possible variable features in the eye or artefacts in the picture; in a retinal picture it is mainly the vascular pattern which is stable. This processing step could also comprise a selection of a region of interest, digital filtering and other picture processing (106). A sketch of such quality checks and enhancement is given below.
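The patent does not give concrete acceptance criteria for step 103 or concrete filters for step 105, so the sketch below uses generic grey-scale measures (mean level, contrast, Laplacian-based sharpness) and simple enhancement as stand-ins; the threshold values are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

# Illustrative picture checks (step 103) and enhancement (step 105).

def picture_is_usable(img: np.ndarray,
                      min_mean: float = 30.0,
                      min_contrast: float = 10.0,
                      min_sharpness: float = 5.0) -> bool:
    """Accept a grey-scale picture only if it is bright, contrasted and sharp enough."""
    img = img.astype(float)
    mean_level = img.mean()                 # illumination intensity check
    contrast = img.std()                    # global contrast check
    sharpness = ndimage.laplace(img).var()  # crude sharpness measure
    return (mean_level >= min_mean and
            contrast >= min_contrast and
            sharpness >= min_sharpness)

def enhance(img: np.ndarray) -> np.ndarray:
    """Simple noise reduction followed by contrast stretching."""
    img = ndimage.median_filter(img.astype(float), size=3)      # reduce noise
    lo, hi = np.percentile(img, (1, 99))
    return np.clip((img - lo) / max(hi - lo, 1e-6), 0.0, 1.0)   # stretch contrast

# Example on a synthetic picture standing in for a grabbed retina image.
rng = np.random.default_rng(0)
picture = rng.normal(120, 25, size=(256, 256))
if picture_is_usable(picture):
    picture = enhance(picture)
```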
The filtering is for example a quadruple convolution realised with the four kernels (kernel vertical, kernel horizontal, kernel diagonal 1, kernel diagonal 2) illustrated in table 1. From the four pictures obtained by these convolutions, a result picture is obtained by holding the maximum pixel value of the four pictures. More generally, kernels highlighting linear structures can be used. A binary picture is then generated by setting a value of one for the pixels greater than a predetermined threshold value and a value of zero for the pixel values less than the threshold. Kernel diagonal 2 is the vertical symmetry of kernel diagonal 1, and kernel horizontal is the transposition of the kernel vertical matrix. A sketch of this filtering step is given after this paragraph.
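The following sketch implements the quadruple convolution, maximum combination and thresholding described above. Since the actual kernel values of table 1 are not reproduced here, the 3x3 kernels below are illustrative line-highlighting kernels that merely respect the stated relations (the horizontal kernel is the transpose of the vertical one, diagonal 2 is the vertical mirror of diagonal 1); the threshold is likewise an assumption.

```python
import numpy as np
from scipy import ndimage

# Illustrative line-highlighting kernels; table 1 of the patent is not
# reproduced, so these values are assumptions that only respect the stated
# relations between the four kernels.
kernel_vertical = np.array([[-1, 2, -1],
                            [-1, 2, -1],
                            [-1, 2, -1]], dtype=float)
kernel_horizontal = kernel_vertical.T             # transposition of the vertical kernel
kernel_diagonal_1 = np.array([[ 2, -1, -1],
                              [-1,  2, -1],
                              [-1, -1,  2]], dtype=float)
kernel_diagonal_2 = np.fliplr(kernel_diagonal_1)  # vertical symmetry of diagonal 1

def vessel_map(picture: np.ndarray, threshold: float = 1.0) -> np.ndarray:
    """Quadruple convolution, pixel-wise maximum, then binarisation."""
    responses = [ndimage.convolve(picture.astype(float), k)
                 for k in (kernel_vertical, kernel_horizontal,
                           kernel_diagonal_1, kernel_diagonal_2)]
    result = np.maximum.reduce(responses)            # hold the maximum of the four pictures
    return (result > threshold).astype(np.uint8)     # 1 above the threshold, 0 below

# Example: enhanced picture in, binary vascular-pattern picture out.
binary = vessel_map(np.random.default_rng(1).random((256, 256)))
```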
A standard template is created for each user and uniquely identifies the latter. The operations 100 to 103 could be repeated a predetermined number of times, and each generated picture compared with the initial standard biometric template in order to improve the reliability of the standard template. If the compared templates are substantially similar, the last generated template is stored as the standard template; if not, the last generated template is rejected. If too many rejections have been observed, the whole process is restarted; for this purpose, each rejection is memorised, for example by means of a counter. The standard biometric template preferably has the form of a standard code comprising the distinctive features of the retina of the user. This template may be encrypted and should preferably be independent of the design parameters of the retinal imaging device. The generation of a standard template is followed by a check (108) for evaluating the template properties themselves, or by comparing them to independently acquired biometric properties of the same eye. If the operation only comprises the generation of a standard biometric template, that template is then stored in a memory (109) and the processing is stopped thereafter. If however an authentication operation has to be performed, for example for enabling access, the process continues with a comparison operation (110) where the just acquired template is compared with the one assigned to the user. If the comparison matches (110 Y) access is allowed (112); if not, an error message is generated (110 N) and access is refused.
The biometric template can be stored in a local, central or distributed memory. The computing device can base its decision (110) on one or more evaluations of similarity between templates, as sketched below.
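The patent does not specify the similarity measure used in the comparison (110). The sketch below assumes, purely for illustration, that a template is a fixed-length binary feature code and that similarity is scored as the fraction of matching bits; the acceptance threshold is also an assumption.

```python
import numpy as np

# Hypothetical decision step (110): compare a freshly acquired template with
# the stored one. The binary-code representation, the matching score and the
# acceptance threshold are all assumptions for illustration.

def similarity(template_a: np.ndarray, template_b: np.ndarray) -> float:
    """Fraction of matching bits between two binary template codes."""
    return float(np.mean(template_a == template_b))

def authenticate(acquired: np.ndarray, enrolled: np.ndarray,
                 threshold: float = 0.85) -> bool:
    """Allow access (112) only if the templates are sufficiently similar."""
    return similarity(acquired, enrolled) >= threshold

# Example with random 512-bit codes (a real device would use codes derived
# from the binarised vascular pattern).
rng = np.random.default_rng(2)
enrolled_code = rng.integers(0, 2, 512, dtype=np.uint8)
probe_code = enrolled_code.copy()
probe_code[:40] ^= 1              # simulate some acquisition noise
print(authenticate(probe_code, enrolled_code))   # True: about 92% of bits match
```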
The authentication device according to the invention can be used:
- to enrol a user, i.e. after a check of his identity, record his retinal biometric template and store it in a database together with identity information for later authentication;
- to authenticate a user, based on the comparison of a previously stored template and one or more freshly acquired templates, after the user claimed a given identity.

Abstract

An authentication device using data of at least a partial area of an eye retina, said device comprising illumination means for forming an illumination channel, said illumination means comprising a first light source, in particular an infrared light source, provided for generating a first light beam in order to illuminate said partial area of said retina, said device further comprising imaging means arranged in an imaging channel and ending in an image sensor for forming an image with light reflected by said retina, said illuminating means and said imaging means comprising an eyepiece applied into said illumination and said imaging channel, said eyepiece being provided for focusing light originating from said first light source at a first point in said illumination channel substantially corresponding to a position where a pupil of said eye has to be positioned, said eyepiece being further provided for focusing said reflected light on an image plane recordable by said image sensor. The device optionally further comprises positioning and eye fixation means.

Description

An authentication device for forming an image of at least a partial area of an eye retina
The present invention relates to an authentication device using data of at least a partial area of an eye retina, said device comprising illumination means for forming an illumination channel, said illumination means comprising a first light source, in particular an infrared light source, provided for generating a first light beam in order to illuminate said partial area of said retina.
Such a device is known from US-A-4109237. In the known device, the illumination means are provided for illuminating a partial area of the retina by successive illumination of individual spots on the retina. In such a manner, a circular scanning is realised for collecting data of the scanned part of the retina by using the light reflected by the retina.
A drawback of the known device is that it requires either mechanical movements of some components, numerous duplications of senders and/or receivers, or expensive acousto-optical elements.
An object of the present invention is to realise a homogeneous radiation density on the partial area of the retina in order to produce an image thereof. A device according to the invention is therefore characterised in that said device further comprises imaging means arranged in an imaging channel and ending in an image sensor for forming said image with light reflected by said retina, said illuminating means and said imaging means comprise an eyepiece applied into said illumination and said imaging channel, said eyepiece being provided for focusing light originating from said first light source at a first point in said illumination channel substantially corresponding to a position where a pupil of said eye has to be positioned, said eyepiece being further provided for focusing said reflected light on an image plane recordable by said image sensor. By focusing light, originating from the first light source, at a position where the eye pupil has to be positioned, an image of the first light source is created at that position. In such a manner, the "imaged" first light source forms a light source at the pupil which makes it possible to illuminate the retina area under consideration in a homogeneous manner. Scattered and/or reflected radiation from the retina is then collected by the eyepiece in order to form an image of the considered retina area on the image plane.
A first preferred embodiment of a device according to the invention is characterised in that a beam splitter is applied into said illumination and said imaging channel, said beam splitter being provided for orienting said first light beam towards said retina and for orienting said imaging channel towards said image sensor. The beam splitter enables on the one hand to combine the illumination and the imaging channel along a common axis in line with the eye and on the other hand to dissociate the channels in the vicinity of the image sensor.
A second preferred embodiment of a device according to the present invention is characterised in that an optical member is applied into said image channel between said image plane and said image sensor, said optical member being provided for projecting an image of said retina area formed on said image plane on said image sensor. The optical member enables to apply optical operations on the image such as for example magnification.
Preferably, an aperture stop is applied between said image plane and said image sensor at a position where an eye pupil is imaged. The aperture stop enhances the depth of the field and can be used for making the image's luminosity independent from the eye's pupil size.
Preferably, polarising means are used upon coupling said illumination into said imaging channel. The polarising means enable a better splitting of the imaging channel and the illumination channel. Preferably said imaging means are designed to be substantially telecentric. This enables a consistent image size to be obtained.
A third preferred embodiment of a device according to the invention is characterised in that it further comprises eye positioning means, provided for enabling a positioning of said eye substantially at said position, said eye positioning means comprising a second light source, provided for emitting second light beams of visible light intersecting at said position. The positioning means will help the user to correctly position his eye, in such a manner, that his pupil position substantially corresponds to the position of the first point. By having the second light beams generated by the second light source, intersecting at the considered position, the user will only see all the light of the second light beam if his eye is correctly positioned. The positioning means will thus help the user to correctly position his eye. Preferably said eye positioning means comprises at least one picture located into said second light beam and illuminated therewith, said picture(s) being applied in such a manner that it is (they are) sharply displayed on said retina. The use of a picture provides a user friendly device. A fourth preferred embodiment of a device according to the invention is characterised in that a further beam splitter is applied in said illumination and positioning channel, said further beam splitter being provided for orienting said positioning channel towards said retina. This provides more flexibility for the illumination and the positioning channel. A fifth preferred embodiment of a device according to the invention is characterised in that eye fixation means are provided for selecting said partial retina area, said eye fixation means comprising a target which is imaged by means of a visible target light beam on said retina. This enables to choose a particular part of the retina for imaging purpose. Preferably said illumination and said imaging channel have an optical axis, said second light beam having a light beam axis being off-axis with respect to said optical axis and said target light beam being on said light beam axis. In such a manner a central viewing, which can be used with both eyes, is possible.
Preferably, said further beam splitter is a wavelength selective beam splitter provided for selectively orienting said imaging and said positioning channel in distinct directions. Light of different wavelengths can thus be used for imaging and positioning purpose, while using the same optics.
A sixth preferred embodiment of a device according to the invention is characterised in that said eyepiece, said optical member and said first and second light sources are rigidly fixed within said device. No adjustments of the optics are required once the device is built up. Preferably said device comprises pattern projection means provided for projecting a predetermined pattern on said retina, said imaging means being provided for forming on said image plane a further image of said pattern with light reflected by said retina, said further image being recordable by said image sensor. Recognition of the selected retina part thus becomes easier.
A seventh preferred embodiment of a device according to the invention is characterised in that said image sensor is connected to image processing means provided to apply an authentication operation on an image, recorded by said image sensor. The processing of the recorded image can thus be performed.
The invention will now be described with reference to the drawings illustrating different embodiments of a device according to the invention. In the drawings:
figure 1 shows a set-up of the illumination and imaging channel within the device;
figure 2 shows a set-up of the device using an intermediate image plane;
figure 3 shows the incorporation of an eye detection target into the illumination channel;
figure 4 illustrates an example of the positioning means;
figure 5 shows an implementation of a circular set-up for positioning purpose;
figures 6 and 7 illustrate the use of an image target for positioning;
figure 8 shows an embodiment of the positioning means using a microlens array;
figure 9 shows an embodiment of the positioning means using wedges;
figure 10 illustrates an example of a retina structure;
figure 11 illustrates an implementation of an autonomous fixation target;
figure 12 illustrates an implementation of a fixation target using the eyepiece optics of the illumination and imaging channel;
figure 13 illustrates an implementation of a fixation target using the beam splitter of the illumination or positioning channel;
figure 14 illustrates a device with central viewing which can be used with either eye of the user; and
figure 15 shows by means of a flow chart the image processing for the authentication device according to the invention.
In the drawings, a same reference sign has been assigned to a same or analogous element. Figure 1 illustrates a first embodiment of a retinal authentication device according to the invention for forming an image of at least a partial area of an eye retina. The device comprises a first light source 1, provided for generating a first light beam 2, which is either formed by continuous light or by pulsed light. The first light source is preferably formed by an LED emitting light in the near infrared range, such as for example between 720 nm and 1300 nm. It would of course also be possible to have a first light source emitting visible light. This latter option is however less preferred as it is less convenient for the user and causes the eye's pupil to reduce in size when the light beam is switched on. The first light beam is preferably collimated by a collimator lens 3 in order to form a parallel beam.
The first light beam, which is part of an illumination channel 10, is incident on a beam splitter formed by a semi-transparent mirror 4, in order to orient the first light beam towards the user's eye 5 and to further form the illumination channel 10.
The first light beam 2, after being reflected by the semi-transparent mirror 4, crosses an eyepiece 11. The eyepiece is formed by a set of lenses with a combined positive effect which focuses the incoming light beam in a first point 12 situated near the eye pupil 13. In such a manner, an image of the first light source 1 is formed in that first point 12, which image forms a light source illuminating at least a part 16 of the retina 15. A substantially homogeneous illumination of the retina is thus obtained. Of course the eyepiece 11 could also be formed by a single lens with positive effect.
The light incident on the illuminated part of the retina is then scattered by the latter and the scattered light beam, crossing the eye lens 17, the eye pupil 13 and a cornea 14, leaves the eye substantially collimated and reaches the eyepiece 11. That scattered light creates an imaging channel 9. The eyepiece will focus the scattered light on the plane of the image sensor 8, in order to form by means of that reflected light an image of the illuminated retina part on the image sensor. The semi-transparent mirror 4 enables the light in the image channel to reach the image sensor. In such a manner, the beam splitter both couples and de-couples the image and the illumination channel. Both the illumination and the image channel are centralised around the optical axis 18 crossing the retina, the eye pupil, the eyepiece, the beam splitter and the image sensor. The image sensor is preferably formed by a 2D CCD (Charge Coupled Device), a CMOS (Complementary Metal Oxide Semiconductor), a CID (Charge Injection Device), a PDA (Photodiode Array) or any other imaging sensor. When pulsed light is used for the first light source 1, the pulse mode has to be synchronised with the image grabbing by the image sensor.
Preferably, polarising means 6, 7 are used, which are placed near the beam splitter 4, as illustrated in the embodiment of figure 1. The polarising means comprise a first polariser 6 provided for eliminating light from back-reflections on the image sensor 8 as will be described hereinafter. A second polariser 7 is applied in the path of the first light beam and is provided for polarising light reaching the cornea 14. Other set-ups for the polarising means than the one illustrated in figure 1 are however possible. It is however important that the first polariser 6 is mounted between the beam splitter and the sensor and that the second polariser 7 is mounted between the first light source 1 and the beam splitter 4. According to an alternative embodiment, the polarising means could be formed by the second polariser 7 only in combination with a polarising semi-transparent mirror 4, or by a combination of the polarising semi-transparent mirror and both polarisers 6 and 7.
As already mentioned, preference is given to the use of polarised light for the first light beam. The second polariser 7 is provided for selecting a linear polarisation state, such as for example the s-polarisation state. The beam splitter 4 is then chosen in such a manner that the semi-transparent mirror has a better reflection coefficient for s-polarisation than for the p-polarisation state, so that the efficiency of the light reflection towards the eye is maximised. In such a manner the necessary light power of the first light source and the back reflection on the image sensor are limited. The light scattered by the retina has lost its initial polarisation, causing a random polarisation. This causes a sufficient part of this scattered light to cross the semi-transparent mirror and reach the image sensor, since a large part of the p-polarised light passes the beam splitter. The radiation within the image channel is further filtered by the first polariser 6, which only enables the passage of the p-polarised light.
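To see roughly why this scheme limits both the required source power and the stray light on the sensor, here is an illustrative energy budget; all coefficients are assumed values for the example, not figures from the patent.

```python
# Rough, illustrative energy budget for the polarisation scheme described above.
R_s = 0.8                    # assumed beam-splitter reflectivity for s-polarised light
T_p = 0.8                    # assumed beam-splitter transmission for p-polarised light
retina_reflectance = 0.01    # assumed diffuse reflectance of the retina
specular_fraction = 0.02     # assumed corneal/optics specular back-reflection
polariser_extinction = 1e-3  # assumed leakage of s-light through polariser 6

# Illumination: s-polarised light reflected towards the eye.
power_at_eye = R_s

# Retinal return: scattering depolarises the light, so roughly half of the
# returning light is p-polarised, crosses the beam splitter and passes polariser 6.
signal = power_at_eye * retina_reflectance * 0.5 * T_p

# Specular back-reflections keep their s-polarisation, are mostly reflected back
# by the splitter and almost fully blocked by the p-passing polariser 6.
stray = power_at_eye * specular_fraction * (1 - R_s) * polariser_extinction

print(f"retinal signal reaching the sensor ~ {signal:.2e} of the source power")
print(f"specular stray light at the sensor ~ {stray:.2e} of the source power")
```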
Figure 2 shows a further embodiment of a device according to the present invention. The embodiment shown in this figure differs from the one illustrated in figure 1 in that a retina intermediate image plane 19 is used. This signifies that an image of the illuminated retina part is first formed on the intermediate image plane 19 and then on the image sensor 8 plane, where the final image is recorded. In order to relay the retina image from the intermediate image plane to the sensor's image plane, a first 20 and a second 21 optical element are arranged within the imaging channel 9. In the device according to the invention the eyepiece, the optical elements and the first and second light source are all rigidly fixed within the device. In such a manner no adjustments to the user are required once they have been mounted in the device.
The first optical element 20 collimates the reflected light beam, starting from the intermediate image plane 19. The second optical element 21 focuses the collimated image light beam onto the image plane of the sensor 8. The use of this first and second optical element enables to choose a magnification of the image, formed on the image sensor plane. In such a manner, the image of the retina part can be adjusted to the image sensor's size.
In the embodiment illustrated in figure 2 the beam splitter 4 is situated between the optical elements 20 and 21. In this configuration the first optical element 20, which is also applied in the illumination channel 10, replaces the lens 3 of figure 1 and forms a parallel first light beam in the illumination channel. Indeed as can be seen in figure 2, the first light beam incident on the beam splitter is not collimated by a lens.
Preferably, the imaging optics, composed of the optical elements 20 and 21 and the eyepiece 11, should be designed to be telecentric or close to telecentric in order to obtain a consistent image size, even if the focal distance of the eye varies somewhat. Telecentricity means that the chief rays 22 impinging on the sensor 8 make an angle α of 0° with the optical axis 18.
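Expressed more formally, as a brief sketch based on standard first-order optics (the symbol δ for the longitudinal defocus is not used in the original): when the chief ray travels parallel to the optical axis, a small focus error only blurs the image without changing its size,

\[
\Delta h' = \delta \tan\alpha = 0 \quad \text{for} \quad \alpha = 0^{\circ},
\]

where h' denotes the height of an image point on the sensor 8 and δ the longitudinal defocus caused by a variation of the eye's focal distance.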
The eyepiece 11, the lens 3 and the first 20 and second 21 optical elements are all made of a combination of one or more optical components, such as refractive, diffractive and/or reflective components, which can be made of glass, vitroceramics, polymers or the like. The surfaces of those components are spherical or aspherical and are preferably coated with an appropriate layer. The optical components used within the image channel are tuned to image an object at infinity or at a finite distance, since the retina plane is projected at this given distance by the eye lens 17. Therefore the device according to the present invention can also be used for face recognition or as a surveillance camera.
An aperture stop 23 is mounted between the first 20 and second 21 optical element, at the place where the eye's pupil 13 is imaged by the eyepiece and the first optical element 20. The aperture stop 23 makes it possible to limit the aperture of the imaging channel to an aperture corresponding to the size of the smallest expected eye pupil, i.e. 1 to 2 mm. This prevents scattered light from, among others, the iris from influencing the image formed on the sensor and yields a constant image brightness, independent of the eye's pupil size. The aperture stop also reduces the numerical aperture of the device, leading to an increased depth of field and limiting the sensitivity of the imaging channel to eye defects.
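The effect of the stop can be estimated with standard paraxial relations; this is only a sketch, and the distance L from the stop to the image plane is an assumed symbol not defined in the original:

\[
\mathrm{NA} \approx \frac{a}{2L}, \qquad \text{depth of focus} \approx \pm\frac{\lambda}{2\,\mathrm{NA}^{2}},
\]

so reducing the stop diameter a from 2 mm to 1 mm roughly quadruples the depth of focus, at the cost of collecting less of the light scattered by the retina.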
Preferably, a baffle 24 is applied between the eye 5 and the eyepiece 11. The baffle may limit the influence of stray light on the image and causes, due to the reduced amount of incident light, the eye pupil to open further, which in turn allows a less stringent positioning accuracy and consequently improves the image quality.
In order to enable a user wearing glasses to keep his glasses on during operation of the device, a sufficiently large distance between the eyepiece 11 or baffle 24 and the cornea 14 has to be provided. A distance of between 20 and 30 mm is appropriate.
The opening angle β of the illumination, measured at the first point 12 between the optical axis 18 and the outermost ray of the first light beam, is determined by the dimension of the part of the retina to be illuminated. An angle 5° < β < 15° illuminates a retina part that is large enough to form an image providing sufficient information for biometric authentication purposes.
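As a rough worked example, assuming a reduced-eye focal length of about 17 mm (a value not given in the original), the diameter d of the illuminated retina patch is approximately

\[
d \approx 2 f_{\text{eye}} \tan\beta, \qquad d \approx 3\ \text{mm at } \beta = 5^{\circ}, \qquad d \approx 9\ \text{mm at } \beta = 15^{\circ}.
\]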
As long as the eye is not correctly positioned, it is not meaningful to use an image grabbed by the image sensor for authentication purposes. In order to detect a correct eye positioning it is possible to project a pattern on the retina and recognise this pattern in the generated image. Preferably, the pattern is positioned in the outer regions of the image in order to avoid interference with the information-carrying image parts. Since, according to a preferred embodiment, the image sensor only images near infrared light, the pattern on the retina should also be created in the near infrared. One straightforward implementation is to block a part of the illumination channel (dark lines, a dark square, ...) in a plane that is sharply imaged on the retina. In the embodiment of figure 1, a shadow image can be created by an aperture in the collimated first light beam between the beam splitter 4 and the collimator lens 3, in a plane 90 situated just after the lens 3 and matching with the image plane 8. A further possible optical set-up for the illumination optics using such a target projection on the retina is shown in figure 3. This embodiment is comparable with the one illustrated in figure 2. The plane 90 matching the imaging plane is created by means of a lens 91 in the illumination path. The collimating lens 3 is added to restore the regular light path of the illumination subsystem. Since the pattern is incorporated into the first light beam due to its presence in plane 90, it is projected on the retina and thus displayed on the image sensor, enabling a selection of usable pictures for authentication purposes. The pattern used for eye detection can also be generated by an independent light source. The pattern shape can be tuned to the algorithms used in the detection.
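By way of illustration, the following minimal sketch (written in Python with NumPy, which the original does not prescribe) checks a grabbed frame for the projected pattern, here assumed to be dark markers in the four corners; the region size and the darkness ratio are illustrative parameters only.

import numpy as np

def eye_correctly_positioned(picture, dark_fraction=0.6, margin=20):
    """Accept a frame only when the projected near-infrared shadow markers
    (assumed here to be dark squares in the four corners) are clearly visible,
    i.e. the corner regions are markedly darker than the picture as a whole."""
    corners = [picture[:margin, :margin], picture[:margin, -margin:],
               picture[-margin:, :margin], picture[-margin:, -margin:]]
    overall = picture.mean()
    return all(corner.mean() < dark_fraction * overall for corner in corners)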
The focusing of the first light beam at the first point 12 of course requires that a user correctly positions his eye at this first point. In order to help the user position his eye at the first point, positioning means are provided. Figure 4 illustrates a first embodiment of such positioning means. The latter comprise a series of second light sources 30-1, 30-2, provided for producing second light beams 31 of visible light. The second light beams are preferably collimated beams obtained by using small-sized field stops 33 and collimating optics 36. The second light sources may also comprise an aperture stop 32, provided to adjust the diameter of the collimated bundle. The field stop 33 is also suitable for controlling the size or shape of a target, such as for example a cross, which is incorporated into the second light source and has to be observed by the user.
The second light sources 30 are preferably positioned in a circle having its centre on the optical axis 18. Due to this circular set-up, a cone-shaped second light beam is formed by the different second light sources. The second light beams 31 intersect at a position 35 which coincides with the first point 12 on which the first light beam is focused. At this position 35 all the light of the second beams is concentrated in a disc whose size is at most the minimal eye pupil opening. Only if the user positions his eye substantially at this position 35 will he see a ring comprising all the beams produced by the second light sources 30 and their respective targets. The user thus has to move his head and his eye until he sees all second light beams simultaneously. Only then will his eye position substantially correspond with that of the first point 12 and will his retina be adequately illuminated by the first source 1.
The second light beams need not necessarily be formed by discrete light sources. A continuous ring of light, as illustrated in figure 5, can also be used. A light emitting ring 40, preferably formed by a Light Emitting Polymer (LEP) or a light guide, is applied around the optical axis 18. The user sees the LEP through an aperture 42 and preferably one or more baffles 41, which limit the eye positions from which the ring can be seen. Collimating optics 43 can be added in front of the ring of light assembly to offer the user a sharper view of the ring. Figure 6 shows an alternative embodiment of the eye positioning means. In this embodiment a 2D image slide 50 is incorporated into the second light beam 31. A lens 36, placed before this slide 50, collimates the second light beam, which has a limited spatial extent due to the aperture stop 54 applied adjacent to the second light source 30. The light collimated by lens 36 passes through the slide carrying the image to be displayed, in order to pick up the latter, and reaches the second beam splitter 9. The positions of the lens 36 and the aperture stop 54 need to be carefully chosen in order to produce an image of the aperture stop 54, via the eyepiece 11, at the entrance pupil 13. For correctly positioning his eye, the user then needs to adjust his position so that he can see all beams 56 originating from the image 50, which intersect at the position 35 corresponding to the first point. At too small or too large a distance, only the central part of the displayed image is visible. When the user's eye is displaced in a lateral direction, one side of the image will disappear.
Figure 7 shows a further embodiment in which one or more 2D-images are presented to the user. The second light source 30 illuminates the 2D-image slide 50 via a diffuser 51. The second light source is for example formed by a LED or a LEP. The slide 50 need not be transparent and illuminated by a separate light source if it is itself luminescent. Slide collimating optics 36 are placed after the slide and are followed by an aperture stop 32, when considered in the direction of the outgoing second light beam. The aperture stop 32 allows the light from a given part of the slide to follow a predetermined path through the optics. The second light beam 31 is then incident on a second or further beam splitter 49 in order to be injected into the illumination and imaging channel. The second light beams intersect at the position 52, which substantially coincides with the first point. The full image can only be seen by the user when all the rays of the second light beams enter the eye pupil 13, thus when the latter is correctly positioned at the position 52.
The image plane 53 of the slide 50, as formed by the optics 36 and 20, should coincide with the intermediate image plane 19. In such a manner, when the user sees a sharp image, his retina is sharply projected on the image sensor 8. In order for the intersection point 52 to be at the correct position, the aperture stop 32 of the image optics should match the intended pupil position 13, and thus the aperture stop 23 in the imaging channel.
The combination of the eye positioning means with the imaging optics can be done in several ways: without coupling, with partial coupling, or with a more intimate coupling.
When using multiple targets or a "ring of light", the positioning and imaging systems can be totally independent of each other: full-featured positioning optics can be mounted around the imaging optics, as shown in figure 4. Figure 5 also shows that there is space for the imaging and illumination subsystems in the centre of the positioning subsystem, provided the optional collimating optics are not included or are hollow in the centre.
For multiple targets or a "ring of light" it is also possible to have a partial integration of the subsystems. This is shown for the "ring of light" in figure 5, where the collimating optics 43 are in fact the eyepiece 11 of the device. In order for the positioning target (discrete targets or "ring of light") to be observed sharply through the eyepiece, the positioning target needs to be in the intermediate image plane (see 19 in figure 2) of the imaging system, where an intermediate image of the retina is formed by the eyepiece. It is important to mount the targets at field positions outside the field of interest for the retina imaging, in order to avoid shadows formed by the targets on the retina image.
In the case of the use of a full 2D-image for the eye positioning, a full coupling of both optical subsystems is needed, as the centre of the viewing field must also be used. This is illustrated in figures 6 and 7. The coupling of the imaging and eye positioning optics can be performed by the beam splitter 9, or some other partially reflecting surface. Both subsystems can use common optics, i.e. the eyepiece 11 and possibly the optical element 20. The visible light from the image projected on the retina, as illustrated in figures 6 and 7, should be prevented from forming an image on the sensor, as this would affect the homogeneity of the illumination. This can be achieved by using a wavelength selective optical element. If the first light source is an infrared source, this can be a filter placed at some stage between the eye positioning optics and the image sensor, which filter only accepts near infrared light, or the beam splitter 49 can be made wavelength selective, reflecting visible light but transmitting infrared light. Configurations with an infrared reflecting and visible transmitting beam splitter can of course also be envisaged. The use of a wavelength selective beam splitter yields a better transmission in the imaging optics channel than the alternatives with regular beam splitters and filters.
Figure 8 shows an alternative embodiment of the positioning means in which use is made of a microlens array. The second light source 65 is placed in the focal plane of a microlens array 66. The second light source 65 can be formed by a diffuse slide illuminated by a source 67. The slide 50 is placed close to the microlens array. A repetitive pattern 68 is introduced in the source, which pattern has a pitch 69 equal to that of the microlens array. Consider for example the second light source as a red plane with a matrix of green dots 68 aligned with respect to the microlenses. The dot size is chosen in such a way that the image of one dot, as imaged by one microlens and the eyepiece 11, has the eye pupil dimension. In such a manner a coupling is created between the observed colour of the pattern and the eye position. If the eye is correctly positioned, the user sees a homogeneous green plane. If the distance between the eye and the eyepiece is too large, the user will see green in the centre and a combination of green and red at the edges. If, at too large a distance from the eyepiece, the user moves his eye off the axis, he will see a lateral colour distribution in the image plane, from green on one side to red on the other. By moving his eye, the user can then find the correct position.
Another alternative for increasing the feedback to the user upon a lateral displacement of his eye is to use wedges, as illustrated in figure 9. Wedges 70 with a small deflection angle are positioned close to the central part of the slide 50, which is illuminated by collimated second light beams 55. This collimated light is produced by a collimating light source, for instance a light source 30 with a field stop 54 illuminating a collimator lens 36. The placement of a wedge serves to displace the image of the source 71 away from the nominal position 56. The size of the source is chosen small enough, and the angular deviation produced by the wedges is chosen in such a way, that the three aligned images of the stop (not deviated, through the left wedge and through the right wedge) can all enter the eye pupil together. If the user displaces his eye laterally, the light passing through one of the wedges will no longer reach the eye, and the corresponding part of the slide will be seen dark. Each wedge placed close to plane 50 deviates light in a particular direction. Four wedges can be used to produce the above-described effect in four different directions. The number of wedges is not restricted, and even a continuous cone could be formed.
Instead of using a single slide 50, a few slides can be stacked, each bearing a target. These targets have to be aligned by the user by correctly placing his eye in the lateral position. The intensity of the eye positioning targets can be made variable, dependent on the ambient light level. This can enhance the user's comfort when using the device. The slide could also be formed by an imaging micro-display displaying still or moving video images.
Figure 10 illustrates an example of a retina 16. The latter comprises a central part, the fovea 82, used by a person to observe details, i.e. when a person stares at a given point, the image of that point will be formed on the fovea. The retina further comprises a vein pattern 83 around the white spot 81 where the optical nerve is connected to the eye. This is located at about 15.5° (84) from the fovea 82, considered in a substantially horizontal direction. For authentication using retina imaging it is important to choose a particular part of the retina area, which part will then be projected on the image sensor. For the selection of this retina part, the user is asked to stare at a given target, which will be imaged on the fovea. As long as the user stares at the target, the eye orientation remains fixed. The efficiency of the eye fixation is increased by generating the fixation targets in a pulsed mode, using a pulsed light source operating at frequencies below 50 Hz and preferably between 4 and 12 Hz. The eye fixation target can be combined with one of the eye positioning means or can be independent. If the fixation target is on the optical axis of the imaging channel, the fovea spot and its surroundings will be imaged on the sensor. A disadvantage of using the area around the fovea, however, is that the blood veins there are much narrower than, for example, around the optical nerve, and thus much more difficult to observe. For viewing other parts of the retina, fixation targets offset from the imaging axis 18 can be used. If the fixation target is at about 15.5° to the right (left) of the optical axis in the horizontal plane, the optical nerve will be in view when the user uses his left (right) eye.
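For orientation only, and again assuming a reduced-eye focal length of roughly 17 mm (an assumption, not a value from the original), the 15.5° fixation offset corresponds on the retina to a distance from the fovea of about

\[
17\ \text{mm} \times \tan 15.5^{\circ} \approx 4.7\ \text{mm},
\]

which is consistent with the typical location of the optic nerve head.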
When using the optical nerve, the device needs to know if the user presents his left or right eye, in order to be able to offer the appropriate fixation target (otherwise the system would look at the wrong side of the fovea). A solution thereto is to use external proximity detectors on the device to "see" the position of the user's head and to deduce whether the left or right eye is offered. The detectors work for example with capacitive, ultrasonic, pyro-electric or opto-electronic sensors. Two or more detectors are placed symmetrically with respect to the vertical plane passing through the eyepiece. When the user has positioned his eye in front of the eyepiece, one sensor will be close to the face, while the other will be more distant. The detectors can also be used to activate the device from a stand-by mode.
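A minimal decision rule for such a pair of detectors could look as follows; the distance threshold and, in particular, the mapping from the closer sensor to the presented eye depend on the mechanical layout and are assumptions made for this sketch.

def presented_eye(distance_left_mm, distance_right_mm, presence_threshold_mm=150.0):
    """Return 'left', 'right' or None (nobody close enough, device stays in stand-by).
    Assumed convention: when the left-hand sensor reads the shorter distance the
    face extends to that side, so the right eye is taken to be at the eyepiece."""
    if min(distance_left_mm, distance_right_mm) > presence_threshold_mm:
        return None
    return "right" if distance_left_mm < distance_right_mm else "left"

The same readings can also be used to wake the device from its stand-by mode as soon as one of the distances drops below the threshold.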
Figure 11 shows a detailed embodiment of an autonomous fixation target 96 which is intended to be positioned at a given angle from the optical axis and which aims a collimated third beam of visible light, produced by a third light source, directly at the eye. The latter comprises a LED 97 generating a bright illumination of the field stop 95 placed beyond the LED 97. The light crossing the field stop is collected by the lens 93, beyond which the aperture stop 92 is positioned.
Figure 12 illustrates an implementation of the fixation target using the eyepiece 11 of the illumination and imaging channel. The third light source is placed in the aerial image plane 19 of the retina. The fixation targets are visible light sources 98 with an aperture 95 in order to limit their spatial extent as seen by the user. The eyepiece 11 has to be designed for large field angles because the targets are viewed through it, i.e. at angles close to 15.5°. A disadvantage of this set-up is that the eye fixation targets can block light from the imaging or illumination channel.
Figure 13 illustrates an implementation of the fixation target using a beam splitter 72. In practice this could be the first beam splitter 4, introduced for coupling the illumination and the imaging channel, or the second beam splitter 49, introduced for coupling the positioning and the imaging channel. The use of the second beam splitter for projection of the targets provides the possibility to project targets on or around the optical axis 18 without generating shadows on the retina image. The complete field in plane 19 can therefore be used for illumination and imaging purposes. The targets and field stop for eye fixation are disposed, as in figure 12, in the aerial retinal image plane or a plane matching the retinal image plane. If the first beam splitter is used for coupling the fixation targets to the illumination channel, and wavelength filtering is performed so that only infrared light reaches the image sensor, the first beam splitter cannot be placed behind the second beam splitter, as seen from the user's side. This would inject visible light into that part of the device where only infrared light should be present; consequently this light would not reach the eyepiece and the imaging channel would be disturbed.
A disadvantage, when using the optical nerve or some other retinal feature located elsewhere than the fovea, is that the user has to stare off-axis if the imaging, illumination and eye positioning optics are all axial, as has been assumed so far. It is however also possible to have the eye positioning and eye fixation optics substantially axial with respect to the eyepiece, and the illumination and imaging optics off-axis. Two illumination and imaging optics are then needed in order to allow the use of either the left or the right eye. A possible set-up for this is shown in figure 14, where a device with central viewing, which can be used with both eyes, is illustrated. A wide-angle eyepiece (typically a 50° field) is used. On the optical axis of the eyepiece, the eye positioning target is implemented as was done in figure 6. The eye fixation target is now in the centre of the target on the optical axis. It can be included as a feature in the slide of the positioning subsystem, or can be a light source in the image plane 19 on the optical axis 18. On both sides of the optical axis, at about 15.5° if the optical nerves are to be imaged, an imaging subsystem is mounted, each equipped with an illumination subsystem, exactly as was shown in figure 2. The eyepiece 11 is common to all optical subsystems. In figure 14 only the imaging optical path is shown in the upper half, while the lower half only shows the illumination optical path. Of course, both are to be used in the same subsystem in order to have the device operational. If the area of interest is chosen above or below the fovea, or if the same eye is always presented, the same approach of off-axis imaging and illumination can be used, but with only one imaging/illumination subsystem. Care has to be taken that, when imaging parts of the retina other than the fovea, the part of the retina that is imaged is sensitive to rotation of the eye around the optical axis of the system (the system might then for instance look above or below the optical nerve). For this reason, the orientation of the head should be fixed. This can be done by ergonomic features of the housing, by a second dummy eyepiece or by using two parallel systems (with on-axis targets and off-axis illumination and imaging optics, as shown in figure 14).
The image sensor 8 is connected to image processing means (not shown), generally formed by a microprocessor and a memory. The image of the illuminated retina area is, after being recorded by the image sensor, transmitted to the processing means in order to be grabbed and turned into a picture. That picture is generally used for authentication purposes, which means that a comparison with stored patterns is required. Figure 15 illustrates schematically, by means of a flow chart, the different operations performed for analysing an image of a retina part and generating biometric templates.
The processing is started (100) once an image is formed on the image plane. The analogue image formed on the image plane is grabbed (101) by the processor and converted into a digital picture (102), for example by an A/D converter. The picture is then processed (103), whereby several operations can be performed, such as a check whether the picture comprises sufficient information for extracting the data necessary for authentication purposes, a verification that retina data is indeed available, or a verification of the picture sharpness or illumination intensity. The check could also include a verification that a sufficient part of the region of interest of the retina is imaged, that no artefacts are present in the picture, etc. If it is established that the picture does not comprise useful data (103 N), an error message is generated (104) and supplied to the operator. This error message may include a feedback message in order to adapt the image grabbing. After generation of an error message the process is restarted. The different checks are for example realised by using grey-scaling techniques. If the picture is accepted by the processor (103 Y), it is further improved (105), for example by using digital filters in order to reduce the noise, increase the contrast, sharpen the picture, eliminate artefacts, etc. It is also possible to combine different pictures into an average picture. Besides highlighting the distinctive features, the processing can also suppress possibly variable features in the eye or artefacts in the picture. In a retinal picture it is mainly the vascular pattern which is stable. The present processing step can also comprise the selection of a region of interest, digital filtering and other picture processing (106).
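The control flow of this part of figure 15 can be summarised by the following minimal Python sketch; grab_frame, quality_checks, enhance and extract_region are hypothetical callables standing in for device-specific routines, not names from the original.

def process_frame(grab_frame, quality_checks, enhance, extract_region):
    """Sketch of steps 100-106: grab and digitise a picture, reject it with an
    error message when it carries no useful retina data, otherwise improve it
    and select the region of interest before template generation."""
    picture = grab_frame()                    # grab (101) and A/D conversion (102)
    for check in quality_checks:              # sharpness, illumination, artefacts, ... (103)
        accepted, message = check(picture)
        if not accepted:
            return None, message              # error message and feedback (104); caller restarts
    picture = enhance(picture)                # noise reduction, contrast, sharpening (105)
    return extract_region(picture), None      # region of interest and further filtering (106)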
The filtering 108 is for example a quadruple convolution realised with the four kernels described below (kernel vertical, kernel horizontal, kernel diagonal 1, kernel diagonal 2), as illustrated in table 1. From the four pictures obtained by these convolutions, a result picture is obtained by keeping, for each pixel, the maximum pixel value of the four pictures. Alternative kernels highlighting linear structures can be used.
A binary picture is generated by setting a value of one for the pixels greater than a predetermined threshold value and a value of zero for the pixels less than the threshold.

TABLE 1

Table 1 lists the coefficients of the four convolution kernels. The vertical kernel consists of a block of identical rows, each containing the profile
-0.03 -0.05 0.08 0.28 0.09 -0.40 -0.40 0.09 0.28 0.08 -0.05 -0.03
flanked by zero entries, the border rows of the matrix being zero. Kernel diagonal 1 is built from the coefficient sequence
-0.02 -0.05 -0.03 0.07 0.21 0.25 0.05 -0.28 -0.45 -0.28 0.05 0.25 0.21 0.07 -0.03 -0.05 -0.02
arranged as a band running along the diagonal of the matrix. Kernel diagonal 2 is the vertical symmetry of kernel diagonal 1. Kernel horizontal is the transposition of the kernel vertical matrix.
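The filtering and binarisation just described can be sketched as follows; Python with NumPy and SciPy is an assumed implementation choice, the diagonal kernels are approximated here by rotating the vertical kernel, and the kernel size and threshold value are illustrative.

import numpy as np
from scipy.ndimage import convolve, rotate

# Cross-section of the vertical ridge kernel, taken from the profile of Table 1.
PROFILE = np.array([-0.03, -0.05, 0.08, 0.28, 0.09, -0.40,
                    -0.40, 0.09, 0.28, 0.08, -0.05, -0.03])

def vertical_kernel(rows=12, pad=2):
    """Identical rows of PROFILE flanked by zero columns, with zero border rows."""
    row = np.concatenate([np.zeros(pad), PROFILE, np.zeros(pad)])
    body = np.tile(row, (rows, 1))
    border = np.zeros((pad, row.size))
    return np.vstack([border, body, border])

def binary_vessel_picture(picture, threshold=0.1):
    """Quadruple convolution, pixel-wise maximum, then thresholding to a binary picture."""
    k_vert = vertical_kernel()
    k_horz = k_vert.T                               # horizontal kernel = transpose of the vertical one
    # Table 1 specifies explicit diagonal kernels; rotating the vertical kernel
    # by +/- 45 degrees is only an approximation used for this sketch.
    k_diag1 = rotate(k_vert, 45, reshape=False, order=1)
    k_diag2 = rotate(k_vert, -45, reshape=False, order=1)
    responses = [convolve(picture.astype(float), k) for k in (k_vert, k_horz, k_diag1, k_diag2)]
    result = np.maximum.reduce(responses)           # keep the maximum pixel value of the four pictures
    return (result > threshold).astype(np.uint8)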
A first generated picture then forms (107) an initial standard biometric template of the considered retina part. The standard template is created for each user and uniquely identifies the latter. The operations 100 to 103 can be repeated a predetermined number of times and each generated picture is compared with the initial standard biometric template in order to improve the reliability of the standard template. If the compared templates are substantially similar, the last generated template is stored as the standard template; if not, the last generated template is rejected. If too many rejections have been observed, the whole process is restarted. For this purpose, each rejection is memorised, for example by means of a counter. The standard biometric template preferably has the form of a standard code comprising the distinctive features of the retina of the user. This template may be encrypted and should preferably be independent of the design parameters of the retinal imaging device.
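The confirmation loop described above can be sketched as follows; acquire_template and similar are hypothetical callables (the actual acquisition and similarity evaluation are device-specific), and the attempt and rejection limits are illustrative.

def build_standard_template(acquire_template, similar, attempts=5, max_rejections=3):
    """Repeat acquisitions to confirm an initial standard biometric template:
    accepted templates replace the stored standard, rejections are counted and
    too many rejections abort the enrolment so that the whole process restarts."""
    standard = acquire_template()
    rejections = 0
    for _ in range(attempts):
        candidate = acquire_template()
        if similar(standard, candidate):
            standard = candidate              # keep the last accepted template as standard
        else:
            rejections += 1                   # memorise the rejection by means of a counter
            if rejections > max_rejections:
                return None                   # too many rejections: restart the whole process
    return standard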
The generation of a standard template is followed by a check (108) evaluating the template properties themselves, or comparing them to independently acquired biometric properties of the same eye. If the operation only comprises the generation of a standard biometric template, that template is stored in a memory (109) and the processing is stopped thereafter. If however an authentication operation has to be performed, for example for enabling access, the process continues with a comparison operation (110), where the just acquired template is compared with the one assigned to the user. If the comparison matches (110 Y), access is allowed (112); if not, an error message is generated (110 N) and access is refused. The biometric template can be stored in a local, central or distributed memory.
The computing device can base its decision (110) on one or more evaluations of similarity between templates. The authentication device according to the invention can be used for the following purposes (a minimal interface sketch is given after this list):
- to enrol a user, i.e. after a check of his identity, record his retinal biometric template and store it in a database together with identity information for later authentication;
- to authenticate a user, based on the comparison of a previously stored template and one or more freshly acquired templates, after the user has claimed a given identity;
- to identify a user that enrolled before, based on the comparison of a series of stored templates and one or more freshly acquired templates;
- to check that a user was not enrolled yet, based on the comparison of a series of stored templates and one or more freshly acquired templates;
- to verify the template stored for a given user, based on the comparison of a stored template and one or more freshly acquired templates.
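The five modes of use listed above map naturally onto a small interface; the sketch below is an assumption about how such an interface could look (store is any mapping from identity to stored template, and similar is the similarity evaluation of step 110), not a description of the actual implementation.

class RetinalAuthenticator:
    def __init__(self, store, acquire_template, similar):
        self.store = store                    # identity -> stored standard template
        self.acquire = acquire_template       # produces a freshly acquired template
        self.similar = similar                # similarity evaluation between two templates

    def enrol(self, identity):
        self.store[identity] = self.acquire()

    def authenticate(self, claimed_identity):
        return self.similar(self.store[claimed_identity], self.acquire())

    def identify(self):
        fresh = self.acquire()
        return next((who for who, t in self.store.items() if self.similar(t, fresh)), None)

    def is_already_enrolled(self):
        return self.identify() is not None

    def verify_stored(self, identity):
        return self.similar(self.store[identity], self.acquire())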

Claims

1. An authentication device using data of at least a partial area of an eye retina, said device comprising illumination means for forming an illumination channel, said illumination means comprising a first light source, in particular an infrared light source, provided for generating a first light beam in order to illuminate said partial area of said retina, characterised in that said device further comprises imaging means arranged in an imaging channel and ending in an image sensor for forming an image with light reflected by said retina, said illuminating means and said imaging means comprise an eyepiece applied into said illumination and said imaging channel, said eyepiece being provided for focusing light originating from said first light source at a first point in said illumination channel substantially corresponding to a position where a pupil of said eye has to be positioned, said eyepiece being further provided for focusing said reflected light on an image plane recordable by said image sensor.
2. A device as claimed in claim 1, characterised in that a beam splitter is applied into said illumination and said imaging channel, said beam splitter being provided for orienting said first light beam towards said retina and for orienting said imaging channel towards said image sensor.
3. A device as claimed in any one of the claims 1 or 2, characterised in that an optical member is applied into said image channel between said image plane and said image sensor, said optical member being provided for projecting an image of said retina area formed on said image plane on said image sensor.
4. A device as claimed in claim 3, characterised in that an aperture stop is applied between said image plane and said image sensor.
5. A device as claimed in claim 3 or 4, characterised in that said optical member is also operational in said illumination channel and further provided for collimating said first light beam.
6. A device as claimed in any one of the claims 1 to 5, characterised in that polarising means are used upon coupling said illumination into said imaging channel.
7. A device as claimed in any one of the claims 1 to 6, characterised in that said imaging means are designed to be substantially telecentric.
8. A device as claimed in any one of the claims 1 to 7, characterised in that it further comprises eye positioning means provided for enabling a positioning of said eye substantially at said position, said eye positioning means comprising a second light source provided for emitting second light beams of visible light forming a positioning channel and intersecting at said position.
9. A device as claimed in claim 8, characterised in that said second light source has a circular set-up with a centre located on an optical axis of said imaging channel and is provided for forming a substantially cone shaped second light beam.
10. A device as claimed in claim 8, characterised in that said eye positioning means comprise at least one picture located in said second light beam and illuminated therewith, said picture(s) being applied in such a manner that it is (they are) sharply displayed on said retina.
11. A device as claimed in any one of the claims 8 to 10, characterised in that a further beam splitter is applied in said illumination and positioning channel, said further beam splitter being provided for orienting said positioning channel towards said retina.
12. A device as claimed in any one of the claims 8 to 11, characterised in that eye fixation means are provided for selecting said partial retina area, said eye fixation means comprising a target which is imaged by means of a visible target light beam on said retina.
13. A device as claimed in claim 12, characterised in that said eye fixation means co-operates with a detector provided for detecting whether a right or left eye is used.
14. A device as claimed in any one of the claims 12 or 13, characterised in that said illumination and said imaging channel have an optical axis, said second light beam having a light beam axis which is off-axis with respect to said optical axis and said target light beam being on said light beam axis.
15. A device as claimed in any one of the claims 1 to 14, characterised in that it comprises a plurality of illumination and imaging channels, each having an optical axis forming a respective angle ≠ 0 with respect to each other on at least a segment thereof located near said first point.
16. A device as claimed in claim 15, characterised in that said further beam splitter is a wavelength selective beam splitter provided for selectively orienting said imaging and said positioning channel in distinct directions.
17. A device as claimed in any one of the claims 3 to 16, characterised in that said eyepiece, said optical member and said first and second light sources are rigidly fixed within said device.
18. A device as claimed in any one of the claims 1 to 17, characterised in that said device comprises pattern projection means provided for projecting a predetermined pattern on said retina, said imaging means being provided for forming on said image plane a further image of said pattern with light reflected by said retina, said further image being recordable by said image sensor.
19. A device as claimed in any one of the claims 1 to 18, characterised in that said image sensor is connected to image processing means provided to apply an authentication operation on an image recorded by said image sensor.
PCT/BE2001/000118 2000-07-19 2001-07-19 An authentication device for forming an image of at least a partial area of an eye retina WO2002007068A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2001275610A AU2001275610A1 (en) 2000-07-19 2001-07-19 An authentication device for forming an image of at least a partial area of an eye retina

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP00202587 2000-07-19
EP00202587.2 2000-07-19

Publications (1)

Publication Number Publication Date
WO2002007068A1 true WO2002007068A1 (en) 2002-01-24

Family

ID=8171828

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/BE2001/000118 WO2002007068A1 (en) 2000-07-19 2001-07-19 An authentication device for forming an image of at least a partial area of an eye retina

Country Status (2)

Country Link
AU (1) AU2001275610A1 (en)
WO (1) WO2002007068A1 (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4109237A (en) * 1977-01-17 1978-08-22 Hill Robert B Apparatus and method for identifying individuals through their retinal vasculature patterns
EP0256635A2 (en) * 1986-06-23 1988-02-24 EyeDentify Inc. Optical alignment system
JPH01124434A (en) * 1986-12-03 1989-05-17 Randall Instr Co Inc Optical apparatus

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
PATENT ABSTRACTS OF JAPAN vol. 1999, no. 06 31 March 1999 (1999-03-31) *

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6949180B2 (en) 2002-10-09 2005-09-27 Chevron U.S.A. Inc. Low toxicity Fischer-Tropsch derived fuel and process for making same
WO2007017207A1 (en) 2005-08-05 2007-02-15 Heidelberg Engineering Gmbh Method and system for biometric identification or verification
JP2009502382A (en) * 2005-08-05 2009-01-29 ハイデルベルク・エンジニアリング・ゲー・エム・ベー・ハー Biometric authentication or biometric verification method and system
US8184867B2 (en) 2005-08-05 2012-05-22 Heidelberg Engineering Gmbh Method and system for biometric identification or verification
JP2014028280A (en) * 2005-08-05 2014-02-13 Heidelberg Engineering Gmbh Method and system for biometric identification or verification
WO2008155447A2 (en) * 2007-06-21 2008-12-24 Timo Tapani Lehto Method and system for identifying a person
WO2008155447A3 (en) * 2007-06-21 2009-04-02 Timo Tapani Lehto Method and system for identifying a person
US10051208B2 (en) 2014-08-08 2018-08-14 Fotonation Limited Optical system for acquisition of images with either or both visible or near-infrared spectra
WO2016020147A1 (en) * 2014-08-08 2016-02-11 Fotonation Limited An optical system for an image acquisition device
CN107111009A (en) * 2014-08-08 2017-08-29 快图有限公司 Optical system for image acquiring device
WO2016029433A1 (en) * 2014-08-29 2016-03-03 Empire Technology Development Llc Biometric authentication
US11373450B2 (en) 2017-08-11 2022-06-28 Tectus Corporation Eye-mounted authentication system
US11754857B2 (en) 2017-08-11 2023-09-12 Tectus Corporation Eye-mounted authentication system
WO2019133550A1 (en) * 2017-12-28 2019-07-04 Broadspot Imaging Corp Multiple off-axis channel optical imaging device with secondary fixation target for small pupils
US10610094B2 (en) 2017-12-28 2020-04-07 Broadspot Imaging Corp Multiple off-axis channel optical imaging device with secondary fixation target for small pupils
WO2021067420A1 (en) * 2019-09-30 2021-04-08 Gentex Corporation Alignment system
US11604863B2 (en) 2019-09-30 2023-03-14 Gentex Corporation Alignment system

Also Published As

Publication number Publication date
AU2001275610A1 (en) 2002-01-30

Similar Documents

Publication Publication Date Title
US7554598B2 (en) Imaging system, and identity authentication system incorporating the same
CN104776801B (en) Information processing unit and information processing method
EP1341119B1 (en) Iris recognition system
US8983146B2 (en) Multimodal ocular biometric system
CN101681021B (en) Large depth-of-field imaging system and iris recognition system
US4768874A (en) Scanning optical apparatus and method
US8382285B2 (en) Device and method for determining the orientation of an eye
KR100342159B1 (en) Apparatus and method for acquiring iris images
US7271839B2 (en) Display device of focal angle and focal distance in iris recognition system
US20050248725A1 (en) Eye image capturing apparatus
US7290880B1 (en) System and method for producing a stereoscopic image of an eye fundus
US20030169334A1 (en) Iris capture device having expanded capture volume
JP2002352235A (en) Apparatus and method for adjusting focus position in iris recognition system
CN211270678U (en) Optical system of fundus camera and fundus camera
EP0734220B1 (en) Eye fundus optical scanner system and method
US4960327A (en) Optical system in a lasar scanning eye fundus camera
WO2002007068A1 (en) An authentication device for forming an image of at least a partial area of an eye retina
KR20010006975A (en) A method for identifying the iris of persons based on the reaction of the pupil and autonomous nervous wreath
CN101547640B (en) Device for producing an iris image
KR20010006976A (en) A system for identifying the iris of persons
JPH11347016A (en) Imaging device
KR200327691Y1 (en) Apparatus for acquiring an iris image
JP2021062162A (en) Scanning type ocular fundus imaging apparatus
KR20170000023U (en) Iris identifying device having optical filter optimized on taking iris image
KR100356600B1 (en) A Method For Identifying The Iris Of Persons Based On The Shape Of Lacuna And/Or Autonomous Nervous Wreath

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ CZ DE DE DK DK DM DZ EC EE EE ES FI FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: COMMUNICATION UNDER RULE 69 EPC (EPO FORM 1205A DATED 24.04.2003)

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP