US20160284068A1 - Method for correcting an image of at least one object presented at a distance in front of an imaging device and illuminated by a lighting system, and an image pickup system for implementing said method
- Publication number: US20160284068A1
- Authority: US (United States)
- Legal status: Granted
Classifications
- G—PHYSICS; G06—COMPUTING, CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/70
- G06T5/94
- G06T5/00—Image enhancement or restoration; G06T5/001—Image restoration; G06T5/002—Denoising; Smoothing
- G06T7/0057
- G06T7/00—Image analysis; G06T7/50—Depth or shape recovery; G06T7/514—Depth or shape recovery from specularities
- G06T7/521—Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
- G06T7/55—Depth or shape recovery from multiple images; G06T7/593—Depth or shape recovery from stereo images
- G06T2207/00—Indexing scheme for image analysis or image enhancement; G06T2207/10—Image acquisition modality; G06T2207/10004—Still image; Photographic image; G06T2207/10012—Stereo images
- G06T2207/20—Special algorithmic details; G06T2207/20004—Adaptive image processing; G06T2207/20012—Locally adaptive
Description
- The present invention relates to a method for correcting an image of at least one object presented at a distance in front of an imaging device and illuminated by an illumination system, as well as an image pickup system for implementing said method.
- Image pickup systems integrating a system for illuminating the scene to be taken are known. Mention can be made, by way of example, of biometric image pickup systems, in which the image of a face, an iris or a fingerprint of a person is used for recognition purposes, and of systems for picking up images of documents, such as identity documents, proofs of residence, game tickets, etc., in which the image of the document or of a part thereof is used for the purpose of recognising the identity of the bearer of said document.
- In the prior art, a system for picking up an image of a flat document is also known, which consists of a flatbed scanner functioning as follows. The document in question is placed flat on a window of the scanner, at a known position determined by said window, and is uniformly illuminated by means of a suitable light source. An imaging device receives the light from the light source that is reflected on the document and delivers, for each pixel of the image, image signals corresponding to the intensity of the reflected light. The light source and the imaging device are for example mounted on a bar that scans the document so as to obtain a complete image of the document. Since the light source is very close to the document, illumination of the document is uniform over its entire surface.
- Likewise, since the imaging device is very close to the document, the light intensity that it receives therefrom corresponds to the light intensity reflected thereby. Thus, because the light source illuminates the document with a light that is uniform in intensity over the surface of the document, the light that is received by the imaging device solely represents the way in which it is reflected on the document. Thus a surface that is not very reflective, and which therefore appears rather black on the object, reflects little light and the corresponding pixels in the image delivered by the imaging device are “dark” pixels. Conversely, a reflective surface, which therefore appears white on the object, reflects more light and the corresponding pixels in the image delivered by the imaging device are “light” pixels.
- The homogeneity of the lighting of documents may moreover be corrected by calibration, in particular because their position is known and does not depend on the document in question.
- A system for picking up an image of at least one object according to the present invention is of the type that comprises an imaging device and a lighting system consisting of at least one light source but, unlike the example of the document scanner cited above, the position of the object or of each object in question is not predetermined. This object is presented at a distance from the imaging device and from the light source or sources of the lighting system and may thus adopt any possible position with respect to each of them, optionally while being deformed. For example, if the object in question is a flat document, such as an identity card, it may be presented at a distance in front of the imaging device and the light source aslant, inclined and slightly warped. In addition, this position of the object changes from one image capture to the next. The same applies to a face, an iris or a fingerprint of a person passing in front of a biometric image pickup system.
- Because of this, the lighting of the object by the lighting system and the capture of the rays reflected on the object by the imaging device are dependent on the position of the object at the moment of picking up an image. Difficulties may result from this, for the subsequent operations of processing the image taken by the image pickup system, because of an unwanted reflection, lack of contrast in certain areas, etc.
- This non-uniformity of lighting of the object results from the fact that the light striking it is dependent on the spatial distribution of the light intensity conferred by the lighting system. This spatial distribution is for example in the form of a lobe of the or each light source constituting said lighting system with a maximum light intensity in the principal axis of the source and a weakened light intensity on moving away from said principal axis. In addition, the light intensity that strikes a point on the object decreases as the square of the distance that separates it from the light source. Thus, the further this point is away from a light source, the lower the light intensity that strikes this point. The intensity of the light that is reflected on the object depends not only on the reflection characteristics of the light according to the surface of the object but also on the inclination of this surface with respect to the axis of the light source. Finally, the light intensity received by the imaging device varies according to the square of the distance that separates the imaging device from each point on the object.
- These phenomena lead to considering that the light that is received under these conditions by an object is not uniform over the entire surface thereof and that the quantity of light received by the imaging device depends not only on the reflectivity of this surface but also on the position of the object with respect to the light source and the imaging device. When the image signals delivered by the imaging device are used for subsequent image processing, such as for example the recognition of characters on the object, this non-uniformity may give rise to problems because of insufficient contrast of certain parts of the image of the object taken by the imaging device.
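- The combined effect of the emission lobe, the inverse-square fall-off and the surface inclination can be made concrete with a short numerical sketch (Python; the cosine-power lobe and all names and values are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def irradiance_at_point(src_pos, src_axis, src_intensity, point, normal, lobe_exp=2.0):
    """Irradiance received at a surface point from a point source with a
    cosine-power emission lobe (hypothetical model): it falls off with the
    square of the distance and with the inclination of the surface."""
    to_point = point - src_pos
    d = np.linalg.norm(to_point)
    w = to_point / d                                   # emission direction
    lobe = max(np.dot(w, src_axis), 0.0) ** lobe_exp   # weaker off-axis
    cos_incidence = max(np.dot(-w, normal), 0.0)       # surface inclination
    return src_intensity * lobe * cos_incidence / d**2

# Two points on a slightly tilted document: the farther, more off-axis point
# receives noticeably less light, which is the non-uniformity described above.
src = np.array([0.0, 0.0, 0.0]); axis = np.array([0.0, 0.0, 1.0])
n = np.array([0.0, 0.2, -1.0]); n /= np.linalg.norm(n)
print(irradiance_at_point(src, axis, 100.0, np.array([0.0, 0.0, 0.3]), n))
print(irradiance_at_point(src, axis, 100.0, np.array([0.15, 0.1, 0.45]), n))
```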
- In general terms, because the illumination of an object by the lighting system and the capture of the rays reflected on the object by the imaging device are dependent on the position of this object at the time the image is picked up, the image taken by the imaging device may not be of sufficient quality for subsequent processing.
- There is therefore, under these circumstances, a need for a method for correcting images taken by an imaging device that produces a corrected image close to an image of said object taken under uniform lighting conditions.
- The aim of the present invention is therefore to propose such a method for correcting images taken by an image pickup system for taking an image of an object presented at a distance that does not have the drawbacks presented above.
- To do this, the present invention relates to a method for correcting an image of an object presented at a distance taken by an imaging device, said object being illuminated by means of a lighting system. It is characterised in that it comprises the following steps:
- a step of estimating a 3D model of the object as it is at the time when an image is taken by said imaging device,
- an image simulation step consisting of simulating the image taken by said imaging device, using said 3D model thus estimated, a model of the illumination of the object by said lighting system, a model of reflection on the surface of an object and a model of said imaging device, in order to obtain a simulated image of said object, and
- a step of correcting the image taken by said imaging device, consisting of modifying the light intensity of each point on the image or of at least one area of interest of the image actually taken by the imaging device according to the light intensity of the corresponding point in the simulated image or the respective light intensities of points adjacent to said corresponding point in the simulated image.
- Advantageously, said correction step includes:
- a calculation step for calculating, for each point of the image or of at least one area of the image taken by said imaging device, a correction factor that is a decreasing function of the light intensity of the corresponding point in the simulated image of said object, and
- a processing step for modifying the light intensity of each point of the image or of at least one area of the image actually taken by said imaging device in proportion to said correction factor.
- According to another feature of the invention, said image correction method further includes:
- a step of determining points on the simulated image for which the light intensity is above a maximum threshold,
- a step of determining points on the simulated image for which the light intensity is below a minimum threshold,
- said correction step not being performed for the points on the image taken by the imaging device corresponding to the points on the simulated image thus determined.
- According to another advantageous feature, it also includes a step of creating a mask corresponding to the points thus determined so that subsequent image processings of the corrected image do not take into account the corresponding points on said corrected image.
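- As a minimal sketch of this correction and masking, assuming the simplest decreasing function F(p) = 1/IS(p) (the above only requires F to be decreasing) and an optional average over points adjacent to the corresponding point of the simulated image; the threshold values and all names are illustrative:

```python
import numpy as np
from scipy.ndimage import uniform_filter  # local average of adjacent points

def correct_image(I_i, I_s, t_min=0.05, t_max=0.98, neighborhood=0):
    """Correct the image I_i actually taken, using the simulated image I_s.
    F = 1/I_s is one possible decreasing function of I_s. Points whose
    simulated intensity lies outside [t_min, t_max] are left uncorrected
    and collected in a mask for subsequent image processings."""
    ref = uniform_filter(I_s, size=neighborhood) if neighborhood > 1 else I_s
    mask = (ref < t_min) | (ref > t_max)        # unlit or specular points
    F = np.where(mask, 1.0, 1.0 / np.clip(ref, 1e-6, None))
    return F * I_i, mask
```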
- Advantageously, said step of determining said 3D model uses a stereoscopic method involving pairs of views taken by said 3D photography system, said 3D photography system consisting either of two cameras taking respectively the views of each pair simultaneously, or a camera taking the views of each pair at two different times. For example, one or other or both of the two cameras of said 3D photography system constitutes said imaging device.
- The present invention also relates to an image pickup system for taking an image of an object presented at a distance in front of an imaging device, said object being illuminated by means of a lighting system. Said system is characterised in that it comprises:
- means for estimating a 3D model of the object as it is at the time said imaging device takes said image,
- image simulation means consisting of simulating the image taken by said imaging device, using said 3D model estimated by said estimation means, a model of the illumination of the object by said lighting system, a model of reflection on the surface of an object and a model of said imaging device, in order to obtain a simulated image of said object, and
- means for correcting the image taken by said imaging device in order to modify the light intensity of each point on the image or of at least one area of interest of the image actually taken by the imaging device according to the light intensity of the corresponding point in the simulated image or the respective light intensities of points adjacent to said corresponding point in the simulated image.
- Advantageously, said image pickup system comprises:
- calculation means for calculating, for each point on the image or on at least one area of the image taken by said imaging device, a correction factor that is a decreasing function of the light intensity of the corresponding point in the simulated image of the said object, and
- processing means for modifying the light intensity of each point on the image or at least one area of the image actually taken by said imaging device in proportion to said correction factor.
- Advantageously, said image pickup system also comprises:
- means for determining points on the simulated image for which the light intensity is above a maximum threshold,
- means for determining points on the simulated image for which the light intensity is below a minimum threshold,
- said correction means not taking into account the points on the image taken by the imaging device corresponding to the points on the simulated image thus determined.
- Advantageously, said image pickup system also comprises means for creating a mask corresponding to the points thus determined by said determination means, said mask being designed so that subsequent image processings of the corrected image do not take into account the corresponding points of said corrected image.
- The features of the invention mentioned above, as well as others, will emerge more clearly from a reading of the following description of an example embodiment, said description being given in relation to the accompanying drawings, among which:
-
FIG. 1 is a block diagram of an image pickup system according to a first embodiment of the invention, -
FIG. 2 is a diagram illustrating the lighting simulation that is used by the invention, -
FIG. 3 is a block diagram of an image pickup system according to a second embodiment of the invention, -
FIG. 4 is a diagram showing the various steps implemented by a method for taking an image of an object according to the present invention, and -
FIG. 5 is a block diagram of a computer system able to implement a method for taking an image according to the present invention. - The image pickup system depicted in
FIG. 1 consists of animaging device 10 that is designed to deliver image signals II, a lighting system consisting of at least one light source (in this case, asingle light source 20 depicted schematically in the form of a bulb forming the light source), and aprocessing unit 30. - The
imaging device 10 can be designed to take an image or a series of images such as a video sequence. In this case, the image II involved in the present description is each image in the video sequence. - According to the invention, the
processing unit 30 comprises anestimation unit 31 for estimating a 3D model of anobject 40 as it is at the time T0 of the taking of the image done by theimaging device 10. - In the embodiment depicted, the
estimation unit 31 comprises amodelling unit 32 for determining a 3D model of theobject 40. Thismodelling unit 32 comprises firstly a 3Dimage pickup apparatus 34 and secondly aprocessing unit 35 designed to carry out the 3D modelling proper. - The 3D
image pickup apparatus 34 may be any type known to persons skilled in the art, such as a structured-light system or a time-of-flight camera. In the embodiment that is the subject of the present invention and which is given by way illustration, the 3Dimage pickup apparatus 34 consists of a stereoscopic system comprising twocameras - To illustrate the invention, in
FIG. 1 , in front of thecameras flat object 40. Thisobject 40 is presented at a distance in front of theimaging device 10 and in front of a lighting system, here consisting by way of example of a single light source 20 (that is to say presented at a distance in front of theimaging device 10 and thelighting system 20 without thisobject 40 resting on a reference surface, such as a scanner window, and without its position being previously known). It is also presented in front of the 3Dimage pickup apparatus 34, here thecameras - The present invention is also relevant if a plurality of
objects 40 IS presented in front of the 3Dimage pickup apparatus 34. - The
processing unit 35 is designed to receive the image signals from the3D photography system 34, for example thecameras object 40 is detected in front of them (a detection made either by theprocessing unit 30 or by any suitable detection means), determines, from these image signals, a 3D model of theobject 40 in front of it. - The functioning of the
processing unit 35 is explained below in the case where theimage pickup apparatus 34 has twocameras cameras processing unit 35. These intrinsic parameters are for example given by coefficients of a matrix K. Likewise, the extrinsic parameters of one of the cameras, for example thecamera 342, with respect to the other one in the pair, are determined and used by theprocessing unit 35. - The image I1 delivered by the
camera 341 and the image I2 delivered by thecamera 342 have been depicted in the box of theprocessing unit 35. Theimages object 40 and the images P1 and P2 of any point P on theobject 40 can be seen therein respectively. - The images respectively thus formed of a point P of coordinates (x, y, z) are respectively points P1 and P1 of coordinates (u1, v1) and (u2, v2) in the respective images of I1 and I2 that satisfy the following equations:
-
- where the matrix [R12 T12] (R12 is a rotation matrix and T12 is a translation matrix) expresses the extrinsic parameters of the
camera 342 with respect to thecamera 341 and λ1 and λ2 are unknown factors representing the fact that an infinity of antecedent points corresponds to the same image point P1 and P2. I3 is the unity matrix of dimensions 3×3. - The
processing unit 35 is designed to match image points P1 and P2 of the images respectively taken by thecameras - The above equations show that, with regard to each pair of image points (P1, P2) thus matched, there is a linear system of 6 equations with only 5 unknowns, which are respectively the two factors λ1 and λ2 and the three coordinates x, y, z of the same antecedent point P of these image points P1 and P2. It is therefore possible, from the images supplied by the calibrated
cameras object 40 in front of them. - What is here called a 3D model of an object is a set of discrete points P of coordinates (x, y, z) that belong to the real object. Other points that also belong to the real object but which do not belong to this set of discrete points may be obtained by extrapolation of points of the 3D model.
- As a supplement to what has just been stated here, the work by Hartley, Richard and Andrew Zisserman entitled “Multiple View Geometry In Computer Vision”, Cambridge, 2000, can be consulted, in particular for the disclosure of the above mathematical model of the method for modelling by stereoscopy by means of two calibrated cameras.
- The
processing unit 35 is designed to control at least one image taking made by the 3Dimage pickup apparatus 34 at a time t (which may be the time T0 animage 10 is taken) or at successive times t0 to tn, and to establish, according to the method disclosed above, a 3D model of theobject 40 at the corresponding time t. If several images are taken at successive times t0 to tn (as is the case inFIG. 1 ), anextrapolation unit 33 is able to establish the movement of the object 40 (translation and rotation) and to estimate its position at the predetermined time t0. This position will be referred to hereinafter as the “position at the image taking time T0”. What is referred to as the “position of theobject 40” is the coordinates of each of the points that constitute the 3D model. - The
processing unit 30 also comprises asimulation unit 36 that is designed to simulate an image taken by theimaging device 10 having regard to the illumination of theobject 40 by thelight source 20. Thus thesimulation unit 36 estimates a quantity of light issuing from thelight source 20, then the quantity of light reflected by said object in the direction of theimaging device 10, and finally the quantity of light received by theimaging device 10. To do this, this image simulation uses the 3D model that was previously determined by theestimation unit 31, an illumination model of thelighting system 20, a model of reflection of the light on an object, and a model of theimaging device 10. It can also use various algorithms, such as the radiosity algorithm, analytical photometry algorithms, the ray-tracing algorithm, etc. - The lighting model defines the or each
light source 20 of the lighting system, for example by their intrinsic parameters representing the distribution of the intensity of the light in the various directions in space and the characteristics of this light (spectrum, polarisation, etc) as well as their extrinsic parameters (position of thelight source 20 considered with respect to theimaging device 10 and to the 3Dimage pickup apparatus 34. - The model of reflection on an object used is for example a Lambertian model for perfectly diffusing reflections or a Phong model for diffusing reflections and specular reflections.
- The model of the imaging device may be a simplified so-called “pinhole” model with a lens in the form of a hole or pinhole and a sensor on which the image of the scene is formed.
- The result of the image simulation carried out by the
simulation unit 36 is a digital image that is, in the remainder of the description, referred to the simulated image IS. - To illustrate an example of an image-simulation method that could be used by the
simulation unit 36,FIG. 2 depicts anobject 40, animaging device 10 in the form of a pinhole model with itslens 11 and asensor 12, as well as a lighting system consisting here of alight source 20. The lighting model defines thelight source 20 as emitting rays in all directions in a given spatial distribution (the light intensity emitted I varies according to the direction ω of the ray in question: I(ω1) for the ray R1 and I(ω2) for the ray R2). For each ray emitted by it, a point of impact on theobject 40 is calculated (such as the points P and Q for the respective rays R1 and R2) in accordance with rules of geometry (intersection of the estimated 3D model of theobject 40 with the straight line carrying the ray in question). The light intensity received at a point of impact on theobject 40 depends on the light intensity I(ω1) of thelight source 20 in the direction ω in question and the distance that separates this point from thelight source 20. - It is possible to limit this calculation solely to the rays that give a point of impact on the
object 40 or even solely to the rays that give a point of impact inside one or more areas of interest of theobject 40. - Depending on the reflection model used, Lambertian model, Phong model, or the like, at each point of impact a plurality of rays are re-sent in all possible directions with intensities varying with these directions. For example,
FIG. 2 depicts only, at points P and Q, three rays respectively issuing from the rays R1 and R2. Each of these re-emitted rays, if it is captured by the imaging device 10, forms a point of impact on the sensor 12 and gives thereto a light intensity that is a function not only of the reflection on the object but also of the distance that separates this point of impact from the previous point of impact on the object. - Thus it is possible to determine, from point to point, the total light intensity at each point on the
sensor 12, which depends on the values of the light intensities for each ray reflected on the object 40 and therefore for each incident ray issuing from the light source 20. The result of this simulation is a simulated image IS.
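Putting the previous paragraphs together, a deliberately simplified simulation under a purely Lambertian model might look as follows. Everything here is an illustrative reconstruction, not the patent's code: it iterates over the points of the estimated 3D model rather than over emitted rays, ignores occlusions and specular terms, and assumes inverse-square attenuation on both legs of the light path.

```python
import numpy as np

def simulate_image_lambert(points, normals, light_pos, light_intensity,
                           focal_length, sensor_shape, pixel_pitch, k_d=1.0):
    """Toy version of the simulation step: points and normals of the 3D
    model are given in camera coordinates (pinhole at the origin, optical
    axis along +Z).  Each point receives light from a single point source
    with inverse-square falloff, re-emits it diffusely (Lambert), and the
    part travelling towards the pinhole is accumulated on the sensor pixel
    it projects to."""
    IS = np.zeros(sensor_shape)
    for p, n in zip(points, normals):
        to_light = light_pos - p
        d_light = np.linalg.norm(to_light)
        cos_in = max(0.0, float(np.dot(n, to_light / d_light)))
        received = light_intensity * cos_in / d_light**2   # light reaching the point

        d_cam = np.linalg.norm(p)
        reflected = k_d * received / d_cam**2              # diffuse part reaching the pinhole

        if p[2] <= 0:                                      # point behind the camera
            continue
        u = int(sensor_shape[1] // 2 + focal_length * p[0] / p[2] / pixel_pitch)
        v = int(sensor_shape[0] // 2 + focal_length * p[1] / p[2] / pixel_pitch)
        if 0 <= v < sensor_shape[0] and 0 <= u < sensor_shape[1]:
            IS[v, u] += reflected
    return IS
```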
- The processing unit 30 also comprises a correction unit 37, the essential function of which is to modify at least one characteristic, such as the light intensity, of each point of the image or of at least one area of interest of the image actually taken by the imaging device according to the light intensity of the corresponding point in the simulated image or of a set of points adjacent to said corresponding point in the simulated image. - For example, the function of the
correction unit 37 is to correct the image II (see FIG. 1) that is actually taken by the imaging device 10 so that the resulting corrected image IC is representative of the object 40 as if it were uniformly illuminated by the light source 20 and as if the imaging device 10 received the light intensity directly reflected at each of the points of the object 40 or of a part of interest of the object 40. - To do this, the darker an area of the simulated image IS, the more the corresponding area of the image II actually taken by the
imaging device 10 will be raised in light intensity, and conversely, the lighter this area of the simulated image IS, the darker will be the corresponding area of the image II actually taken by the imaging device 10. - For example, if a point p on the image actually taken by the
imaging device 10 has a light intensity II(p) and the corresponding point on the simulated image has a light intensity IS(p), the light intensity IC(p) at this point p in the corrected image is given by the equation:

IC(p) = F(p) × II(p), with for example F(p) = 1/IS(p)

- where F(p) is the correction factor applied at the point p of the image actually taken by the imaging device 10. - In general terms, the correction factor F(p) is a decreasing function of the light intensity of the point in the simulated image of said object corresponding to the point p.
- It may also be a decreasing function of the light intensity of a plurality of points adjacent to the point on the simulated image corresponding to the point p of the image actually taken by the imaging device, the intensity of which is to be corrected. For example, in the above equation, IS(p) may be the average of the respective light intensities of the points on the simulated image adjacent to the point on the simulated image corresponding to the point p, and the correction factor F(p) is therefore a decreasing function of the average of these intensities.
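In NumPy terms, both variants of this correction could be sketched as follows. The function name and window size are placeholders, and F(p) = 1/IS(p) is the example form given above:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def correct_image(II, IS, use_neighbourhood=True, size=5, eps=1e-6):
    """Apply the correction IC(p) = F(p) * II(p) with F(p) = 1 / IS(p).

    If `use_neighbourhood` is set, IS(p) is replaced by the average of the
    simulated intensities over a size x size window around p, as suggested
    in the text.  `eps` guards the division; the thresholding described
    later in the document is the cleaner way to handle near-zero values.
    """
    denom = uniform_filter(IS, size=size) if use_neighbourhood else IS
    F = 1.0 / np.maximum(denom, eps)    # decreasing function of IS(p)
    return F * II
```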
- The simulation unit 36 may determine areas of the object in which specularities are present, that is to say areas that reflect, without significant diffusion, the rays issuing from the light source 20, the rays thus reflected directly striking the imaging device 10. These areas cannot generally be used, in particular for character recognition, since the quantity of light thus reflected causes saturations of the digital image produced. In order to warn the user to modify the position of the object 40 for further photographing by the imaging device 10, it is possible to provide an alarm 50 that is then triggered when such specularities occur. - In another embodiment, when specularities are detected by the
simulation unit 36, a mask is created on the image II actually taken by the imaging device 10 in order not to take into account, during subsequent image-correction processing, the areas of the object 40 where these specularities occur. - More generally, the image correction can be applied only for the points on the simulated image having values that lie between a maximum threshold value and a minimum threshold value. Thus, in the case where a specularity is detected in the image simulated by the
simulation unit 36, the image actually taken by the imaging device 10 will not be able to be corrected for the image points in question. Likewise, in the case where the level in the simulated image is too low, or even zero, the image actually taken by the imaging device 10 cannot be corrected. - In addition, a mask of the points on the corrected image will be able to be created for subsequent use by image-analysis algorithms. Such a mask can enable the image-analysis algorithms used subsequently, for example to calculate statistics on the image, not to take into account the points on the corrected image that have not been corrected because their values do not lie between said two threshold values. These points correspond either to specularities or to areas of very low (or even zero) light intensity.
- This approach is one way of managing the numerical problem of division by zero, but above all it gives the image-analysis algorithms information on the reliability of the image points processed.
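A sketch of this thresholding and masking logic; the threshold names are placeholders, but the rule itself (correct only where t_min < IS(p) < t_max, and keep the complementary mask for later analysis) is the one described above:

```python
import numpy as np

def correction_mask(IS, t_min, t_max):
    """Points of the simulated image usable for correction: neither
    saturated by a specularity (>= t_max) nor too dark (<= t_min)."""
    return (IS > t_min) & (IS < t_max)

def correct_with_mask(II, IS, t_min, t_max):
    valid = correction_mask(IS, t_min, t_max)
    IC = II.astype(float)               # uncorrected values kept where invalid
    IC[valid] = II[valid] / IS[valid]   # F(p) = 1/IS(p), applied only on valid points
    return IC, valid                    # `valid` doubles as the mask M used later
```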
- In a particular embodiment, the simulated image will have a maximum value of 1 over the useful area corresponding to the object in question and outside any specular reflection area. The image actually taken by the imager will therefore not be modified, or only a little, at the points corresponding to the most illuminated parts of the object that do not cause a specular reflection (values of the simulated image close to 1), and its other points will be raised all the more, the weaker the illumination indicated by the value of the simulated image at the corresponding points.
- In another embodiment, the simulated image will be calculated to within a factor, and the dynamic range of the corrected image will be determined according to the quantisation of the corrected image required, for example from 0 to 255 for quantisation in 8 bits, and according to a criterion aimed at exploiting the entire dynamic range of intensities possible for the corrected image.
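For instance, a corrected image known only to within a factor could be stretched over the full 8-bit range as below. This is a minimal sketch; using only the reliably corrected points to set the scale is an assumption, not a requirement stated in the text:

```python
import numpy as np

def quantise_corrected(IC, valid, bits=8):
    """Stretch the corrected image over the full output dynamic range
    (0..255 for 8 bits), using only the reliably corrected points to set
    the scale."""
    lo, hi = IC[valid].min(), IC[valid].max()
    scale = (2**bits - 1) / max(hi - lo, 1e-12)
    return np.clip((IC - lo) * scale, 0, 2**bits - 1).astype(np.uint8)
```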
- Having regard firstly to the precision of the spatial sampling of the estimation of the 3D model of the object of interest, and secondly to the reflection models proposed here, which correspond to a hypothesis of homogeneous material (that is to say they do not take account of local variations in the reflection of the object, for example created by patterns printed on the object in question or by furrows on its surface, such as furrows on a skin), the simulated image can be considered to be an approximation, at a certain scale, of the image that would be taken by the imaging device of an object having the same 3D characteristics at the scale of the simulation and formed from a homogeneous material corresponding to the properties of the reflection model adopted. The correction that is the subject matter of the present invention is then a correction towards uniformity of the image of the object vis-à-vis its positioning and the characteristics of the lighting system and of the photography system.
FIG. 3 depicts another embodiment of an image pickup system of the invention, according to which the imaging device 10 is replaced by one of the two cameras 341 or 342 (in this case the camera 342, which has an output for the image signals of the image II taken at time T0). - Likewise, each
camera 341, 342 can be treated as an imaging device 10, the method of the present invention then consisting of correcting the images actually taken by each imaging device 10.
FIG. 4 depicts a diagram illustrating an image pickup method according to the invention that is advantageously used by means of an image pickup system such as the one depicted in FIG. 1 or the one depicted in FIG. 3. - Step E10 is a step of estimating a 3D model of the object as it is at time T0 of the taking of the image by said
imaging device 10. More precisely, step E10 comprises a step E110 of detecting an object in the scene and a step E120 that is the step of estimating a 3D model of the object strictly speaking. Step E120 itself comprises a step E121 of taking images by means of a 3D image pickup apparatus, such as the apparatus 34, and a step E122 of determining the 3D model of the object that was taken at step E121 by the 3D image pickup apparatus 34. - Step E20 is a step of simulating the image of the
object 40 at time T0, on the basis of the 3D model estimated at step E10, of a model of the lighting of the object including intrinsic and extrinsic parameters of the lighting system 20, of a reflection model of the object or of each part of the object able to be taken by the imaging device 10, and of a model of the imaging device 10. The result of this step E20 is a simulated image IS. This image is a digital image that is characterised by a set of pixels, to each of which a numerical value relating to the light intensity of this pixel is allocated. In one embodiment of the invention, only one or more areas of interest of the object 40 are processed during the simulation step E20. - Step E30 is a step of correcting the image II or of an area of interest of said image II taken by said
imaging device 10, at the end of which a corrected image IC is obtained. - This
step E30 consists of considering a point on the image II taken by the imaging device 10. Only the points on the image taken by the imaging device 10 that are in one or more areas of interest may be considered. This correction step E30 consists of modifying the light intensity of each point of the image II, or of at least one area of interest of the image II actually taken by the imaging device 10, according to the light intensity of the corresponding point in the simulated image IS or the respective light intensities of points adjacent to said corresponding point in the simulated image IS. For example, it consists of correcting the image II taken by the imaging device 10 and, more exactly, of increasing the light intensity of each point on the image II, or of at least one area of interest of the image II actually taken by the imaging device 10, to a greater extent, the lower the light intensity of the corresponding point in the simulated image IS, and/or of reducing the light intensity of each point on the image II, or of at least one area of interest of the image II actually taken by the imaging device 10, to a greater extent, the higher the light intensity of the corresponding point in the simulated image IS. - More precisely, the correction step E30 includes a calculation step E31 for calculating, for each point on the image or on at least one area of the image taken by said imaging device, a correction factor that is a decreasing function of the light intensity of the corresponding point in the simulated image of said object or of the respective light intensities of points adjacent to said corresponding point in the simulated image IS, and a processing step E32 for modifying the light intensity of the point on the image II or on at least one area of the image actually taken by said
imaging device 10 in proportion to said correction factor. - In a particular embodiment, said method also comprises a step E40 of determining points on the simulated image for which the light intensity is above a maximum threshold and a step E50 of determining points on the simulated image for which the light intensity is below a minimum threshold. According to this particular embodiment, said correction step E30 is not implemented for the points on the image taken by the imaging device corresponding to the points on the simulated image thus determined at steps E40 and E50.
- In another embodiment, said method also includes a step E60 of creating a mask M corresponding to the points thus determined at steps E40 and E50, so that subsequent image-processing operations on the corrected image do not take into account the corresponding points on said corrected image.
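Chaining steps E10 to E60 together, the whole method might be orchestrated as below. This is an illustrative skeleton only: the 3D-estimation step is passed in as a callable because its internals (steps E110 to E122) depend on the 3D image pickup apparatus, and the helpers reuse the hypothetical sketches given earlier in this section.

```python
def image_pickup_method(estimate_3d_model, captures, II,
                        light_pos, light_intensity,
                        focal_length, pixel_pitch, t_min, t_max):
    """Skeleton of the method of FIG. 4, with hypothetical helper names."""
    points, normals = estimate_3d_model(captures)           # E10 (E110, E121, E122)
    IS = simulate_image_lambert(points, normals,             # E20
                                light_pos, light_intensity,
                                focal_length, II.shape, pixel_pitch)
    IC, mask = correct_with_mask(II, IS, t_min, t_max)       # E30 with E40/E50 thresholds
    return IC, mask                                          # mask = the mask M of step E60
```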
- This method is implemented by a processing unit such as the
processing unit 30 in FIGS. 1 and 3. - Advantageously, the
units of the processing unit 30 are implemented by means of a computer system such as the one shown schematically in FIG. 5. - This computer system comprises a
central unit 300 associated with a memory 301 in which there are stored firstly the code of the instructions of a program executed by the central unit 300 and secondly data used for executing said program. It further comprises a port 302 for interfacing with the 3D image pickup apparatus 34, for example with the cameras 341 and 342 in FIGS. 1 and 3, and a port 303 for interfacing with the imaging device 10. The central unit 300 and the ports 302 and 303 are connected by means of a bus 304. - The
central unit 300 is capable of executing instructions loaded into the memory 301 from, for example, storage means such as an external memory, a memory card, a CD, a DVD, a server of a communication network, etc. These instructions form a computer program causing the implementation, by the central unit 300, of all or some of the steps of the method of the invention that was described in relation to FIG. 4. - Thus all or some of the steps may be implemented in software form by the execution of a set of instructions by a programmable machine, such as a DSP (digital signal processor) or a microcontroller. All or some of the steps may also be implemented in hardware form by a machine or a dedicated component, such as an FPGA (field-programmable gate array) or an ASIC (application-specific integrated circuit).