US20080231835A1 - Divergence ratio distance mapping camera - Google Patents

Divergence ratio distance mapping camera

Info

Publication number
US20080231835A1
Authority
US
United States
Prior art keywords
target objects
distance
light sources
dimensional information
obtaining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/690,503
Inventor
Keigo Iizuka
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to US11/690,503
Priority to PCT/CA2008/000502
Priority to US12/532,644
Publication of US20080231835A1
Legal status: Abandoned

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/254Image signal generators using stereoscopic image cameras in combination with electromagnetic radiation sources for illuminating objects
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/02Systems using the reflection of electromagnetic waves other than radio waves
    • G01S17/06Systems determining position data of a target
    • G01S17/08Systems determining position data of a target for measuring distance only
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S17/00Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S17/88Lidar systems specially adapted for specific applications
    • G01S17/89Lidar systems specially adapted for specific applications for mapping or imaging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/586Depth or shape recovery from multiple images from multiple light sources, e.g. photometric stereo
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/207Image signal generators using stereoscopic image cameras using a single 2D image sensor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Abstract

The present invention relates to a method and system for detecting and mapping three-dimensional information pertaining to one or more target objects. More particularly, the invention consists of selecting one or more target objects, illuminating the one or more target objects using a first light source and capturing an image of the one or more target objects, then illuminating the same one or more target objects using a second light source and capturing an image of the one or more target objects, and lastly calculating the distance between the midpoint of the two light sources and the one or more target objects based on the decay of light intensity over distance, by analyzing the ratio of the image intensities on a pixel by pixel basis.

Description

    FIELD OF THE INVENTION
  • The present invention relates to a method and system for detecting and mapping three-dimensional information pertaining to an object. In particular, the invention relates to a method and system that makes use of the divergence of light over distance as a means of determining distance.
  • BACKGROUND OF THE INVENTION
  • Distance mapping or depth mapping cameras have become ubiquitous in numerous fields such as robotics, machine vision for acquiring three-dimensional (3D) information about objects, intelligent transport systems for assisting driver safety and navigation, bioscience for detecting 3D laparoscopic images of internal organs, non-contact fingerprinting, and image manipulation in movie or television studios.
  • To achieve the goal of distance mapping an object in order to acquire its 3D information, numerous methods have been developed. The triangulation method uses two or more images taken by strategically placed cameras to calculate the position of the target object. The 3D information is obtained by synchronizing the movement of the light projection spot with the direction of the return path of the light scattered to the detector. This triangulation method is limited in that it is too slow and generally cannot provide for the real-time operation of a television camera.
  • The time of flight method makes use of the time required for a round trip of a laser beam using a phase or frequency modulated probe light. A heterodyne detection converts the phase or frequency information into the distance to the target object. While depth resolution can be within micrometers, time of flight methods can be limited to the order of minutes in providing a depth map of a target object.
  • Projection methods determine depth information from the patterns of configured light projected onto a target object. The best known projection method is the moiré technique. The moiré technique incorporates two grid patterns projected before and after the surface distortion to generate a moiré pattern of the deformed surface. While a moiré pattern can be readily generated, the corresponding distance calculations are not so readily performed. The distance is calculated in a manner similar to applying triangulation at every intersection of the pattern.
  • The AXI-VISION CAMERA™ method as described in U.S. Pat. No. 7,0165,519 B1 is based on a hybrid of the projection and time of flight methods. The projecting light is temporally rather than spatially modulated. To acquire the depth pattern, an instantaneous time of flight pattern is captured using an ultra-fast shutter. Distance is then calculated at every pixel, providing picture quality comparable to that of High Definition Television (HDTV). To achieve its results, the AXI-VISION CAMERA™ method requires a large number of fast-response-time LEDs and a photomultiplier-based shutter, all of which are secured to the AXI-VISION CAMERA™.
  • The object of the present invention is to provide a method and device for detecting and mapping three-dimensional information pertaining to one or more target objects while further addressing the limitations of the prior art.
  • SUMMARY OF THE INVENTION
  • A method of obtaining three-dimensional information of one or more target objects is provided, including the steps of: (1) selecting one or more target objects; (2) illuminating the one or more target objects using a first light source at a distance X1 from the target(s) and capturing an image I1 of the one or more target objects; (3) illuminating the one or more target objects using a second light source at a distance X2 from the target(s) and capturing an image I2 of the same one or more target objects; (4) calculating the distance X between the two light sources and the one or more target objects based on the decay of intensities of light sources over distances X1 and X2 using the ratio of the image intensities between images I1 and I2.
  • In accordance with an aspect of the method of the invention, the first image I1 and the second image I2 are stored on a storage medium known by individuals skilled in the art; the distance X at the midpoint between the two light sources and the one or more target objects is calculated by analyzing images I1 and I2 on a pixel by pixel basis; the calculated pixel distance information is stored using a known coordinate storage medium. In accordance with the method of the invention, the distance between the center of the two light sources and one or more target objects can be calculated based on the principle that the intensity of a light source decays with the inverse square of the distance traveled. A pair of light sources whose divergence factor is 1/rⁿ, where n can be either a positive or negative number (including non-integers), may be used. So long as the divergence attenuation over distance is known beforehand, a pair of sources may be used.
  • In accordance with the method of the invention, the one or more objects are illuminated using another pair of additional light sources for reduction of the impact of shadow on the measurement of light intensity.
  • In another aspect of the invention a distance mapping system is provided comprising: one or more target objects; at least one camera device and at least two light sources, and at least one computer device linked to the camera that is operable to (a) capture digital frame information, and (b) calculate distance X between the center of the light sources and one or more target objects based on the method of the present invention.
  • In another aspect of the invention, a distance mapping system is provided further comprising: a video microcontroller, which is operable to signal the front light source to illuminate when the camera device is in the even field and an image I1 is captured and stored. The microcontroller is operable to signal the back light source to illuminate when the camera device is in the odd field and an image I2 is captured and stored.
  • In yet another aspect of the present invention, a distance mapping system is provided wherein for the purposes of real time 3D information gathering, the sources are of the same type (i.e. acoustic sources or light sources).
  • In yet another aspect of the present invention, a distance mapping system is provided wherein to minimize the effect of light source shadowing the system further comprises an additional pair of light sources.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • A detailed description of the preferred embodiment(s) is (are) provided herein below by way of example only and with reference to the following drawings, in which:
  • FIG. 1 a illustrates the distance mapping apparatus capturing an image I1 of a target object using the first illuminating device as a light source.
  • FIG. 1 b illustrates the distance mapping apparatus capturing an image I2 of a target object using the second illuminating device as a light source.
  • FIG. 1 c illustrates the amplitude ratio between I1 and I2.
  • FIG. 2 further illustrates the geometry of the divergence ratio distance mapping camera.
  • FIG. 3 is a graph that illustrates the relationship between amplitude ratio R and the distance r/d.
  • FIG. 4 illustrates the ability to move the divergence ratio distance mapping camera to an arbitrary location.
  • FIG. 5 illustrates the double illuminator sets for eliminating shadows.
  • FIG. 6 illustrates a more detailed apparatus for the divergence ratio distance mapping camera.
  • FIG. 7 a image taken with front illumination.
  • FIG. 7 b image taken with back illumination.
  • FIG. 7 c photograph of the object
  • FIG. 7 d measured depth profile of the object.
  • In the drawings, preferred embodiments of the invention are illustrated by way of example. It is to be expressly understood that the description and drawings are only for the purpose of illustration and as an aid to understanding, and are not intended as a definition of the limits of the invention. It will be appreciated by those skilled in the art that other variations of the preferred embodiment may also be practiced without departing from the scope of the invention.
  • DETAILED DESCRIPTION OF THE INVENTION
  • FIG. 1 a illustrates the distance mapping apparatus 1 capturing an image I1 of a target object 3 using a first illuminating device 5 as a light source. The first illuminating device 5 illuminates the target object 3 and the camera device 7 captures an image I1 that is stored by the system (see FIG. 6).
  • FIG. 1 b illustrates the distance mapping apparatus 1 capturing an image I2 of a target object 3 using a second illuminating device 9 as a light source. The second illuminating device 9 illuminates the target object 3 and the camera device 7 captures an image I2 that is stored by the system (see FIG. 6).
  • FIG. 1 c illustrates the amplitude ratio between I1 and I2. As further explained (see FIG. 2), through the derivation of the equation to calculate distance, the present invention functions by comparing the relative image intensities between I1 and I2 on a pixel by pixel basis. FIG. 1 c demonstrates a graph wherein the relative image intensities between I1 and I2 have been plotted providing the amplitude ratio.
  • FIG. 2 further illustrates the geometry of a divergence ratio distance mapping camera apparatus, in accordance with one aspect of the present invention. The apparatus is set up in the following manner: a camera device 7 is at a distance r from the target object 3, a first illuminating device 5 labelled LED s1 is at a distance r−d from the target object 3, and a second illuminating device 9 labelled LED s2 is at a distance r+d from the target object 3. The camera device 7 is also linked to, or incorporates, a processor 11 which is operable to compute the distance to the target object 3 from the relative image intensities of I1 and I2. As mentioned, the camera device 7 firstly captures an image I1 of the target object 3 using the first illuminating device 5 LED s1 as a light source. This image I1 is stored by a frame grabber 21 (see FIG. 6) of processor 11 (the frame grabber 21 being hardwired to processor 11 or incorporated into computer programming made accessible to processor 11). The camera device 7 then captures an image I2 of the target object 3 using the second illuminating device 9 LED s2 as a light source. This image I2 is also stored by the frame grabber 21 of the processor 11.
  • In order to calculate the distance to the target object 3, the processor 11 is operable to compare the relative image intensities of I1 and I2 on a pixel by pixel basis. Before this comparison can be performed, the processor 11 calculates the image intensity of I1 using the first illuminating device 5 LED s1 as a light source as well as calculating the image intensity of I2 using the second illuminating device 9 LED s2 as a light source.
  • In one embodiment of the present invention, the pair of light sources 5, 9 that are used are infra red point light sources. It is commonly known to those skilled in the art that the intensity of a point light source decays with the square of the distance due to the divergence property of light. Therefore the intensity of the light from the illuminating device 5 LED s1 directed at the target object 3 located at a distance r from the camera device 7 is:
  • I_in = P0/[4π(r − d)²]  (1)
  • where P0 is the power of the point source, the first light source 5 LED s1. In addition, the target object 3 reflects light back towards the camera device 7. The amount of the reflection is characterized by the radar term back scattering cross section σ. The light power associated with the back scattering toward the camera device 7 is:
  • P_sc = σP0/[4π(r − d)²]  (2)
  • Since the reflected light that is propagating back to the camera device 7 also obeys the divergence property, the intensity of the reflected light decays with the square of the distance resulting in the following light intensity equation for I1:
  • I1 = σP0/([4π(r − d)]² r²)  (3)
  • In a similar manner the light intensity equation for the image I2 of the target object 3 using the second illuminating device 9 LED s2 as a light source is derived, by simply replacing r−d by r+d in Eq. (3) in the following manner:
  • I2 = σP0/([4π(r + d)]² r²)  (4)
  • As one can clearly see, Eqs. (3) and (4) share a number of common factors, and substituting these two equations into the amplitude ratio R equation:

  • R = √(I1/I2)  (5)
  • results in the following reduced equation for the amplitude ratio R:

  • R=(r+d)/(r−d)  (6)
  • Rearranging Eq. (6) to solve for the value of interest r, that being the distance between the camera device 7 and the target object 3 results in the following equation:

  • r=d(R+1)/(R−1)  (7)
  • Of special note, factors such as the back scattering cross section σ of the target object 3, the point source light power P0 (assuming that both point sources, first light source 5 LED s1 and second light source 9 LED s2, have equivalent power), and the r⁻² divergence of the return light reflected from the target object 3 towards the camera device 7, all of which appear in both Eqs. (3) and (4), cancel out of the calculation, so accurate distance is measured regardless of the target object's colour and texture.
  • Using the derived equation for distance Eq. (7), the processor 11 then determines the distance of each pixel on a pixel by pixel basis and is operable to store the information in a coordinate system distance map for the target object 3 in a manner that is known to those skilled in the art. This distance map for the target object 3 contains all of the pixel positional information of the target object 3.
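The per-pixel computation of Eqs. (5) and (7) can be sketched compactly in code. The following Python fragment is an illustrative sketch only, not the implementation disclosed in the patent; the function name, the small eps guard against division by zero, and the optional exponent n (covering the 1/rⁿ sources mentioned in the summary) are assumptions added for the example.

```python
import numpy as np

def depth_map(I1, I2, d, n=2.0, eps=1e-6):
    """Per-pixel distance from two divergence-ratio images (illustrative sketch).

    I1, I2 : 2-D intensity arrays captured with the front (distance r - d)
             and back (distance r + d) light sources.
    d      : half the separation between the two light sources.
    n      : divergence exponent of the sources; n = 2 (point sources)
             reproduces Eq. (7), other values follow the 1/r^n case mentioned
             in the summary (a generalization assumed for this sketch).
    """
    I1 = np.asarray(I1, dtype=float)
    I2 = np.asarray(I2, dtype=float)
    # Amplitude ratio, Eq. (5): R = sqrt(I1 / I2)
    R = np.sqrt(np.maximum(I1, 0.0) / np.maximum(I2, eps))
    # For 1/r^n illumination, I1/I2 = [(r + d)/(r - d)]^n, so Q = (r + d)/(r - d)
    Q = R ** (2.0 / n)
    # Eq. (7): r = d (Q + 1)/(Q - 1); clamp pixels where Q is close to 1
    return d * (Q + 1.0) / np.maximum(Q - 1.0, eps)
```

In the arrangement of FIG. 6, the two arrays would simply be the frames I1 and I2 held in the frame grabber 21.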
  • FIG. 3 is a plot that illustrates the relationship between amplitude ratio R and the distance r/d. The sensitivity of the measurement is optimum near the origin and reduces as the asymptote is approached. It is also interesting to note that Eq. (7) can be rewritten in the following manner:
  • (x/d − 1)(R − 1) = 2  (8)
  • and the exchange of coordinates between x/d and R gives the same curve shape. As shown by FIG. 3, the curve is symmetric with respect to a 45 degree line, and there is the same asymptote of unity in both the R and the r/d axes.
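The step from Eq. (7) to Eq. (8) is a short piece of algebra; it is spelled out below (in LaTeX, as a reading aid only), with x standing for the measured distance r plotted in FIG. 3:

```latex
\frac{x}{d} = \frac{R+1}{R-1}
\;\Longrightarrow\;
\frac{x}{d} - 1 = \frac{(R+1)-(R-1)}{R-1} = \frac{2}{R-1}
\;\Longrightarrow\;
\left(\frac{x}{d} - 1\right)(R-1) = 2 .
```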
  • FIG. 4 illustrates the ability to move the divergence ratio distance mapping camera or camera device 7 to an arbitrary location. For ease of equation derivation, the ratio distance mapping camera apparatus was previously described with the camera device 7 in-line with the two illuminating devices: first light source 5 LED s1 and second light source 9 LED s2. As indicated by FIG. 4 and will be demonstrated by the following equation derivation, the camera device 7 may be placed in an arbitrary location and the actual distance that is being measured is between the target object 3 and the center of the two illuminating devices (first light source 5 LED s1 and second light source 9 LED s2). As depicted in FIG. 4, the position of the camera device 7 has been relocated from the center of the two illuminating devices (first light source 5 LED s1 and second light source 9 LED s2) to an arbitrary location (x1,z1) in the x-z plane. Taking the origin (0,0) of this coordinate system to be the center of the light point sources, the target object 3 is located along the z-axis at coordinate (0,z). The separation between the two light point sources (first light source 5 LED s1 and second light source 9 LED s2) is kept constant at 2d as before.
  • Incorporating this information from the new coordinate system into Eqs. (3) and (4) results in the following equations:
  • I1 = σP0/([4π(z − d)]² [(x − x1)² + z1²])  (9)
  • I2 = σP0/([4π(z + d)]² [(x − x1)² + z1²])  (10)
  • Solving for the amplitude ratio R using Eqs. (9) and (10) results in the following equation:

  • r=d(R+1)/(R−1)  (11)
  • which is identical to the previously derived Eq. (7).
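The reason Eq. (11) comes out identical to Eq. (7) can be made explicit: the common return-path factor (x − x1)² + z1² in Eqs. (9) and (10) cancels in the ratio. A brief LaTeX sketch of that step (with z playing the role of r, written as r in Eq. (11)):

```latex
R = \sqrt{\frac{I_1}{I_2}}
  = \sqrt{\frac{[4\pi(z+d)]^2\,[(x-x_1)^2+z_1^2]}
               {[4\pi(z-d)]^2\,[(x-x_1)^2+z_1^2]}}
  = \frac{z+d}{z-d}
\;\Longrightarrow\;
z = d\,\frac{R+1}{R-1} .
```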
  • It is interesting to note that the distance measured is always along this z axis between the target object 3 and the center of the two illuminating devices (first light source 5 LED s1 and second light source 9 LED s2). This ability to position the camera independently of the orientation of the light sources provides a considerable operational advantage that could be readily incorporated into different embodiments and arrangements of the present invention. For example, if the LEDs are installed either on the studio ceiling or wall, the hand-held camera does not have to bear any additional weight or attachment.
  • It should be noted, as a caveat, that the camera should not stray too far off the line connecting the two point sources, because shadows may be created in the mapped image. A countermeasure to assist in reducing shadows in the mapped image is described below (FIG. 5).
  • FIG. 5 illustrates the double illuminator sets for eliminating shadows. As previously described, if the camera device 7 is positioned too far away from the connecting line between the two point sources of light (first light source 5 LED s1 and second light source 9 LED s2), shadows may be incorporated into the distance map. A shadow is an undesirable image artifact and may corrupt the accuracy of the distance map. In order to minimize the effect of shadowing, FIG. 5 demonstrates an embodiment of the present invention wherein two sets of LEDs are used (illuminator set 1 13 and illuminator set 2 15) to illuminate the target object 3, in this case an overturned cup. As emphasized by the shape of the overturned cup target object 3, each of the illuminator sets 13, 15 casts its own specific shadow (see shadow of set 1 17 and shadow of set 2 19). By incorporating two pairs of illuminator sets 13, 15, the pair of shadows 17, 19 can be reduced and the corresponding distance map of the overturned cup target object 3 improved.
  • The final distance map for the overturned cup target object 3 is actually formed by merging the distance map developed by the first illuminator set 13 with the distance map developed by the second illuminator set 15. In the processor 11, which incorporates the frame grabber 21, the two derived distance maps are compared on a pixel by pixel basis and an appropriate pixel is selected by comparison. The comparison is possible because the relative position of the camera device 7 and the target object 3 has not changed between the two distance maps, so a simple merging step familiar to individuals skilled in the art is sufficient to combine the two distance maps into a final distance map. This final distance map generally minimizes the effect of shadows on the pixel positioning to provide a more exact result.
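A per-pixel merge of the two distance maps might look like the sketch below. The patent does not spell out the selection criterion, so the rule used here (keep the value from the illuminator set whose images were brighter at that pixel, on the assumption that a shadowed pixel receives little light and is unreliable) is an assumption for illustration only.

```python
import numpy as np

def merge_distance_maps(r_a, r_b, I1_a, I2_a, I1_b, I2_b):
    """Combine distance maps from two illuminator sets (illustrative sketch).

    r_a, r_b   : distance maps computed from illuminator set 1 and set 2.
    I1_*, I2_* : the raw image pairs from which each map was computed.
    Selection rule (assumed here, not specified in the patent): at each pixel,
    keep the distance from the set whose images were brighter there, since a
    pixel lying in that set's shadow is dim and its ratio is unreliable.
    """
    brightness_a = np.asarray(I1_a, dtype=float) + np.asarray(I2_a, dtype=float)
    brightness_b = np.asarray(I1_b, dtype=float) + np.asarray(I2_b, dtype=float)
    return np.where(brightness_a >= brightness_b, r_a, r_b)
```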
  • FIG. 6 illustrates a more detailed apparatus for the divergence ratio distance mapping camera. The more detailed apparatus is comprised of the camera device 7 connected to a frame grabber 21 (part of the processing unit 11), also connected to a video sync separator 23 which in turn is connected to a video microcontroller 25 that controls the front 27 and back 29 LED drivers that control the pair of illuminating devices i.e. the front light source 5 LED s1 and back light source 9 LED s2. In addition, the video microcontroller 25 may be connected to a monitor display 31 or some other medium to display the distance map that it calculates.
  • In the preferred embodiment of the present invention, the composite video signal out of an infra red camera device 7 was used to synchronize the timing of the front and back infra red illuminating devices 5, 9. The composite video signal is fed into a video sync separator 23 that extracts the vertical sync pulse and also provides the odd/even field information. This output from the sync is provided to the video microcontroller 25.
  • The video microcontroller 25 is operable to signal the front LED 5 to illuminate when the camera device 7 is in the even field and an image I1 is captured and stored in the frame grabber 21 (see FIG. 7 a). The video microcontroller 25 is operable to signal the back LED 9 to illuminate when the camera device 7 is in the odd field and an image I2 is captured and stored in the frame grabber 21 (see FIG. 7 b). The frame grabber 21 then applies the derived distance Eq. (7) to the two images I1 and I2 on a pixel by pixel basis and the distance map of the target object 3 can be displayed on a monitor display (31) (see FIG. 7 d).
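One way the even/odd-field gating could translate into image pairs: if a single interlaced frame is exposed with the front LED on during the even field and the back LED on during the odd field, splitting the frame by scan line recovers I1 and I2. This interpretation, and the helper below, are assumptions for illustration; they are not asserted to be the patent's exact implementation.

```python
import numpy as np

def split_fields(interlaced_frame):
    """Split an interlaced frame into even-field and odd-field images
    (illustrative sketch, assuming even lines were exposed under the front
    LED s1 and odd lines under the back LED s2, per the gating of FIG. 6)."""
    frame = np.asarray(interlaced_frame, dtype=float)
    I1 = frame[0::2, :]  # even scan lines -> front-illuminated image I1
    I2 = frame[1::2, :]  # odd scan lines  -> back-illuminated image I2
    return I1, I2
```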
  • In one embodiment, the depth of an image or the distance map can be displayed using a colour code with red being the shortest distance and purple being the longest distance. This same information can be displayed using black and white wherein dark represents the shortest distance and white represents the longest distance.
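A minimal display sketch, assuming matplotlib (the patent does not name a display library): the reversed 'rainbow' colormap roughly matches the red-for-near, purple-for-far convention, while 'gray' gives the dark-near/white-far rendering.

```python
import matplotlib.pyplot as plt

def show_depth(distance_map, colour=True):
    """Display a distance map with near pixels red (or dark) and far pixels
    purple (or white), approximating the colour coding described above."""
    cmap = "rainbow_r" if colour else "gray"
    plt.imshow(distance_map, cmap=cmap)
    plt.colorbar(label="distance")
    plt.show()
```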
  • FIG. 7 a illustrates an image taken with front illumination. This is an image of a face of a statue taken by an IR camera device 7 only using front illumination 5 and stored in the frame grabber 21.
  • FIG. 7 b illustrates an image taken with back illumination. This is an image of a face of a statue taken by an IR camera device 7 only using back illumination 9 and stored in the frame grabber 21.
  • FIG. 7 c illustrates a photograph of the object. This is a normal photograph of the face of the statue for comparison with the generated depth profile (see FIG. 7 d).
  • FIG. 7 d illustrates a measured depth profile of the object. This is the result of the frame grabber applying the distance Eq. (7) on the image taken in FIG. 7 a and the image taken in FIG. 7 b on a pixel by pixel basis. As previously explained, dark represents the shortest distance between the target object 3 and the midpoint between the front 5 and back 9 LED devices while white depicts longer distances between the target object 3 and the midpoint between the front 5 and back 9 LED devices.
  • It should be noted that there exist practical limits on the range of the camera of the present invention, since the measurement depends upon the divergence of light. This limit may be extended by unbalancing the intensities of the two illuminating light sources so as to avoid saturation of the CCD camera device 7 when the front LED 5 is too close to the target object 3. In particular, when the distance z to the target object 3 is large compared to the LED separation distance 2d, the light intensities IntA and IntB are more or less the same; but as the distance to the target object 3 becomes excessively short and the front light 5 intensity IntA becomes much larger than IntB, the difference between the light intensities no longer remains within the linear range of the CCD camera device 7. As mentioned, this limit may be extended either by reducing the exposure time of the CCD camera device 7 when capturing the image with the front LED or by reducing the output power of only the front LED 5 by a known factor N while keeping IntB unchanged.
  • An appropriate value for N may be found by monitoring the composite video signal of the CCD camera device 7.
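If the front LED output is deliberately reduced by a known factor N as described above, the measured ratio must be corrected before Eq. (7) is applied. The correction below follows directly from the intensity model and is offered as a sketch under that assumption, not as the patent's own procedure.

```python
import numpy as np

def depth_map_unbalanced(I1_measured, I2, d, N, eps=1e-6):
    """Eq. (7) applied when the front LED power was reduced by a known factor N.

    Because I1 was captured with the front source at power P0 / N, the true
    front-lit intensity is N * I1_measured, so the corrected amplitude ratio
    is R = sqrt(N * I1_measured / I2).  (Assumes N is known exactly.)
    """
    R = np.sqrt(N * np.asarray(I1_measured, dtype=float) /
                np.maximum(np.asarray(I2, dtype=float), eps))
    return d * (R + 1.0) / np.maximum(R - 1.0, eps)
```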
  • In a particular aspect of the invention, the distance mapping system is operable to provide the three-dimensional information and be incorporated, for example, into the automobile industry. The distance mapping apparatus could be used to quickly provide exact 3D pixel positional information for prototype vehicles. The distance mapping device provides real-time operational advantages; most other methods need time for setting up the sensors at specified locations even before making a measurement. The distance mapping apparatus can be operated hand-held and aimed at the target at any angle and location relative to the object. Additional embodiments of the invention may be further incorporated into aspects of the automobile industry.
  • In another particular aspect of the invention, the distance mapping system is linked to an on-board computer system of a vehicle and is operable to provide environmental 3D information to assist the on-board system of the vehicle in accident prevention. The distance mapping system can differentiate the echo from trees along the pavement from that of an oncoming moving car based on the shape of the objects. Generally, ordinary radar systems do not function in this manner. For example, when a car equipped with an ordinary radar system negotiates the curve of a road, the ordinary radar system may mistake trees along the side of the road for an oncoming car and the automatic braking system would be triggered. In other words, an ordinary radar system functions optimally when the equipped car is travelling along a straight road but not along a curved road.
  • In another aspect of the invention, the distance mapping system could be incorporated into a traffic surveillance system and used to assist in determining the make and model of a vehicle by calculating the distance map of only one profile. The detailed information of that one profile of the vehicle could be extrapolated to recreate a 3D representation of the vehicle, or it could be compared with stored library information of 3D representations of vehicles for greater accuracy and identification.
  • In another particular aspect of the invention, a distance mapping system is provided as previously described wherein the distance mapping system is operable to provide environmental three-dimensional information so as to assist an individual who is visually impaired. Due to the ability to freely position the camera device 7, the distance mapping system could be readily incorporated into an assistive cane or incorporated into the outer apparel of a visually impaired individual. The distance mapping system could then provide signals regarding the calculated environmental information to the individual based upon predefined criteria such as the size and the shape of an object. Ordinary echo based warning systems are not capable of discerning whether an object is a man, a tree, or a building. In addition, the distance mapping system could be readily incorporated into a humanoid robotic system to provide omni directional eye vision to more quickly identify its surroundings and avoid obstacles.
  • In yet another particular aspect of the invention, the distance mapping system is operable to provide environmental 3D information for a 3D virtual studio. Due to the ability to freely position the camera device 7, a 3D virtual studio could be readily set up wherein the live scenery is inserted either in the foreground or background of a computer generated graphic, but could be positioned anywhere within the frame as long as the computer generated graphic itself has the distance information in each pixel. In addition, the 3D virtual studio could function in real time and could greatly assist television broadcasts. All too often live reporters are disrupted by individuals walking into the video frame; these individuals could be removed in real time by distance discrimination. The real time editing need not be limited to the removal of individuals; once the 3D information has been obtained for a video frame, virtually anything may be edited into and out of the video feed.
  • In a still other aspect of the present invention, the distance mapping system is incorporated into the cosmetic industry to quickly provide 3D imaging of a patient without having to manufacture a moulding. More specifically, this 3D imaging could be used to assist a plastic surgeon, and subsequently the patient, in determining how certain features may appear after a procedure. In addition, the 3D imaging information could be used by an orthodontist who makes teeth mouldings; the provided information could greatly reduce the need for an uncomfortable moulding process. The current distance mapping system would allow a 3D image to be made without any contact with the patient and in a non-invasive manner.
  • In another aspect of the present invention, the distance mapping system may be readily incorporated into a security system and more specifically linked to a fingerprint capture system, wherein the distance mapping is accomplished in a touch-less, non-contact manner that provides a 3D map of a fingerprint without having to ink the individual's fingers or have them touch a panel for scanning of the palm. In another security implementation of the present invention, the distance mapping system may be readily incorporated into surveillance systems to provide profile information on an individual. If a front profile of an individual has been captured, the distance mapping system could be used to generate a side profile of the individual. Additionally, if the side profile of an individual has been captured, the front profile could be extrapolated based upon the 3D distance mapping information.
  • In another aspect of the present invention, a distance mapping system is provided wherein the distance mapping apparatus may substitute sound transducers for the illuminating light sources to achieve a sonar distance mapping camera for underwater objects such as a submarine or a school of fish. Whether light sources or sound transducers are used, the underlying ratio-based distance computation is the same; a minimal worked sketch of that computation follows this list.
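
A minimal numeric sketch of the ratio-to-distance relation, assuming ideal point sources whose intensity decays as 1/X^n (n = 2 by default), two sources of the same type placed along the camera's line of sight and separated by a known baseline d, and two registered images I1 and I2; the function name and solver below are illustrative assumptions rather than a prescribed implementation.

```python
# Illustrative only: per-pixel distance from the intensity ratio of two
# images, each lit by one of two identical sources placed on the camera's
# line of sight and separated by a known baseline d (metres).  With decay
# 1/x**n,  I1/I2 = ((X + d)/X)**n,  so  X = d / ((I1/I2)**(1/n) - 1).
import numpy as np

def distance_from_ratio(i1, i2, baseline_d, n=2.0, eps=1e-6):
    """Return per-pixel distance from the nearer source and from the
    midpoint of the two sources."""
    i1 = np.asarray(i1, dtype=np.float64)
    i2 = np.asarray(i2, dtype=np.float64)
    ratio = i1 / np.maximum(i2, eps)               # guard against dark pixels
    root = np.power(np.maximum(ratio, 1.0 + eps), 1.0 / n)
    x_near = baseline_d / (root - 1.0)             # distance X from the nearer source
    x_mid = x_near + baseline_d / 2.0              # distance from the source midpoint
    return x_near, x_mid

# Example: a pixel twice as bright under the nearer source, sources 0.5 m apart:
# sqrt(2) - 1 is about 0.414, so x_near is about 1.21 m and x_mid about 1.46 m.
x_near, x_mid = distance_from_ratio([[2.0]], [[1.0]], baseline_d=0.5)
```

Dividing I1 by I2 cancels the unknown surface reflectance and, when the light sources are of the same type, their common output power, which is what allows the distance to be recovered independently at each pixel.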

Claims (23)

1. A method of obtaining three-dimensional information for one or more target objects, the method comprising the steps of:
(a) selecting one or more target objects;
(b) illuminating the one or more target objects using a first light source at a distance X1 from the one or more target objects, and capturing an image I1 of the one or more target objects;
(c) illuminating the one or more target objects using a second light source at a distance X2 from the one or more target objects, and capturing an image I2 of the one or more target objects; and
(d) calculating the distance X between the first and second light sources, and the one or more target objects, based on the decay of intensities of light sources over distances X1 and X2, using the ratio of the image intensities between the images I1 and I2.
2. The method for obtaining three-dimensional information for one or more target objects as defined in claim 1, wherein the distance X between the two light sources and the one or more target objects is calculated based on the distance between the midpoint of the two light sources and the one or more target objects.
3. The method for obtaining three-dimensional information for one or more target objects as defined in claim 1, wherein:
(a) the first image I1 and the second image I2 are stored on a known storage medium;
(b) the distance between the one or more objects and the midpoint between the two light sources is calculated by analyzing images I1 and I2 on a pixel by pixel basis; and
(c) the calculated pixel distance information is stored using a known coordinate storage medium.
4. The method for obtaining three-dimensional information for one or more target objects as defined in claim 1, wherein:
(a) the decay of the light intensity over distance X is 1/X^n, where n can be either a positive or negative real number, including non-integer values.
5. The method for obtaining three-dimensional information for one or more target objects as defined in claim 1, comprising the further step of illuminating the one or more target objects with additional light sources to reduce the impact of shadows on the measurement of light intensity.
6. The method for obtaining three-dimensional information of one or more target objects as defined in claim 1, further comprising the steps of:
(a) illuminating the one or more target objects using a third light source at a distance X3 from the one or more target objects and capturing an image I3 of the one or more target objects;
(b) illuminating the one or more target objects using a fourth light source at a distance X4 from the one or more target objects and capturing an image I4 of the one or more target objects;
(c) calculating the set of distances X′2 between the third and fourth light sources and the one or more target objects based on the decay of intensities of light sources over distances X3 and X4 using the ratio of the image intensities between the images I3 and I4 on a pixel by pixel basis; and
(d) merging the set of distances X′1 developed between the first and second light sources with the second set of distances X′2, thereby minimizing the effect of light source shadowing.
7. A system for obtaining three-dimensional information for one or more target objects comprising:
(a) at least two light sources, including a first light source at a distance X1 from the one or more target objects, and a second light source at a distance X2 from the one or more target objects; and
(b) at least one camera device linked to, or incorporating, at least one computer device, the camera device, or the camera device and computer device together, being operable to:
(i) capture and store digital frame information, including capturing an image I1 of the one or more target objects, illuminated by the first light source, and an image I2 of the same one or more target objects, illuminated by the second light source; and
(ii) calculate the distance X between the at least two light sources and the one or more target objects based on the decay of the intensities of the light sources over distances X1 and X2, using the ratio of the image intensities between the images I1 and I2.
8. The system for obtaining three-dimensional information for one or more target objects as defined in claim 7, wherein the light sources are of the same type.
9. The system for obtaining three-dimensional information for one or more target objects as defined in claim 7, wherein to minimize the effect of light source shadowing the system further comprises:
(a) an additional set of light sources of the same type.
10. The system for obtaining three-dimensional information for one or more target objects as defined in claim 7, wherein the distance X between the first and second light sources and the one or more target objects is calculated based on the distance between the one or more target objects and the midpoint of the first and second light sources.
11. The system for obtaining three-dimensional information for one or more target objects as defined in claim 7, wherein the system is linked to a storage medium, and wherein the computer device is operable to:
(a) store the first image I1 and the second image I2 to the storage medium;
(b) calculate the distance between the one or more target objects and the two light sources by analyzing images I1 and I2 on a pixel by pixel basis; and
(c) store the calculated pixel distance information to the storage medium.
12. The system for obtaining three-dimensional information for one or more target objects as defined in claim 11, wherein the system is operable to generate a distance map for the one or more target objects based on the calculated pixel distance information.
13. The system for obtaining three-dimensional information for one or more target objects as defined in claim 7, wherein the camera device is, or the camera device and computer device together are, operable to:
(a) capture an image I3 of the one or more target objects using a third light source at a distance X3 from the one or more target objects;
(b) capture an image I4 of the one or more target objects using a fourth light source at a distance X4 from the one or more target objects;
(c) calculate a set of distances X′2 between the third and fourth light sources and the one or more target objects based on the decay of intensities of light sources over distances X3 and X4, using the ratio of the image intensities between the images I3 and I4 on a pixel by pixel basis; and
(d) merge the set of distances X′1 between the first and second light sources with the second set of distances X′2, thereby minimizing the effect of light source shadowing.
14. The system for obtaining three-dimensional information for one or more target objects as defined in claim 7, wherein the camera device is linked to at least one video microcontroller for controlling the light sources.
15. The system for obtaining three-dimensional information for one or more target objects as defined in claim 14, wherein the video microcontroller is linked to the light sources, and is operable to signal the first light source and the second light source to illuminate the one or more target objects in a sequential manner.
16. The system for obtaining three-dimensional information for one or more target objects as defined in claim 14, wherein the video microcontroller and the camera device are linked to enable the camera device to capture the images of the one or more target objects sequentially, while illuminated by the first and second light sources in sequence.
17. The system for obtaining three-dimensional information of one or more target objects as defined in claim 7, wherein the system is adapted to calculate distance by replacing the two light sources with two sound transducers.
18. The system for obtaining three-dimensional information of one or more target objects as defined in claim 7, wherein the system is integrated with an automobile distance mapping system or accident prevention system.
19. The system for obtaining three-dimensional information of one or more target objects as defined in claim 7, wherein the system is integrated with a robot to provide distance mapping information to the robot in relation to one or more target objects.
20. The system for obtaining three-dimensional information of one or more target objects as defined in claim 7, wherein the system is integrated with a traffic surveillance system and is operable to obtain three-dimensional information associated with automobiles, such three-dimensional information providing a basis for establishing make and model information for automobiles.
21. The system for obtaining three-dimensional information of one or more target objects as defined in claim 7, wherein the system is integrated with a distance mapping system that is operable to provide an individual with a visual impairment with distance information relative to one or more target objects in their physical environment.
22. The system for obtaining three-dimensional information of one or more target objects as defined in claim 7, wherein the system is integrated with a television studio system to provide three-dimensional information that enables editing of one or more target objects in a three-dimensional studio environment.
23. The system for obtaining three-dimensional information of one or more target objects as defined in claim 7, wherein the system is integrated with a biometric authentication system to enable bio-authentication of individuals based on touch-less capture of bio-authentication information such as fingerprints.
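
To illustrate the pixel-by-pixel processing and the shadow-minimizing merge recited in claims 3, 6, and 13, a hedged sketch follows; the merge rule (keeping, per pixel, the measurement from the source pair whose light actually reached the surface) and the assumption of a common baseline and decay exponent for both source pairs are illustrative choices, not requirements of the claims.

```python
# Illustrative only: pixel-by-pixel distance maps from two source pairs and
# a merge that suppresses shadowed pixels.  Assumes both pairs share the
# same baseline d and decay exponent n and that all four images are
# registered; the "keep the brighter pair" rule is an assumption, since the
# claims only require that the two sets of distances X'1 and X'2 be merged.
import numpy as np

def distance_map(i_near, i_far, baseline_d, n=2.0, eps=1e-6):
    """Per-pixel distance recovered from one source pair's intensity ratio."""
    i_near = np.asarray(i_near, dtype=np.float64)
    i_far = np.asarray(i_far, dtype=np.float64)
    ratio = np.maximum(i_near, eps) / np.maximum(i_far, eps)
    root = np.power(np.maximum(ratio, 1.0 + eps), 1.0 / n)
    return baseline_d / (root - 1.0)

def merged_distance_map(i1, i2, i3, i4, baseline_d, n=2.0):
    """Merge the map from pair (I1, I2) with the map from pair (I3, I4)."""
    x1 = distance_map(i1, i2, baseline_d, n)      # set of distances X'1
    x2 = distance_map(i3, i4, baseline_d, n)      # set of distances X'2
    i1, i2, i3, i4 = (np.asarray(a, dtype=np.float64) for a in (i1, i2, i3, i4))
    use_first_pair = (i1 + i2) >= (i3 + i4)       # brighter pair is less likely in shadow
    return np.where(use_first_pair, x1, x2)

# Example usage (I1..I4 are the four captured images as arrays):
#   merged = merged_distance_map(I1, I2, I3, I4, baseline_d=0.5)
#   np.save("distance_map.npy", merged)           # store the per-pixel distance information
```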
US11/690,503 2007-03-23 2007-03-23 Divergence ratio distance mapping camera Abandoned US20080231835A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US11/690,503 US20080231835A1 (en) 2007-03-23 2007-03-23 Divergence ratio distance mapping camera
PCT/CA2008/000502 WO2008116288A1 (en) 2007-03-23 2008-03-03 Divergence ratio distance mapping camera
US12/532,644 US8982191B2 (en) 2007-03-23 2008-03-03 Divergence ratio distance mapping camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US11/690,503 US20080231835A1 (en) 2007-03-23 2007-03-23 Divergence ratio distance mapping camera

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US12/532,644 Continuation-In-Part US8982191B2 (en) 2007-03-23 2008-03-03 Divergence ratio distance mapping camera

Publications (1)

Publication Number Publication Date
US20080231835A1 true US20080231835A1 (en) 2008-09-25

Family

ID=39774342

Family Applications (2)

Application Number Title Priority Date Filing Date
US11/690,503 Abandoned US20080231835A1 (en) 2007-03-23 2007-03-23 Divergence ratio distance mapping camera
US12/532,644 Expired - Fee Related US8982191B2 (en) 2007-03-23 2008-03-03 Divergence ratio distance mapping camera

Family Applications After (1)

Application Number Title Priority Date Filing Date
US12/532,644 Expired - Fee Related US8982191B2 (en) 2007-03-23 2008-03-03 Divergence ratio distance mapping camera

Country Status (2)

Country Link
US (2) US20080231835A1 (en)
WO (1) WO2008116288A1 (en)

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080278491A1 (en) * 2007-05-08 2008-11-13 Dreamworks Animation Llc System and method for rendering computer graphics utilizing a shadow illuminator
US20090033910A1 (en) * 2007-08-01 2009-02-05 Ford Global Technologies, Llc System and method for stereo photography
US20100051836A1 (en) * 2008-08-27 2010-03-04 Samsung Electronics Co., Ltd. Apparatus and method of obtaining depth image
US20100265346A1 (en) * 2007-12-13 2010-10-21 Keigo Iizuka Camera system and method for amalgamating images to create an omni-focused image
US20140240317A1 (en) * 2013-02-28 2014-08-28 Lg Electronics Inc. Distance detecting device capable of increasing power of output light and image processing apparatus including the same
US20140253688A1 (en) * 2013-03-11 2014-09-11 Texas Instruments Incorporated Time of Flight Sensor Binning
US20150131852A1 (en) * 2013-11-07 2015-05-14 John N. Sweetser Object position determination
US20150292884A1 (en) * 2013-04-01 2015-10-15 Panasonic Intellectual Property Management Co., Ltd. Motion-sensor device having multiple light sources
US20160109938A1 (en) * 2014-10-20 2016-04-21 Microsoft Corporation Silhouette-Based Limb Finder Determination
WO2017035498A1 (en) * 2015-08-26 2017-03-02 Olympus Corporation System and method for depth estimation using multiple illumination sources
US9598078B2 (en) 2015-05-27 2017-03-21 Dov Moran Alerting predicted accidents between driverless cars
US20190051004A1 (en) * 2017-08-13 2019-02-14 Shenzhen GOODIX Technology Co., Ltd. 3d sensing technology based on multiple structured illumination
US10281914B2 (en) 2015-05-27 2019-05-07 Dov Moran Alerting predicted accidents between driverless cars
US10291905B2 (en) * 2010-03-01 2019-05-14 Apple Inc. Non-uniform spatial resource allocation for depth mapping
WO2021116416A1 (en) * 2019-12-13 2021-06-17 Robert Bosch Gmbh Device and method for light-supported distance determination, control unit and working device

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120098971A1 (en) * 2010-10-22 2012-04-26 Flir Systems, Inc. Infrared binocular system with dual diopter adjustment
WO2013067513A1 (en) 2011-11-04 2013-05-10 Massachusetts Eye & Ear Infirmary Contextual image stabilization
US10363102B2 (en) 2011-12-30 2019-07-30 Mako Surgical Corp. Integrated surgery method
US9383753B1 (en) 2012-09-26 2016-07-05 Google Inc. Wide-view LIDAR with areas of special attention
US9192445B2 (en) 2012-12-13 2015-11-24 Mako Surgical Corp. Registration and navigation using a three-dimensional tracking sensor
US9644973B2 (en) * 2013-03-28 2017-05-09 Google Inc. Indoor location signalling via light fittings

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6057909A (en) * 1995-06-22 2000-05-02 3Dv Systems Ltd. Optical ranging camera
US7016519B1 (en) * 1998-10-15 2006-03-21 Nippon Hoso Kyokai Method and device for detecting three-dimensional information
US7058213B2 (en) * 1999-03-08 2006-06-06 Orametrix, Inc. Scanning system and calibration method for capturing precise three-dimensional information of objects
US7361472B2 (en) * 2001-02-23 2008-04-22 Invitrogen Corporation Methods for providing extended dynamic range in analyte assays
US7379584B2 (en) * 2000-04-28 2008-05-27 Orametrix, Inc. Methods for registration of three-dimensional frames to create three-dimensional virtual models of objects

Family Cites Families (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4425501A (en) * 1981-03-30 1984-01-10 Honeywell Inc. Light aperture for a lenslet-photodetector array
JPS58155377A (en) * 1982-03-10 1983-09-16 Akai Electric Co Ltd Position detector
US4720723A (en) * 1983-06-24 1988-01-19 Canon Kabushiki Kaisha Distance measuring device
JPS6060511A (en) * 1983-09-14 1985-04-08 Asahi Optical Co Ltd Distance measuring device
JPH0616147B2 (en) * 1986-03-26 1994-03-02 チノン株式会社 camera
US4843565A (en) * 1987-07-30 1989-06-27 American Electronics, Inc. Range determination method and apparatus
US5003166A (en) * 1989-11-07 1991-03-26 Massachusetts Institute Of Technology Multidimensional range mapping with pattern projection and cross correlation
US5706417A (en) * 1992-05-27 1998-01-06 Massachusetts Institute Of Technology Layered representation for image coding
US6822563B2 (en) * 1997-09-22 2004-11-23 Donnelly Corporation Vehicle imaging system with accessory control
US7359782B2 (en) * 1994-05-23 2008-04-15 Automotive Technologies International, Inc. Vehicular impact reactive system and method
US6215898B1 (en) * 1997-04-15 2001-04-10 Interval Research Corporation Data processing system and method
US6022124A (en) * 1997-08-19 2000-02-08 Ppt Vision, Inc. Machine-vision ring-reflector illumination system and method
US6266053B1 (en) * 1998-04-03 2001-07-24 Synapix, Inc. Time inheritance scene graph for representation of media content
KR100382439B1 (en) * 1998-05-25 2003-05-09 마쯔시다덴기산교 가부시키가이샤 Range finder and camera
US6549203B2 (en) * 1999-03-12 2003-04-15 Terminal Reality, Inc. Lighting and shadowing methods and arrangements for use in computer graphic simulations
US6362822B1 (en) * 1999-03-12 2002-03-26 Terminal Reality, Inc. Lighting and shadowing methods and arrangements for use in computer graphic simulations
EP1214609B1 (en) * 1999-09-08 2004-12-15 3DV Systems Ltd. 3d imaging system
US6700669B1 (en) * 2000-01-28 2004-03-02 Zheng J. Geng Method and system for three-dimensional imaging using light pattern having multiple sub-patterns
JP2001318303A (en) * 2000-05-08 2001-11-16 Olympus Optical Co Ltd Range-finder for camera
JP4040825B2 (en) * 2000-06-12 2008-01-30 富士フイルム株式会社 Image capturing apparatus and distance measuring method
US6618123B2 (en) * 2000-10-20 2003-09-09 Matsushita Electric Industrial Co., Ltd. Range-finder, three-dimensional measuring method and light source apparatus
US7271839B2 (en) * 2001-03-15 2007-09-18 Lg Electronics Inc. Display device of focal angle and focal distance in iris recognition system
US20030046177A1 (en) * 2001-09-05 2003-03-06 Graham Winchester 3-dimensional imaging service
US20030202120A1 (en) * 2002-04-05 2003-10-30 Mack Newton Eliot Virtual lighting system
US7221437B1 (en) * 2002-08-20 2007-05-22 Schaefer Philip R Method and apparatus for measuring distances using light
US7123351B1 (en) * 2002-08-20 2006-10-17 Schaefer Philip R Method and apparatus for measuring distances using light
US7301472B2 (en) * 2002-09-03 2007-11-27 Halliburton Energy Services, Inc. Big bore transceiver
US7167173B2 (en) * 2003-09-17 2007-01-23 International Business Machines Corporation Method and structure for image-based object editing
DE602004030375D1 (en) * 2003-12-24 2011-01-13 Redflex Traffic Systems Pty Ltd SYSTEM AND METHOD FOR DETERMINING VEHICLE SPEED
US20050267657A1 (en) * 2004-05-04 2005-12-01 Devdhar Prashant P Method for vehicle classification
CA2575704C (en) * 2004-07-30 2014-03-04 Extreme Reality Ltd. A system and method for 3d space-dimension based image processing
US7389041B2 (en) * 2005-02-01 2008-06-17 Eastman Kodak Company Determining scene distance in digital camera images

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6057909A (en) * 1995-06-22 2000-05-02 3Dv Systems Ltd. Optical ranging camera
US7016519B1 (en) * 1998-10-15 2006-03-21 Nippon Hoso Kyokai Method and device for detecting three-dimensional information
US7058213B2 (en) * 1999-03-08 2006-06-06 Orametrix, Inc. Scanning system and calibration method for capturing precise three-dimensional information of objects
US7379584B2 (en) * 2000-04-28 2008-05-27 Orametrix, Inc. Methods for registration of three-dimensional frames to create three-dimensional virtual models of objects
US7361472B2 (en) * 2001-02-23 2008-04-22 Invitrogen Corporation Methods for providing extended dynamic range in analyte assays

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080278491A1 (en) * 2007-05-08 2008-11-13 Dreamworks Animation Llc System and method for rendering computer graphics utilizing a shadow illuminator
US8189003B2 (en) * 2007-05-08 2012-05-29 Dreamworks Animation Llc System and method for rendering computer graphics utilizing a shadow illuminator
US20090033910A1 (en) * 2007-08-01 2009-02-05 Ford Global Technologies, Llc System and method for stereo photography
US8218135B2 (en) * 2007-08-01 2012-07-10 Ford Global Technologies, Llc System and method for stereo photography
US20100265346A1 (en) * 2007-12-13 2010-10-21 Keigo Iizuka Camera system and method for amalgamating images to create an omni-focused image
US8384803B2 (en) * 2007-12-13 2013-02-26 Keigo Iizuka Camera system and method for amalgamating images to create an omni-focused image
US20100051836A1 (en) * 2008-08-27 2010-03-04 Samsung Electronics Co., Ltd. Apparatus and method of obtaining depth image
JP2012501435A (en) * 2008-08-27 2012-01-19 サムスン エレクトロニクス カンパニー リミテッド Depth image acquisition apparatus and method
US8217327B2 (en) * 2008-08-27 2012-07-10 Samsung Electronics Co., Ltd. Apparatus and method of obtaining depth image
US10291905B2 (en) * 2010-03-01 2019-05-14 Apple Inc. Non-uniform spatial resource allocation for depth mapping
US20140240317A1 (en) * 2013-02-28 2014-08-28 Lg Electronics Inc. Distance detecting device capable of increasing power of output light and image processing apparatus including the same
US20140253688A1 (en) * 2013-03-11 2014-09-11 Texas Instruments Incorporated Time of Flight Sensor Binning
US9134114B2 (en) * 2013-03-11 2015-09-15 Texas Instruments Incorporated Time of flight sensor binning
US9784822B2 (en) 2013-03-11 2017-10-10 Texas Instruments Incorporated Time of flight sensor binning
US20150292884A1 (en) * 2013-04-01 2015-10-15 Panasonic Intellectual Property Management Co., Ltd. Motion-sensor device having multiple light sources
JPWO2014162675A1 (en) * 2013-04-01 2017-02-16 パナソニックIpマネジメント株式会社 Motion sensor device having a plurality of light sources
US10473461B2 (en) * 2013-04-01 2019-11-12 Panasonic Intellectual Property Management Co., Ltd. Motion-sensor device having multiple light sources
US20150131852A1 (en) * 2013-11-07 2015-05-14 John N. Sweetser Object position determination
CN105593786A (en) * 2013-11-07 2016-05-18 英特尔公司 Gaze-assisted touchscreen inputs
US9494415B2 (en) * 2013-11-07 2016-11-15 Intel Corporation Object position determination
US20160109938A1 (en) * 2014-10-20 2016-04-21 Microsoft Corporation Silhouette-Based Limb Finder Determination
US10921877B2 (en) * 2014-10-20 2021-02-16 Microsoft Technology Licensing, Llc Silhouette-based limb finder determination
US10281914B2 (en) 2015-05-27 2019-05-07 Dov Moran Alerting predicted accidents between driverless cars
US9598078B2 (en) 2015-05-27 2017-03-21 Dov Moran Alerting predicted accidents between driverless cars
US11755012B2 (en) 2015-05-27 2023-09-12 Dov Moran Alerting predicted accidents between driverless cars
WO2017035498A1 (en) * 2015-08-26 2017-03-02 Olympus Corporation System and method for depth estimation using multiple illumination sources
US10706572B2 (en) 2015-08-26 2020-07-07 Olympus Corporation System and method for depth estimation using multiple illumination sources
US20190051004A1 (en) * 2017-08-13 2019-02-14 Shenzhen GOODIX Technology Co., Ltd. 3d sensing technology based on multiple structured illumination
US10489925B2 (en) * 2017-08-13 2019-11-26 Shenzhen GOODIX Technology Co., Ltd. 3D sensing technology based on multiple structured illumination
WO2021116416A1 (en) * 2019-12-13 2021-06-17 Robert Bosch Gmbh Device and method for light-supported distance determination, control unit and working device

Also Published As

Publication number Publication date
US20100110165A1 (en) 2010-05-06
US8982191B2 (en) 2015-03-17
WO2008116288A1 (en) 2008-10-02
US20150009300A9 (en) 2015-01-08

Similar Documents

Publication Publication Date Title
US8982191B2 (en) Divergence ratio distance mapping camera
CN111448591B (en) System and method for locating a vehicle in poor lighting conditions
Schneider et al. Fusing vision and lidar-synchronization, correction and occlusion reasoning
JP4899424B2 (en) Object detection device
JP5748920B2 (en) How to display around the vehicle
US6734787B2 (en) Apparatus and method of recognizing vehicle travelling behind
US20040066500A1 (en) Occupancy detection and measurement system and method
CN101763640B (en) Online calibration processing method for vehicle-mounted multi-view camera viewing system
JP4893212B2 (en) Perimeter monitoring device
JP2004198212A (en) Apparatus for monitoring vicinity of mobile object
CN107678041B (en) System and method for detecting objects
EP2293588A1 (en) Method for using a stereovision camera arrangement
Lion et al. Smart speed bump detection and estimation with kinect
CN111086451B (en) Head-up display system, display method and automobile
JPH1144533A (en) Preceding vehicle detector
JPH11211738A (en) Speed measurement method of traveling body and speed measuring device using the method
KR20220102774A (en) monitering system with LiDAR for a body
JPH09259282A (en) Device and method for detecting moving obstacle
KR102458582B1 (en) System for distinguishing humans from animals using TOF sensor
JP2002354466A (en) Surrounding monitoring device for vehicle
JP2000231637A (en) Image monitor device
Agrawal et al. RWU3D: Real World ToF and Stereo Dataset with High Quality Ground Truth
Suresha et al. Distance Estimation in Video using Machine Learning
CN115440067A (en) Compound eye imaging system, vehicle using compound eye imaging system, and image processing method thereof
CN114913507A (en) Pedestrian detection method and device based on bus tail screen and bus tail screen

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION