US20060126054A1 - Three-dimensional measurement device and three-dimensional measurement method

Three-dimensional measurement device and three-dimensional measurement method

Info

Publication number
US20060126054A1
Authority
US
United States
Prior art keywords
light
frame
distance
exposure
data
Prior art date
Legal status
Abandoned
Application number
US11/335,508
Inventor
Koichi Kamon
Toshio Norita
Hiroshi Uchino
Current Assignee
Minolta Co Ltd
Original Assignee
Minolta Co Ltd
Priority date
Filing date
Publication date
Application filed by Minolta Co Ltd
Priority to US11/335,508
Publication of US20060126054A1
Status: Abandoned


Classifications

    • G01S 7/497: Means for monitoring or calibrating
    • G01B 11/25: Measuring contours or curvatures by projecting a pattern, e.g. one or more lines or moiré fringes, on the object
    • G01S 17/14: Systems determining position data of a target, for measuring distance only, using transmission of interrupted, pulse-modulated waves, wherein a voltage or current pulse is initiated and terminated in accordance with the pulse transmission and echo reception respectively, e.g. using counters
    • G01S 17/42: Simultaneous measurement of distance and other co-ordinates
    • G01S 17/89: Lidar systems specially adapted for mapping or imaging
    • G01S 7/4863: Detector arrays, e.g. charge-transfer gates
    • G01S 7/4865: Time delay measurement, e.g. time-of-flight measurement, time of arrival measurement or determining the exact position of a peak
    • G06T 7/521: Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light

Definitions

  • In the fourth modification described below, the influence of environmental light can be reduced even when measuring long distances.
  • FIG. 15 shows a summary of a fourth modification of the control.
  • Intermittent emission is performed in frames (n)˜(n+2), and intermittent exposure is performed with the same timing as the emission or with a timing delayed from it.
  • In frame (n+3), emission is stopped and intermittent exposure is performed to obtain environmental light data.
  • In frame (n+4), intermittent emission and continuous exposure are performed to obtain reflectivity data.
  • In frame (n+5), emission is stopped and continuous exposure is performed to obtain environmental light data for the continuous exposure.
  • FIG. 16 is a block diagram showing a third modification of the image processing circuit.
  • The image data DG (distance data) of frames (n+2), (n+1), and (n) are stored in frame memories 411, 412, and 413.
  • When the environmental light data of frame (n+3) are output from the AD converter 401, data are simultaneously read from the frame memories 411˜413, and the environmental light component is eliminated by the subtraction devices 431 and 432.
  • The distance data of frames (n+2), (n+1), and (n) from which the environmental light component has been eliminated are recorded in frame memories 414, 415, and 416.
  • The reflectivity data of frame (n+4) are stored in frame memory 411, and when the environmental light data of frame (n+5) are output from the AD converter 401, the environmental light component is eliminated from the reflectivity data of frame (n+4) by the subtraction device 431.
  • Data are read from the frame memories 414-416 simultaneously with the output of data from the subtraction device 431, and the distance data DL of frames (n+2), (n+1), and (n) from which the environmental light component has been eliminated are obtained via calculation by the dividing devices 431, 432, and 433.
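  • A short sketch of the per-pixel arithmetic described above (the array names are invented; only the subtraction and division steps named in the description are modelled):

        import numpy as np

        def distance_data_fourth_modification(frames, ambient_int, refl, ambient_cont, eps=1e-12):
            # frames      : image data of frames (n)..(n+2), the shifted intermittent exposures
            # ambient_int : frame (n+3), intermittent exposure without emission
            # refl        : frame (n+4), reflectivity data from continuous exposure
            # ambient_cont: frame (n+5), continuous exposure without emission
            denom = np.maximum(refl - ambient_cont, eps)         # reflectivity with the ambient component removed
            return [(f - ambient_int) / denom for f in frames]   # distance data DL with the ambient component removed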
  • FIGS. 17(a) and 17(b) show modifications of the optical system. Structural elements of the example in these drawings are designated by symbols identical to those of FIGS. 1(a) and 1(b).
  • In part (a), light emitted from the light source 11 is condensed to the principal point of the receiving lens 21 by the collector lens 13 and polarization mirror 17, so as to be directed to the object Q.
  • The light reflected by the object Q passes through the receiving lens 21, the 1/4 wavelength plate 16, and the polarization mirror 17, and impinges a specific pixel on the solid state area sensor 22.
  • The polarization mirror 17 reflects the perpendicularly polarized light and transmits the parallel polarized light.
  • The 1/4 wavelength plate 16 inclines the plane of polarization by 45 degrees.
  • In part (b), the light emitted from the light source 11 is condensed at the principal point of the receiving lens 21 by the collector lens 14 and semi-transparent mirror 18, and directed to the object Q.
  • The light reflected from the object Q passes through the receiving lens 21 and semi-transparent mirror 18, and impinges a specific pixel on the solid state area sensor 22.
  • Laser light is most suitable as the signal medium in distance measurement using the TOF method.
  • Because the measurement of light propagation time is an event performed at very high speed, it is difficult to ensure high accuracy at near-field distances. Measurement of consistent precision, regardless of whether the distance is near-field or not, can be realized by using both the TOF method and the triangulation method.
  • FIG. 18 shows the structure of a three-dimensional measurement device of the second embodiment.
  • The three-dimensional measurement device 2 has, in addition to the structural elements of the three-dimensional measurement device 1 shown in FIG. 1, a density gradient filter 19 for triangulation, a filter controller 38, and a mode switch 39.
  • The function of the density gradient filter 19 is variable, and includes projecting a first light and a second light having different luminous intensity distributions, as well as a substantially pass-through state (uniform luminous intensity distribution).
  • The mode switch 39 is a user interface for specifying the selection of the far-field mode and near-field mode by a user. The mode is not limited to manual selection and may be switched automatically by detecting the distance using a simple rangefinding sensor.
  • The system controller 37 issues specific instructions to the filter controller 38 in accordance with a mode selection signal Sm output from the mode switch 39.
  • In the far-field mode, the density gradient filter 19 is controlled so as to uniformly illuminate the object Q, and distance measurement is performed using the TOF method in the same manner as in the three-dimensional measurement device 1.
  • In the near-field mode, the operation is as described below.
  • FIGS. 19(a) and 19(b) illustrate the measurement principle of the near-field mode.
  • The characteristics of the density gradient filter 19 change the amount of light along one direction (the Y direction) in a plane perpendicular to the optical axis.
  • In frame (n), light is projected through the density gradient filter 19, and the continuous exposure light is sensed. Then, in frame (n+1), the density gradient filter 19 is rotated 180 degrees about the optical axis, and the exposure light is sensed again. In this way, the projection light intensity ratio of frame (n) to frame (n+1) differs for every angle measured relative to the optical axis, as shown in part (b).
  • The distance H between each pixel of the solid state area sensor 22 and the light source, and the incidence angle θ of the light received by each pixel, are known, and the relationship of correspondence between the projection light intensity ratio and the ray angles θa, θb, θc . . . is also known. Accordingly, the angle of the ray illuminating the point q on the object corresponding to a specific pixel (angle θb in the example) can be determined from the intensity ratio of the incident light on that specific pixel in frames (n) and (n+1).
  • The distance Lq to the point q can then be calculated by triangulation.
  • The division calculation for obtaining the light intensity ratio is performed by the image processing circuit 34, and the distance calculation for each pixel is performed by the system controller 37.
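  • A rough sketch of the near-field calculation; the nearest-neighbour lookup and the law-of-sines form, with both angles measured from the baseline H, are assumptions made for illustration and are not spelled out in the text:

        import math

        def ray_angle_from_ratio(ratio, calibration):
            # calibration: known projection light intensity ratios mapped to ray angles (theta_a, theta_b, ...)
            nearest = min(calibration, key=lambda r: abs(r - ratio))
            return calibration[nearest]   # a real device would interpolate the calibration table

        def triangulate_distance(h_baseline, ray_angle, incidence_angle):
            # law of sines in the triangle formed by the light source, the point q, and the receiving pixel
            return h_baseline * math.sin(ray_angle) / math.sin(ray_angle + incidence_angle)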
  • The obtained distance data are recorded and displayed.
  • A solid state area sensor 22 capable of sensing color may be used, so as to perform three-dimensional measurement and two-dimensional color image input, both of which may be recorded and displayed. Although the operation has been described in terms of obtaining a single two-dimensional image, the measurement may be repeated so as to have three-dimensional measurement of a two-dimensional object.
  • The solid state area sensor may also be used to perform a logarithmic compression function.

Abstract

The purpose of the present invention is to realize a compact three-dimensional measurement device of high resolution. In the present invention, a pulse light is projected on an object, the light reflected from the object is received by a solid state area sensor having a plurality of photoelectric conversion elements, the area sensor is controlled with a timing synchronized with the projection of the pulse light, and the distance to each photoelectric conversion element is measured based on the output of the solid state area sensor.

Description

    RELATED APPLICATION
  • This application is based on Patent Application No. 2000-155768 filed in Japan, the entire content of which is hereby incorporated by reference.
  • BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • The present invention relates to a three-dimensional measurement method and device for obtaining position information of an object by projecting light and receiving the light reflected from the object.
  • 2. Description of the Related Art
  • Three-dimensional measurement can be accomplished using the time of flight (TOF) from the moment of light pulse transmission to the reception of the returning light pulse reflected by an object since this TOF is dependent on distance.
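  • For reference, the standard relation behind the TOF approach is that a measured round-trip time t corresponds to a one-way distance d = c·t/2, where c is the speed of light; an object 15 m away, for example, returns the pulse after roughly 100 ns.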
  • Japanese Laid-Open Patent Application No. H11-508371 discloses a device using a solid state area sensor as a photoreceptor device for modulating light entering the solid state area sensor by a photoelectric modulator. The distance is reflected in the amount of exposure light of the solid state area sensor by photoreception modulation synchronized with the projection light. The distance information to the object can be obtained regardless of the reflectivity of the object by determining the ratio of the amount of exposure light with modulation and the amount of exposure light without modulation. A measurement of distance to multiple points (so-called three-dimensional measurement or three-dimensional input) can be accomplished at higher speed using a solid state area sensor than with a construction which deflects the optical path by a scanning mechanism.
  • Japanese Laid-Open Patent Application No. H10-332827 discloses a device for repeatedly projecting pulse light at uniform intervals, standardizing the amount of light of the reflected and returning pulse light which enters a solid state area sensor, and measuring the amount of light exposure in a specific period. The amount of exposure light is proportional to the frequency (number of pulses) of the reflected pulse light, such that the amount of exposure light is slight when the time of flight is long and the distance is far. Three-dimensional input independent of the reflectivity of the object is possible by standardizing the amount of light of the reflected pulse light.
  • It is difficult to make a device compact since a light modulation device must be included in a construction for controlling the exposure timing of the solid state area sensor by light modulation as described above. In a construction for standardizing the amount of light of the reflected pulse light, disadvantages arise inasmuch as the allowed range of reflectivity of the object and the measurable distance range are limited by the performance of the optical system used for such standardization, such that resolution is determined by the period of the projection.
  • In the aforesaid conventional constructions, high-speed measurement at short distances is difficult because each uses a method of detection of the length of the time-of-flight based on the amount of exposure light.
  • SUMMARY OF THE INVENTION
  • An object of the present invention is to eliminate the previously described disadvantages. Another object of the present invention is to realize high-resolution three-dimensional measurement in a compact form factor. Another object of the present invention is to realize three-dimensional measurement capable of input of a desired precision in the three-dimensional measurement. Still another object of the present invention is to increase the distance range in three-dimensional measurement.
  • These and other objects are attained by a three-dimensional measurement method for measuring a distance to a plurality of positions on an object by projecting light and receiving light reflected from the object, said three-dimensional measurement method comprising the steps of: projecting a pulse light on an object; receiving light reflected from the object by an area sensor comprising a plurality of photoelectric conversion elements; controlling the active/inactive timing of the area sensor such that the photoelectric conversion elements are exposed to light with a timing synchronized with the pulse light projection; and measuring the distance to each photoelectric conversion element based on the output of the area sensor.
  • These objects of the present invention are further attained by a three-dimensional measurement device for measuring the distance to a plurality of positions on an object by projecting light and receiving the light reflected from the object, said three-dimensional measurement device comprising: a projector for projecting pulse light on an object; an area sensor comprising a plurality of photoelectric conversion elements for receiving light reflected from the object; a controller for controlling the ON/OFF states of the photoelectric elements with a timing synchronized with the pulse light projection; and a processor for eliminating the fluctuating component of the received light intensity due to distance or reflectivity of the object from the amount of exposure obtained based on the ON/OFF control.
  • These and other objects are attained by a three-dimensional measurement method for measuring a distance to a plurality of positions on an object by projecting light and receiving light reflected from the object, said three-dimensional measurement method comprising the steps of: sequentially projecting light of a first luminance distribution and light of a second luminance distribution on an object; receiving light reflected by the object in each projection cycle by a solid state area sensor comprising a plurality of photoelectric elements; and measuring the distance to each photoelectric element based on the output of the solid state area sensor in a first projection and the output of the solid state area sensor in a second projection.
  • The invention itself, together with further objects and attendant advantages, will be best understood by reference to the following detailed description taken in conjunction with the accompanying drawings.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1(a) and FIG. 1(b) show the structure of a three-dimensional measurement device of a first embodiment;
  • FIG. 2 illustrates the measurement principle;
  • FIG. 3 shows the basic structure of the image processing circuit;
  • FIG. 4(a) and FIG. 4(b) illustrate the operation of the CCD sensor;
  • FIG. 5 is a signal waveform diagram showing the control of the projection and reception light when using a CCD sensor;
  • FIG. 6(a) and FIG. 6(b) illustrate the operation of a MOS-type sensor;
  • FIG. 7 is a signal waveform diagram showing the control of the projection and reception light when using a MOS-type sensor;
  • FIG. 8 is a signal waveform diagram showing a first modification of the control;
  • FIG. 9 is a block diagram showing a first modification of the image processing circuit;
  • FIG. 10(a) and FIG. 10(b) show a second modification of the control;
  • FIG. 11 is a block diagram showing a second modification of the image processing circuit;
  • FIG. 12 is a flow chart of the mode discrimination related to three-dimensional data calculation;
  • FIG. 13 is a waveform diagram showing the relationship among the three mode types and light receiving time;
  • FIG. 14(a) and FIG. 14(b) show a third modification of the control;
  • FIG. 15 shows the summary of a fourth modification of the control;
  • FIG. 16 is a block diagram showing a third modification of the image processing circuit;
  • FIG. 17(a) and FIG. 17(b) show modifications of the optical system;
  • FIG. 18 shows the structure of a three-dimensional measurement device of a second embodiment; and
  • FIG. 19(a) and FIG. 19(b) illustrate the measurement principle of the short distance mode;
  • In the following description, like parts are designated by like reference numbers throughout the several drawings.
  • DESCRIPTION OF THE PREFERRED EMBODIMENTS First Embodiment
  • Device Structure
  • FIGS. 1(a) and 1(b) show the structure of a three-dimensional measurement device of the first embodiment. FIG. 1(a) shows the entire structure, and FIG. 1(b) shows the structure of the image sensing surface.
  • A three-dimensional measurement device 1 is provided with a light source 11, projection lens 12, light receiving lens 21, and solid state area sensor 22. The light source 11 receives power from a light generating circuit 32, and emits laser light. An object Q is illuminated by the laser light passing through the projection lens 12. The light reflected from the object Q passes through the light receiving lens 21, and impinges the solid state area sensor 22. The solid state area sensor 22 has pixels which block the light from outside the device, and part of the laser light from the light source 11 passes through an internal optical path 15 comprised of optical fiber without passing outside and directly impinges the pixel as “standard light.” Hereinafter, the light passing through the light receiving lens 21 and impinging the solid state area sensor 22 is referred to as “measurement light,” the pixel impinged by this measurement light is referred to as the “measurement pixel,” and the pixel impinged by the standard light is referred to as the “standard pixel.”
  • The solid state area sensor 22 operates in accordance with a clock of the timing controller 33, and outputs image signals SG representing the amount of exposure on each pixel of the unit photoreception area to an image processing circuit 34. The image processing circuit 34 performs specific calculations, and distance data DL obtained from these calculations are transmitted to a storage memory 35 and a display 36 used for a monitor display. Controls relating to the light projection and reception and signal processing in the three-dimensional measurement device 1 are managed by the system controller 31.
  • Measurement Method
  • FIG. 2 illustrates the measurement principle.
  • The light source intermittently emits light in accordance with light-emission control signals which repeatedly and alternatingly switch ON/OFF the light emission in regular periods. The exposure light of the solid state area sensor is an intermittent exposure light synchronized with the light emission timing in frame (n). Although the light emission timing and the exposure timing match completely in the drawing, the timing also may be shifted insofar as they are synchronized, and the length of the period (pulse width) may be slightly different for the emission light and the exposure light.
  • In the present invention, “sensor exposure light” refers to the light exposure when the sensor is in the active state. “Sensor exposure light” does not include light exposure when the sensor is in the inactive state. Accordingly, “exposure control” in the present invention refers to control of the sensor active/inactive timing.
  • At time t1, part of the emitted light (standard light) is propagated through the internal optical path and impinges the standard pixel. The incidence of the standard light starts from time t2 after a time Dref has elapsed which matches an emission delay time (offset time) from time t1 and the time for propagation within the internal optical path. The incidence of the measurement light from an object starts from time t3 after a time D1 has elapsed which matches the emission delay time from time t1 and the time for propagation in the external optical path. Since the exposure of the solid state area sensor also stops when the emission light stops at time t4, the amount of exposure of the measurement pixel in a single exposure is a value corresponding to the reflectivity of the object and the distance to the object. The chief cause of fluctuation in the amount of exposure dependent on the distance, in addition to the difference in exposure time due to the incidence delay, is the attenuation of the light intensity (i.e., intensity decreases with increasing distance). If the reflectivity of the object is known beforehand, it is possible to determine the distance based on the amount of exposure. The measurable distance is determined by the length of a single light emission period. Error can be reduced by determining the distance based on the total amount of exposure (accumulated electric charge) of one frame when light is projected and received a plurality of times within one frame period.
  • On the other hand, it is difficult in practice to know beforehand the reflectivity of each part of an object. In frame (n+1), there is continuous exposure light on the solid state area sensor. In this way, the amount of exposure in frame (n+1) is mainly an amount corresponding to the reflectivity of the object (reflectivity data). Accordingly, distance data from which the object reflectivity component has been excluded can be obtained by performing the following calculation for each pixel of the solid state area sensor: Distance data = (distance data) ÷ (reflectivity data) = [image data of frame (n)] ÷ [image data of frame (n+1)].
  • This calculation is performed by the image processing circuit 34 having the structure shown in FIG. 3.
  • The image signal SG transmitted from the solid state area sensor 22 is quantified by an AD converter 401, and output as image data DG.
  • The image data DG (distance data) of frame (n) are temporarily stored in a frame memory 410.
  • When the image data DG (reflectivity data) in frame (n+1) are output from the AD converter 401, the image data DG of frame (n) are simultaneously output from the frame memory 410, and a dividing device 420 performs the aforesaid calculation.
  • The effect of the emission delay time is excluded by the following correction calculation based on the distance data of the measurement pixel and the distance data of the standard pixel, and the distance from each measurement pixel to the object can be measured with greater accuracy.
    Corrected distance data = (distance data of the measurement pixel) - (distance data of the standard pixel)
  • An image processing circuit 34 may be provided to perform this calculation function, or the calculation may be performed by the system controller 31.
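  • As a minimal sketch of the two calculations above (the division that removes reflectivity and the standard-pixel correction), one can assume idealized rectangular emission and exposure windows of width T, so that the gated exposure is proportional to the overlap between the window and the returning pulse; the function names, the pulse width value, and this linear model are illustrative assumptions and are not taken from the disclosure.

        import numpy as np

        C = 299_792_458.0   # speed of light in m/s
        T = 40e-9           # assumed emission pulse width in seconds (illustrative only)

        def delay_from_frames(frame_n, frame_n1):
            # frame_n : per-pixel amount of exposure with intermittent, pulse-synchronized exposure (distance data)
            # frame_n1: per-pixel amount of exposure with continuous exposure (reflectivity data)
            ratio = frame_n / np.maximum(frame_n1, 1e-12)   # dividing device 420: reflectivity cancels
            ratio = np.clip(ratio, 0.0, 1.0)
            return T * (1.0 - ratio)                        # idealized model: window overlap = T - delay

        def corrected_distance(delay_map, std_pixel=(0, 0)):
            # subtract the standard pixel's delay (emission offset plus internal optical path),
            # then convert the remaining round-trip time into a distance in metres
            d_ref = delay_map[std_pixel]
            return C * (delay_map - d_ref) / 2.0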
  • Example of the Solid State Area Sensor
  • Either a CCD sensor, or MOS-type sensor may be used as the solid state area sensor 22.
  • FIGS. 4(a) and 4(b) illustrate the operation of a CCD sensor. FIG. 4(a) schematically shows the structure, and FIG. 4(b) shows the control timing. The state of each time t0, t1, t2, t3, t4 is described below.
    • t0: The electric charge photoelectrically converted by a photodiode (PD) starts accumulating.
    • t1: Gate SH1 switches ON, and the accumulated electric charge moves to gate SH1.
    • t2: Gate SH1 switches OFF, and the PD again starts accumulating converted electric charge.
    • t3: Gate OD switches ON, and the electric charge accumulated during period t2˜t3 is discharged from the PD to the substrate.
    • t4: Gate OD switches OFF, and the electric charge again starts accumulating.
      The photoelectrically converted electric charge gradually accumulates at gate SH1 in period Ton by repeating the aforesaid operations.
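  • As an informal illustration of this gating (the function name and time base are invented; the actual gates are driven by the timing controller 33), the steering of photo-generated charge between gate SH1 and the drain over one frame can be modelled as:

        def ccd_frame_charge(photo_charge, window_open):
            # photo_charge: charge generated by the PD in each clock step of one frame
            # window_open : True while the exposure window is open (charge is moved to gate SH1),
            #               False while the charge is discharged to the substrate instead
            sh1 = 0.0
            for q, collect in zip(photo_charge, window_open):
                if collect:
                    sh1 += q   # t1-t2 style interval: charge accumulates toward the signal output
                # otherwise the charge is discarded, as in the t3-t4 interval
            return sh1         # total accumulated over period Ton, read out at the end of the frame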
  • FIG. 5 is a signal waveform diagram showing the control of the projection and reception light when using a CCD sensor.
  • The operation of FIG. 4 is repeated to perform intermittent exposure in frame (n), and continuous exposure is performed in frame (n+1). The electric charge accumulated at gate SH1 in frame (n) is transferred to reverse gate φT1 when gate SH2 switches ON at the end of the frame, and is output to the image processing circuit 34 in parallel with the exposure operation of frame (n+1).
  • FIGS. 6(a), 6(b), and 6(c) illustrate the operation of a MOS-type sensor. FIG. 6(a) summarizes the structure, FIG. 6(b) shows the movement of the electric charge, and FIG. 6(c) shows the control timing. The state at each of times t0, t1, t2, t3, t4 is described below.
    • t0: Electric charge starts accumulating in the incidental capacitor C1 of the PD.
    • t1: Gate ST switches ON, and the electric charge is transferred from incidental capacitor C1 to a sufficiently large capacity condenser C2.
    • t2: Gate ST switches OFF, and again electric charge starts accumulating in incidental capacitor C1.
    • t3: Gate RS switches ON, and the electric charge accumulated during period t2˜t3 is discharged from the incidental capacitor C1 to the power line Vcc.
    • t4: Gate RS switches OFF, and electric charge again starts accumulating.
  • The photoelectrically converted electric charge gradually accumulates in period Ton in the condenser C2 by repeating the aforesaid operation. Since the incidental capacitor C1 is small compared to the condenser C2, the potential of the incidental capacitor C1 rises greatly by the electric charge accumulation. In this way, when gate ST switches ON, the electric charge is transferred from the incidental capacitor C1 to the condenser C2.
  • FIG. 7 is a signal waveform diagram showing the control of the projection and reception light when a MOS-type sensor is used.
  • In FIG. 7, frame (n) in PD11˜PD14 shown in FIG. 6(a) corresponds to t0˜t6, and frame (n+1) corresponds to t8˜t10. Frame (n) in PD21˜PD24 corresponds to t5˜t7, and frame (n+1) corresponds to t9˜t11. Frame (n) in PD31˜PD34 corresponds to t6˜t8, and frame (n+1) corresponds to t10˜t12.
  • In frame (n), intermittent exposure is performed by repeating the operation of FIG. 6, and in frame (n+1) continuous exposure is performed. At the end of frame (n), the gates SL11˜SL14 are switched ON, such that signals corresponding to the electric charges of PD11˜PD14 stored in the condensers C2 are stored in a line memory. The signals of PD11˜PD14 stored in the line memory are output to the image processing circuit 34 by sequentially switching ON the switches SV1˜SV3.
  • Next, the signals of PD21˜PD24 are similarly output. Finally, when the signals of PD31˜PD34 are similarly stored in the line memory, the continuous exposure of frame (n+1) starts with PD11˜PD14, and the signals of PD31˜PD34 are output in parallel with the continuous exposure.
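  • A rough sketch of this row-sequential readout (the list names are invented; the gate and switch labels follow the description above):

        def read_out_mos_frame(rows):
            # rows: condenser C2 charge values grouped per PD row,
            #       e.g. [[PD11..PD14], [PD21..PD24], [PD31..PD34]]
            output = []
            for row in rows:
                line_memory = list(row)    # gates SL: latch one row into the line memory
                for value in line_memory:  # switches SV: shift the latched row out column by column
                    output.append(value)
                # the next row, and the next frame's exposure, can proceed in parallel with this output
            return output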
  • Counteracting Environmental Light
  • Since environmental light is not dependent on the distance to the object, the impingement of environmental light on the solid state area sensor is a source of measurement error. As a countermeasure for environmental light, an optical filter is used to selectively transmit light of the wavelength range emitted by the light source. However, environmental light cannot be entirely eliminated by the optical filter. In the embodiments described below, measurement error due to environmental light is prevented.
  • FIG. 8 is a signal waveform diagram showing a first modification of the control.
  • In frame (n), intermittent light emission is performed, and intermittent exposure is performed with a timing identical to the emission timing.
  • The environmental light component in the exposure light of frame (n) is detected by intermittent exposure without light emission in frame (n+1).
  • Intermittent emission and continuous exposure are performed in frame (n+2) to obtain reflectivity data.
  • Then, the environmental light component in the exposure light of frame (n+2) is detected by continuous exposure without emission in frame (n+3).
  • FIG. 9 is a block diagram showing a first modification of the image processing circuit.
  • In the image processing circuit 34 b, the image signal SG transferred from the solid state area sensor 22 is quantified by the AD converter 401 and output as image data DG, similar to the case shown in FIG. 3.
  • The image data DG (distance data) of frame (n) are temporarily stored in frame memory 411.
  • When the image data (intermittent exposure and environmental light data) of frame (n+1) are output from the AD converter 401, the image data DG of frame (n) are output simultaneously with the image data DG of frame (n+1) from the frame memory 411. Distance data from which the environmental light component has been eliminated are obtained by calculation by a subtraction device 430. The output of the subtraction device 430 is temporarily stored in the frame memory 412. Furthermore, the image data DG (reflectivity data) of frame (n+2) are recorded in the frame memory 411 in parallel with the readout.
  • When the image data DG (continuous-exposure environmental light data) of frame (n+3) are output from the AD converter 401, the reflectivity data are read from the frame memory 411, and reflectivity data from which the environmental light component has been excluded are output from the subtraction device 430. Then, distance data DL from which the environmental light component has been excluded are obtained by the calculation of the dividing device 420.
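  • A compact sketch of the per-pixel arithmetic this circuit performs, assuming the four frames of the first modification are available as arrays (the names are invented):

        import numpy as np

        def distance_data_first_modification(f_n, f_n1, f_n2, f_n3, eps=1e-12):
            # f_n : intermittent emission, intermittent exposure (distance component plus ambient)
            # f_n1: no emission, intermittent exposure           (ambient reference for f_n)
            # f_n2: intermittent emission, continuous exposure   (reflectivity plus ambient)
            # f_n3: no emission, continuous exposure             (ambient reference for f_n2)
            numerator = f_n - f_n1                   # subtraction device 430, first pass
            denominator = f_n2 - f_n3                # subtraction device 430, second pass
            return numerator / np.maximum(denominator, eps)   # dividing device 420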
  • Enlargement of the Measurable Distance Range
  • In the method of controlling the exposure timing and measuring distance using the amount of reflected exposure light, basically long distances wherein the time of flight is longer than the emission time (emission pulse width) cannot be measured. Long distances can be measured in the embodiments described below.
  • FIGS. 10(a) and 10(b) show a second modification of the control.
  • In frame (n), intermittent emission is performed, and intermittent exposure is performed with the same timing as the emission. In frame (n+1), intermittent emission is performed and intermittent exposure is performed with a timing delayed by the exposure time relative to frame (n). In frame (n+2), intermittent emission is performed, and intermittent exposure is performed with a timing further delayed by the exposure time relative to frame (n+1). Then, in frame (n+3), intermittent emission and continuous exposure are performed to obtain reflectivity data.
  • For example, when the impingement of the measurement light starts from time t3 in the exposure period (t12˜t14) of frame (n+1) shown in the drawing, distance data corresponding to the propagation time D11 and D11′ representing the distance to the object are obtained for both frame (n+1) and frame (n+2). These distance data are distance data from which the reflectivity component has been excluded by division of the reflectivity data of frame (n+3). The distance data obtained in frame (n+1) and the distance data obtained in frame (n+2) are averaged to determine the distance data of each pixel.
  • Since the light returning from the object within the emission period from one emission to the next emission is always received by setting the exposure timing to be mutually shifted from frame to frame, the measurable distance, when viewed from the time of flight, is increased from the emission period to the emission OFF time of the emission cycle. The time D11 from the transmission pulse rise to the reception pulse rise (hereinafter referred to as “rise time difference”) and the time D11′ from the transmission pulse fall to the reception pulse fall (hereinafter referred to as “fall time difference”) are measured, and their average is designated the measurement value, so as to produce a highly accurate measurement.
  • FIG. 11 is a block diagram showing a second modification of the image processing circuit.
  • In the image processing circuit 34 c, the image signal SG output from the solid state area sensor 22 is converted to image data DG by the AD converter 401 and sequentially recorded, frame by frame, in the frame memories 411, 412, 413. When the image data DG of frame (n+3) are output from the AD converter 401, the image data DG of frame (n+2) are output from the frame memory 411, the image data DG of frame (n+1) are output from the frame memory 412, and the image data DG of frame (n) are output from the frame memory 413. The image data DG output from each frame memory 411, 412, 413 are divided, pixel by pixel, by the image data DG of frame (n+3) output from the AD converter 401 in the dividing devices 421, 422, 423, and the distance data DL are calculated for each of frames (n)˜(n+2). Three-dimensional data are calculated by the system controller 31 based on the distance data DL of these three frames.
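  • A minimal sketch (not part of the original disclosure) of the per-pixel division performed by the dividing devices, assuming the three stored frames and the reflectivity frame are available as arrays; the names are illustrative.

```python
import numpy as np

def divide_by_reflectivity(stored_frames, reflectivity_frame):
    """Divide each of the shifted-exposure frames (n)..(n+2), held in the
    frame memories, pixel by pixel by the reflectivity data of frame (n+3)
    to obtain the per-frame distance data DL."""
    reflectivity = reflectivity_frame.astype(np.float64)
    dl_frames = []
    for frame in stored_frames:               # frames (n), (n+1), (n+2)
        with np.errstate(divide="ignore", invalid="ignore"):
            dl = np.where(reflectivity > 0, frame / reflectivity, 0.0)
        dl_frames.append(dl)
    return dl_frames
```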
  • When performing the three-dimensional data calculations, data of two frames representing the propagation times D11 and D11′ are selected from the distance data DL of the three frames. A description of this portion of the process follows below.
  • FIG. 12 is a flow chart of the mode discrimination for calculating three-dimensional data, and FIG. 13 is a waveform diagram showing the relationship between the three modes and light receiving times. In FIG. 12, “frame” is represented by the symbol “F.”
  • The system controller 31 reads the distance data of frames (n)˜(n+2), obtained by sensing frames (n)˜(n+3), from the frame memory 35 (#101). Then, the magnitude relationship of the distance data values in frames (n)˜(n+2) is determined for each pixel, and calculations are performed for mode 1, 2, or 3 in accordance with the determined relationship (#102˜#112).
  • As shown in FIG. 13, for the standard pixel, the distance data of frame (n) represents the rise time difference Dref, and the distance data of frame (n+1) represents the fall time difference Dref′, since the propagation time in the internal optical path is sufficiently short compared to the emission pulse width. Accordingly, the distance data of frame (n) and frame (n+1) are normally used in the calculation for the standard pixel. The determination of modes 1, 2, and 3 may also be performed for the standard pixel just as for the measurement pixel, and the distance data selected based on the result.
  • In mode 1, the distance data of frame (n) represents the rise time difference D101, and the distance data of frame (n+1) represents the fall time difference D101′. Accordingly, the distance data of frames (n) and (n+1) are used in this calculation. The corrected distance data representing D101-Dref and D101′-Dref′ are obtained by subtracting the standard pixel data of the same frame from each frame. The average of the corrected distance data is designated the distance data of the measurement pixel determined in mode 1.
  • Similarly, in mode 2 the distance data of frames (n+1) and (n+2) are used, and calculation of corrected distance data representing D102-Dref and D102′-Dref′ as well as the average value calculation are performed. In mode 3 the distance data of frames (n) and (n+2) are used, and calculation of corrected distance data representing D103-Dref and D103′-Dref′ as well as the average value calculation are performed.
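  • The branch conditions that map the magnitude relationship of the three frames to modes 1˜3 are given in FIG. 12 and are not reproduced here; the sketch below (not part of the original disclosure) only shows, for a mode already determined, which two frames are used, the subtraction of the standard pixel data Dref and Dref′, and the averaging; the names are hypothetical.

```python
def corrected_distance_data(mode, d, dref_rise, dref_fall):
    """Per-pixel calculation for a measurement pixel once its mode is known.

    mode      -- 1, 2 or 3, as determined from the magnitude relationship
    d         -- distance data of the measurement pixel for frames
                 (n), (n+1), (n+2), indexed 0, 1, 2
    dref_rise -- standard pixel data Dref (rise time difference)
    dref_fall -- standard pixel data Dref' (fall time difference)"""
    frame_pairs = {1: (0, 1),   # mode 1: frames (n) and (n+1)
                   2: (1, 2),   # mode 2: frames (n+1) and (n+2)
                   3: (0, 2)}   # mode 3: frames (n) and (n+2)
    rise_idx, fall_idx = frame_pairs[mode]
    corrected_rise = d[rise_idx] - dref_rise      # e.g. D101 - Dref
    corrected_fall = d[fall_idx] - dref_fall      # e.g. D101' - Dref'
    return 0.5 * (corrected_rise + corrected_fall)   # averaged distance data
```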
  • When measuring long distances as described above, resolution can be increased to achieve highly accurate measurement by shifting the exposure timing between frames so that the exposure period of each frame is shorter than the emission time, as shown in FIGS. 14(a) and 14(b), and by performing a plurality of exposures within the emission period (pulse light projection period) while treating the plurality of frames as if they had been sensed simultaneously. In the example shown in FIGS. 14(a) and 14(b), the distance data of frame (n), representing the rise time difference D11, and the distance data of frame (n+3), representing the fall time difference D11′, are used to determine the distance value.
  • The influence of environmental light can be reduced even when measuring long distances.
  • FIG. 15 shows a summary of a fourth modification of the control.
  • Intermittent emission is performed in frames (n)˜(n+2), and intermittent exposure is performed with the same timing as the emission or with a timing delayed relative to it. In frame (n+3), emission is stopped and intermittent exposure is performed to obtain environmental light data. In frame (n+4), intermittent emission and continuous exposure are performed to obtain reflectivity data. In frame (n+5), emission is stopped and continuous exposure is performed.
  • FIG. 16 is a block diagram showing a third modification of the image processing circuit.
  • In the image processing circuit 34 d, the image data DG (distance data) of frames (n+2), (n+1), and (n) are stored in the frame memories 411, 412, 413. When the environmental light data of frame (n+3) are output from the AD converter 401, data are simultaneously read from the frame memories 411˜413, and the environmental light component is eliminated by the subtraction devices 431 and 432. The distance data of frames (n+2), (n+1), and (n), from which the environmental light component has been eliminated, are recorded in the frame memories 414, 415, 416.
  • Then, the reflectivity data of frame (n+4) are stored in the frame memory 411, and when the environmental light data of frame (n+5) are output from the AD converter 401, the environmental light component is eliminated from the reflectivity data of frame (n+4) by the subtraction device 431. Data are read from the frame memories 414˜416 simultaneously with the output of data from the subtraction device 431, and the distance data DL of frames (n+2), (n+1), and (n), from which the environmental light component has been eliminated, are obtained via calculation by the dividing devices 421, 422, 423.
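  • A minimal sketch (not part of the original disclosure) of the frame arithmetic in this fourth modification, assuming all six frames are available as arrays; the function and argument names are illustrative.

```python
import numpy as np

def fourth_modification_dl(dist_frames, ambient_intermittent,
                           reflectivity_frame, ambient_continuous):
    """dist_frames          -- frames (n)..(n+2): intermittent emission/exposure
    ambient_intermittent -- frame (n+3): intermittent exposure, emission off
    reflectivity_frame   -- frame (n+4): intermittent emission, continuous exposure
    ambient_continuous   -- frame (n+5): continuous exposure, emission off"""
    # Environmental light removed from the reflectivity data.
    reflectivity = reflectivity_frame.astype(np.float64) - ambient_continuous
    dl_frames = []
    for frame in dist_frames:
        # Environmental light removed from each distance frame using the
        # intermittent-exposure ambient frame.
        numerator = frame.astype(np.float64) - ambient_intermittent
        # Normalize by the ambient-free reflectivity data to obtain DL.
        with np.errstate(divide="ignore", invalid="ignore"):
            dl_frames.append(np.where(reflectivity > 0,
                                      numerator / reflectivity, 0.0))
    return dl_frames
```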
  • Other Examples of Optical Systems
  • FIGS. 17(a) and 17(b) show modifications of the optical system. Structural elements of the example in these drawings are designated by symbols identical to those of FIGS. 1(a) and 1(b).
  • In the structure of part (a), light emitted from the light source 11 is condensed at the principal point of the receiving lens 21 by the collector lens 13 and the polarization mirror 17, so as to be directed to the object Q. The light reflected by the object Q passes through the receiving lens 21, the ¼ wavelength plate 16, and the polarization mirror 17, and impinges on a specific pixel of the solid state area sensor 22. The polarization mirror 17 reflects perpendicularly polarized light and transmits parallel polarized light. The ¼ wavelength plate 16 inclines the plane of polarization by 45 degrees.
  • In the structure of part (b), the light emitted from the light source 11 is condensed at the principal point of the receiving lens 21 by the collector lens 14 and the semi-transparent mirror 18, and directed to the object Q. The light reflected from the object Q passes through the receiving lens 21 and the semi-transparent mirror 18, and impinges on a specific pixel of the solid state area sensor 22.
  • Second Embodiment
  • Laser light is most suitable as the signal medium in distance measurement using the TOF method. However, since the measurement of light propagation time must be performed at high speed, it is difficult to ensure high accuracy at near-field distances. Measurement of constant precision can be realized regardless of whether the distance is near-field or far-field by using both the TOF method and the triangulation method.
  • Device Construction
  • FIG. 18 shows the structure of a three-dimensional measurement device of the second embodiment.
  • The three-dimensional measurement device 2 has, in addition to the structural elements of the three-dimensional measurement device 1 shown in FIG. 1, a density gradient filter 19 for triangulation, a filter controller 38, and a mode switch 39.
  • The function of the density gradient filter 19 is variable and includes projecting a first light and a second light having different luminous intensity distributions, as well as a substantially pass-through state (uniform luminous intensity distribution). The mode switch 39 is a user interface that allows a user to select between the far-field mode and the near-field mode. The mode is not limited to manual selection and may be switched automatically by detecting the distance using a simple rangefinding sensor. The system controller 37 issues specific instructions to the filter controller 38 in accordance with the mode selection signal Sm output from the mode switch 39.
  • When the far-field mode is specified, the density gradient filter 19 is controlled so as to uniformly illuminate the object Q, and distance measurement is performed using the TOF method in the same manner as the three-dimensional measurement device 1. When the near-field mode is specified, the operation is as described below.
  • Near-field Mode Operation
  • FIGS. 19(a) and 19(b) illustrate the measurement principle of the near-field mode.
  • As shown in part (a), the density gradient filter 19 has characteristics that change the amount of transmitted light along one direction (the Y direction) in a plane perpendicular to the optical axis.
  • In frame (n), light is projected through the density gradient filter 19, and continuous exposure is performed. Then, in frame (n+1), the density gradient filter 19 is rotated 180° about the optical axis, and the exposure is performed again. In this way, as shown in part (b), the projection light intensity ratio of frame (n) and frame (n+1) differs for every angle measured with the optical axis as the reference.
  • The distance H between each pixel of the solid state area sensor 22 and the light source, and the incidence angle α of the light received by each pixel, are known, and the relationship of correspondence between the projection light intensity ratio and the ray angles θa, θb, θc . . . is also known. Accordingly, the angle of the ray illuminating point q on the object (angle θb in the example) corresponding to a specific pixel can be determined from the intensity ratio of the incident light of that pixel in frames (n) and (n+1). The distance Lq to the point q can then be calculated by triangulation. The division calculation for obtaining the light intensity ratio is performed by the image processing circuit 34, and the distance calculation for each pixel is performed by the system controller 37. The obtained distance data are recorded and displayed.
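  • The following sketch (not part of the original disclosure) illustrates the near-field calculation for one pixel under an assumed planar triangulation geometry; the lookup from intensity ratio to ray angle and all names are assumptions made for illustration.

```python
import math

def near_field_distance(i_frame_n, i_frame_n1, ratio_to_theta, baseline_h, alpha):
    """i_frame_n, i_frame_n1 -- received light intensities of the pixel in
                                frames (n) and (n+1)
    ratio_to_theta -- known correspondence between the projection light
                      intensity ratio and the ray angle theta
    baseline_h     -- known distance H between light source and receiver
    alpha          -- known incidence angle of the ray at this pixel"""
    ratio = i_frame_n / i_frame_n1          # projection light intensity ratio
    theta = ratio_to_theta(ratio)           # ray angle illuminating point q
    # Intersect the projection ray (angle theta from the source) with the
    # reception ray (angle alpha through the lens) across the baseline H.
    z = baseline_h / (math.tan(theta) + math.tan(alpha))
    return z / math.cos(alpha)              # distance Lq to point q
```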
  • In the first and second embodiments, a solid state area sensor 22 capable of color sensing may be used, so that three-dimensional measurement and two-dimensional color image input can both be performed, recorded, and displayed. Although the operation has been described in terms of obtaining a single two-dimensional image, the measurement may be repeated so as to perform three-dimensional measurement of a two-dimensional object. The solid state area sensor may also be provided with a logarithmic compression function.
  • Although the present invention has been fully described by way of examples with reference to the accompanying drawings, it is to be noted that various changes and modifications will be apparent to those skilled in the art. Therefore, unless otherwise such changes and modifications depart from the scope of the present invention, they should be construed as being included therein.

Claims (2)

1. A three-dimensional measurement method for measuring a distance to a plurality of positions on an object by projecting light and receiving light reflected from the object, said three-dimensional measurement method comprising the steps of:
projecting a pulse light on an object;
receiving light reflected from the object by an area sensor comprising a plurality of photoelectric conversion elements;
controlling the active/inactive timing of the area sensor such that the photoelectric conversion elements are exposed to light with a timing synchronized with the pulse light projection; and
measuring the distance for each photoelectric conversion element based on the output of the area sensor.
2-19. (canceled)
US11/335,508 2000-05-26 2006-01-20 Three-dimensional measurement device and three-dimensional measurement method Abandoned US20060126054A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/335,508 US20060126054A1 (en) 2000-05-26 2006-01-20 Three-dimensional measurement device and three-dimensional measurement method

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2000-155768 2000-05-26
JP2000155768A JP2001337166A (en) 2000-05-26 2000-05-26 Method and device for three-dimensional input
US09/855,506 US7009690B2 (en) 2000-05-26 2001-05-16 Three-dimensional measurement device and three-dimensional measurement method
US11/335,508 US20060126054A1 (en) 2000-05-26 2006-01-20 Three-dimensional measurement device and three-dimensional measurement method

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US09/855,506 Division US7009690B2 (en) 2000-05-26 2001-05-16 Three-dimensional measurement device and three-dimensional measurement method

Publications (1)

Publication Number Publication Date
US20060126054A1 true US20060126054A1 (en) 2006-06-15

Family

ID=18660653

Family Applications (2)

Application Number Title Priority Date Filing Date
US09/855,506 Expired - Lifetime US7009690B2 (en) 2000-05-26 2001-05-16 Three-dimensional measurement device and three-dimensional measurement method
US11/335,508 Abandoned US20060126054A1 (en) 2000-05-26 2006-01-20 Three-dimensional measurement device and three-dimensional measurement method

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US09/855,506 Expired - Lifetime US7009690B2 (en) 2000-05-26 2001-05-16 Three-dimensional measurement device and three-dimensional measurement method

Country Status (2)

Country Link
US (2) US7009690B2 (en)
JP (1) JP2001337166A (en)

Families Citing this family (54)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6906302B2 (en) * 2002-07-30 2005-06-14 Freescale Semiconductor, Inc. Photodetector circuit device and method thereof
CN100440545C (en) 2004-03-17 2008-12-03 松下电工株式会社 Light detecting element and control method of light detecting element
DE602005009432D1 (en) 2004-06-17 2008-10-16 Cadent Ltd Method and apparatus for color forming a three-dimensional structure
DE102004042466A1 (en) * 2004-09-02 2006-03-23 Robert Bosch Gmbh Apparatus and method for optical distance measurement
WO2006073120A1 (en) 2005-01-05 2006-07-13 Matsushita Electric Works, Ltd. Photo-detector, space information detection device using the photo-detector, and photo-detection method
JP4200328B2 (en) * 2005-04-18 2008-12-24 パナソニック電工株式会社 Spatial information detection system
KR100785594B1 (en) * 2005-06-17 2007-12-13 오므론 가부시키가이샤 Image process apparatus
US7592615B2 (en) * 2005-10-11 2009-09-22 Hong Kong Applied Science And Technology Research Institute Co., Ltd. Optical receiver with a modulated photo-detector
US8018579B1 (en) 2005-10-21 2011-09-13 Apple Inc. Three-dimensional imaging and display system
US7483151B2 (en) * 2006-03-17 2009-01-27 Alpineon D.O.O. Active 3D triangulation-based imaging method and device
JP4944635B2 (en) 2007-02-15 2012-06-06 本田技研工業株式会社 Environment recognition device
FI20075269A0 (en) * 2007-04-19 2007-04-19 Pulse Finland Oy Method and arrangement for antenna matching
US7925075B2 (en) * 2007-05-07 2011-04-12 General Electric Company Inspection system and methods with autocompensation for edge break gauging orientation
KR101344490B1 (en) * 2007-11-06 2013-12-24 삼성전자주식회사 Image generating method and apparatus
US7554652B1 (en) 2008-02-29 2009-06-30 Institut National D'optique Light-integrating rangefinding device and method
DE102008018718B4 (en) * 2008-04-14 2010-02-25 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Optical distance meter and method for optical distance measurement
KR101467509B1 (en) * 2008-07-25 2014-12-01 삼성전자주식회사 Image sensor and operating method for image sensor
JP4981780B2 (en) * 2008-10-20 2012-07-25 本田技研工業株式会社 Ranging system and ranging method
JP4837757B2 (en) * 2009-03-16 2011-12-14 シャープ株式会社 Optical distance measuring sensor and electronic device
JP5401645B2 (en) * 2009-07-07 2014-01-29 学校法人立命館 Human interface device
US9645681B2 (en) 2009-09-23 2017-05-09 Pixart Imaging Inc. Optical touch display system
TWI443308B (en) 2009-12-03 2014-07-01 Pixart Imaging Inc Distance-measuring device, 3d image sensing device, and optical touch system
US8638425B2 (en) 2009-12-03 2014-01-28 Pixart Imaging Inc. Distance-measuring device with increased signal-to-noise ratio and method thereof
JP6072417B2 (en) * 2012-02-10 2017-02-01 日本電気株式会社 Imaging system and imaging method
US10324033B2 (en) * 2012-07-20 2019-06-18 Samsung Electronics Co., Ltd. Image processing apparatus and method for correcting an error in depth
JP6127558B2 (en) * 2013-02-13 2017-05-17 オムロン株式会社 Imaging device
JP2014169921A (en) * 2013-03-04 2014-09-18 Denso Corp Distance sensor
WO2014207983A1 (en) * 2013-06-27 2014-12-31 パナソニックIpマネジメント株式会社 Distance measuring device
DE102013108824A1 (en) * 2013-08-14 2015-02-19 Huf Hülsbeck & Fürst Gmbh & Co. Kg Sensor arrangement for detecting operating gestures on vehicles
CN105452807A (en) * 2013-08-23 2016-03-30 松下知识产权经营株式会社 Distance measurement system and signal generation device
EP3112901A4 (en) * 2014-02-28 2017-03-01 Panasonic Intellectual Property Management Co., Ltd. Distance measuring apparatus and distance measuring method
US9675430B2 (en) 2014-08-15 2017-06-13 Align Technology, Inc. Confocal imaging apparatus with curved focal surface
DE102014013099B4 (en) * 2014-09-03 2019-11-14 Basler Aktiengesellschaft Method and device for simplified acquisition of a depth image
CN107430193B (en) * 2014-12-22 2020-08-07 特林布尔有限公司 Distance measuring instrument
KR102367123B1 (en) * 2015-01-26 2022-02-25 주식회사 히타치엘지 데이터 스토리지 코리아 Controlling method in distance measuring device using TOF
WO2016208214A1 (en) * 2015-06-24 2016-12-29 株式会社村田製作所 Distance sensor
WO2017022152A1 (en) * 2015-07-31 2017-02-09 パナソニックIpマネジメント株式会社 Range imaging device and solid-state imaging device
US10942261B2 (en) 2015-10-21 2021-03-09 Samsung Electronics Co., Ltd Apparatus for and method of range sensor based on direct time-of-flight and triangulation
WO2017085916A1 (en) * 2015-11-16 2017-05-26 パナソニックIpマネジメント株式会社 Imaging device and solid-state imaging element used in same
WO2017141957A1 (en) 2016-02-17 2017-08-24 パナソニックIpマネジメント株式会社 Distance measuring device
JP2017195573A (en) * 2016-04-22 2017-10-26 ソニー株式会社 Imaging apparatus and electronic apparatus
US10527728B2 (en) 2017-01-27 2020-01-07 Samsung Electronics Co., Ltd Apparatus and method for range measurement
WO2018175344A1 (en) * 2017-03-21 2018-09-27 Magic Leap, Inc. Depth sensing techniques for virtual, augmented, and mixed reality systems
WO2018193609A1 (en) * 2017-04-21 2018-10-25 パナソニックIpマネジメント株式会社 Distance measurement device and moving body
DE102017115385B4 (en) 2017-07-10 2022-08-11 Basler Ag Apparatus and method for acquiring a three-dimensional depth image
JP6913598B2 (en) * 2017-10-17 2021-08-04 スタンレー電気株式会社 Distance measuring device
WO2019164959A1 (en) 2018-02-20 2019-08-29 The Charles Stark Draper Laboratory, Inc. Time-resolved contrast imaging for lidar
CN110310963A (en) * 2018-03-27 2019-10-08 恒景科技股份有限公司 The system for adjusting light source power
JP7358771B2 (en) * 2019-04-26 2023-10-11 凸版印刷株式会社 3D imaging unit, camera, and 3D image generation method
CN110133675B (en) * 2019-06-10 2021-07-23 炬佑智能科技(苏州)有限公司 Data processing method and device for light emitting distance measurement, electronic equipment and light processing circuit
US11589819B2 (en) * 2019-06-20 2023-02-28 Cilag Gmbh International Offset illumination of a scene using multiple emitters in a laser mapping imaging system
JP7401211B2 (en) * 2019-06-25 2023-12-19 ファナック株式会社 Distance measuring device with external light illuminance measurement function and method for measuring external light illuminance
JP7257275B2 (en) * 2019-07-05 2023-04-13 株式会社日立エルジーデータストレージ 3D distance measuring device
US11443447B2 (en) 2020-04-17 2022-09-13 Samsung Electronics Co., Ltd. Three-dimensional camera system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3570160B2 (en) 1997-05-30 2004-09-29 富士ゼロックス株式会社 Distance measuring method and device

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4678323A (en) * 1984-07-20 1987-07-07 Canon Kabushiki Kaisha Distance measuring devices and light integrators therefor
US4797561A (en) * 1985-08-31 1989-01-10 Kyocera Corporation Reading apparatus with improved performance
US4947202A (en) * 1988-07-07 1990-08-07 Ricoh Company, Ltd. Distance measuring apparatus of a camera
US4902886A (en) * 1989-01-23 1990-02-20 Hewlett-Packard Company Noise reduction for photodiode arrays
US5699151A (en) * 1994-06-28 1997-12-16 Mitsubishi Denki Kabushiki Kaisha Distance measurement device
US6091905A (en) * 1995-06-22 2000-07-18 3Dv Systems, Ltd Telecentric 3D camera and method
US5990948A (en) * 1996-02-29 1999-11-23 Kabushiki Kaisha Toshiba Noise cancelling circuit for pixel signals and an image pickup device using the noise cancelling circuit
US5831258A (en) * 1996-08-20 1998-11-03 Xerox Corporation Pixel circuit with integrated amplifer
US5930383A (en) * 1996-09-24 1999-07-27 Netzer; Yishay Depth sensing camera systems and methods
US6252655B1 (en) * 1997-07-07 2001-06-26 Nikon Corporation Distance measuring apparatus
US20020030753A1 (en) * 1997-11-05 2002-03-14 Alan H. Kramer Circuit for detecting leaky access switches in cmos imager pixels
US6833871B1 (en) * 1998-02-26 2004-12-21 Foveon, Inc. Exposure control in electronic cameras by detecting overflow from active pixels
US6252215B1 (en) * 1998-04-28 2001-06-26 Xerox Corporation Hybrid sensor pixel architecture with gate line and drive line synchronization
US6587183B1 (en) * 1998-05-25 2003-07-01 Matsushita Electric Industrial Co., Ltd. Range finder and camera
US6812964B1 (en) * 1999-04-13 2004-11-02 Pentax Corporation Three-dimensional image capturing device

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130271763A1 (en) * 2011-04-18 2013-10-17 Tao Li Inspection device and method thereof
US9188543B2 (en) * 2011-04-18 2015-11-17 General Electric Company Inspection device and method thereof
US9264693B2 (en) 2011-12-26 2016-02-16 Semiconductor Energy Laboratory Co., Ltd. Motion recognition device
US9921312B2 (en) 2012-12-20 2018-03-20 Panasonic Intellectual Property Management Co., Ltd. Three-dimensional measuring device and three-dimensional measuring method
US10866321B2 (en) 2016-09-01 2020-12-15 Sony Semiconductor Solutions Corporation Imaging device

Also Published As

Publication number Publication date
JP2001337166A (en) 2001-12-07
US7009690B2 (en) 2006-03-07
US20010046317A1 (en) 2001-11-29

Similar Documents

Publication Publication Date Title
US7009690B2 (en) Three-dimensional measurement device and three-dimensional measurement method
US10764518B2 (en) Pixel structure
JP4488170B2 (en) Method and apparatus for recording a three-dimensional distance image
KR20190055238A (en) System and method for determining distance to an object
US6741082B2 (en) Distance information obtaining apparatus and distance information obtaining method
KR20200096828A (en) Systems and methods for determining distance to objects
US6388754B1 (en) Shape measuring system and method
JP3574607B2 (en) 3D image input device
US6522393B2 (en) Distance measuring device
CN113970757A (en) Depth imaging method and depth imaging system
JP4391643B2 (en) 3D image input device
JP4105801B2 (en) 3D image input device
US6721007B1 (en) Three-dimensional image capturing device
US6437853B2 (en) Three-dimensional image capturing device
JP3767201B2 (en) Optical sensor
US6812964B1 (en) Three-dimensional image capturing device
JPH0346507A (en) Distance measuring instrument
JP5215547B2 (en) Spatial information detection device
JP4369575B2 (en) 3D image detection device
JP4930742B2 (en) Position detection device
US20220099836A1 (en) Method for measuring depth using a time-of-flight depth sensor
JP2000083260A (en) Three-dimensional image input device
JP2001153624A (en) Three-dimensional image input device
US20220326359A1 (en) Measurement and compensation for phase errors in time-of-flight-cameras
JPH07168090A (en) Range finder of camera

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION