WO1992014118A1 - An optical sensor - Google Patents

An optical sensor

Info

Publication number
WO1992014118A1
Authority
WO
WIPO (PCT)
Prior art keywords
detector
light source
focus
image
pattern
Application number
PCT/GB1992/000221
Other languages
French (fr)
Inventor
Colin George Morgan
Original Assignee
Oxford Sensor Technology Limited
Application filed by Oxford Sensor Technology Limited filed Critical Oxford Sensor Technology Limited
Priority to JP4504040A (JP2973332B2)
Priority to US08/104,084 (US5381236A)
Priority to DE69207176T (DE69207176T2)
Priority to EP92904077A (EP0571431B1)
Publication of WO1992014118A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/02 Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness
    • G01B11/026 Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness by measuring distance between sensor and object

Definitions

  • Another common microscopy technique that can be adapted for use with the sensor described above is fluorescence. This involves staining the sample with a fluorescent dye so that when illuminated with light of a particular wavelength, e.g. UV light, it will emit light of some other wavelength, say yellow. In this way it is possible to see which part of a sample has taken up the dye. If a light source of the appropriate wavelength is used and the detector arranged to respond only to the resulting fluorescence by the use of an appropriate filter, a 3D fluorescence model of the sample can be built up.
  • In such microscopy applications, the lens system 2 acts as the microscope objective lens.
  • Figure 4 shows a "side-by-side" arrangement in which separate lens systems 2A and 2B are used to focus a primary image of the grid 7 on the object 3 and to focus the secondary image thereof on the CCD detector 1.
  • The system is again arranged so that when the primary image is in focus on the object 3, the secondary image on the detector is also in focus.
  • This is achieved by making the lens systems 2A and 2B identical and positioning them side by side or one above the other (depending whether a horizontal or vertical pattern is used) so they are equidistant from the projector grid 7 and detector 1, respectively. Movement of the two lens systems 2A and 2B would also be matched during the sweeping action.
  • The two optical systems in this case are effectively combined by overlap on the object 3 of the area illuminated by the projector grid 7 and the area sensed by the detector 1.
  • Figure 5 shows an arrangement in which separate optical systems are combined using a mirror (or prism) 15 and a beam-splitter 16. Again, care is taken to ensure that the two lens systems 2A and 2B are identical, are accurately positioned and have their movements matched, to ensure the system is 'confocal'.
  • Figure 6 shows part of an arrangement corresponding to that of Figure 1 in which an intermediate imaging stage is inserted in the detector optics.
  • The secondary image is first focussed on a translucent screen 17, e.g. of ground glass, by the optical system 2 and then focussed onto the detector 1 by a detector lens 18.
  • The arrangement shown in Figure 6 is particularly suited to long range sensors (e.g. more than 2 m) where it is desirable to use a larger lens (to give better depth resolution) than would normally be applicable for use with a CCD detector, which is typically about 1 cm square.
  • A mirror type of lens system, such as a Cassegrain mirror microscope objective, may also be used, as this offers a much larger aperture, and hence better depth resolution, than a conventional lens system.
  • The positions of the light source and detector 1 may also be interchanged if desired.
  • The beam splitter 8 may be a simple half-silvered mirror of conventional design. However, a prism type of beam splitter, as shown in Figure 7, is preferred to help reduce the amount of stray light reaching the detector 1. As shown in Figure 7, total internal reflection prevents light from the grid pattern 7 impinging directly on the detector 1. Polarizing beam splitters combined with polarizing filters may also be used to reduce further the problems caused by stray light reaching the detector 1.
  • Figure 8A shows a linear detector array 1A, a lens system 2, lamp 5, projector lens 6 and grid 7A and a beam splitter 8 arranged in a similar manner to the components of the embodiment shown in Figure 1.
  • An optional scanning mirror 19 (to be described further below) is also shown.
  • Figure 8B shows front views of the linear CCD detector array 1A and the linear grid pattern 7A comprising a line of light and dark areas which have dimensions corresponding to those of the pixels of the CCD detector 1A.
  • The 1-dimensional arrangement shown in Figure 8 corresponds to a single row of data from a 2-dimensional sensor and gives a 2-dimensional cross-sectional measurement of the object 3 being viewed rather than a full 3-dimensional image. There are many applications where this is all that is required, but a full 3-dimensional image can be obtained if a mechanism is used to scan the beam through an angle in order to give the extra dimension.
  • A scanning mirror 19 is shown performing this task, although many other methods could be used.
  • The scanning device 19 may operate either faster or slower than the sweep scan of the lens system 2, whichever is more convenient, i.e. either a complete lens system 2 focus sweep can be carried out for each scan position or a complete scan of the object is carried out for each position of the lens system 2 sweep.
  • Alternatively, the extra dimension can be obtained by moving the object being viewed.
  • This arrangement is therefore suitable for situations where the object is being carried past the sensor, e.g. by a conveyor belt.
  • A 1-dimensional system has a number of advantages:-
  • When the object is out of focus, the signal level at the detector 1A is much lower than for a 2-dimensional arrangement. This is because the projected light falls over a large area, most of which is not viewed by the detector 1A. Hence, the further out of focus the object is, the smaller the observed signal will be. This helps simplify the data analysis as the fall off in intensity is proportional to the "out-of-focus" distance (as well as the normal factors). This is in contrast to the 2-dimensional detector arrangement in which the average light level remains much the same, and simply falls off with the distance between the sensor and the object according to the normal inverse square law.
  • A 1-dimensional version of the sensor is therefore suitable for use in sensing:
  • Extrusions, e.g. for checking the cross-sections of extruded metal, rubber or plastics articles or processed food products.
  • The grid pattern and detector of the sensor may each be further reduced to a 'structured point'.
  • This can be provided by a small grid pattern, e.g. in the form of a two-by-two chequer-board pattern 7B, and a small detector, e.g. in the form of a quadrant detector 1B, as shown in Figures 9A and 9B.
  • Alternatively, a two element detector and corresponding two element grid pattern may be used. The signals from such a detector are processed as if forming a single pixel of information. It will be appreciated that the optical arrangement of such a sensor is similar to that of a scanning confocal microscope but with the addition of a structured light source and detector rather than a simple point.
  • This form of sensor acts as a 1D range finder.
  • Extra dimensions can be obtained by using scanning mechanisms to deflect the beam across the object of interest and/or by moving the object past the sensor, e.g. on a conveyor.
  • A double galvanometer scanner could, for example, be used to give a full 3-dimensional image or a single galvanometer scanner to give a 2-dimensional cross-sectional image.
  • The light source may comprise matched light sources, e.g. laser diodes, arranged in a quadrant and connected together in opposite pairs, with an optical system to focus the light onto a smaller quadrant grid mask.
  • In this arrangement, the mean illumination level remains constant and the AC synchronous component of the signal is detected to indicate when the image is in focus (as the opposite pairs of quadrants will be alternately bright and dark).
  • The out of focus components will simply provide a level DC signal (as all four quadrants receive similar signals for both states of the illumination grid).
  • The electronics is arranged to detect the difference in signals received by opposite pairs of quadrants in synchrony with the changing light source.
  • As with the 1-dimensional arrangement, the intensity level falls off rapidly when the object is out of focus. In this case, however, the fall off is very much faster as (in addition to the normal factors) the fall off is approximately proportional to the square of the "out-of-focus" distance (as similarly occurs in a scanning confocal microscope).
  • The synchronous detection scheme described above further distinguishes spurious light scattered from fog and any background illumination from the genuine signal reflected from an in-focus object.
  • The quadrant detector signals A, B, C and D are combined to give a modulation depth signal and a mean intensity signal:
  • Modulation depth M' = (A + C) - (B + D)
  • Mean intensity I' = A + B + C + D
  • This modulation depth signal M' is then passed through a standard synchronous detection circuit.
  • The signal M' is multiplied by plus or minus one depending upon the phase of the clock.
  • The outputs M' and I' then pass through a low pass filter and a divider to give a signal M that corresponds to a measure of the 'in-focus' signal (a sketch of this processing chain is given after this list).
  • The signals M and I are then digitized and sent to the computer.
  • The ordering of the above operations could be changed with no net difference to the result.
  • The division could be performed before or after the low pass filtering.
  • The division could also be performed digitally after the analog to digital conversion (ADC) stage.
  • The signal M may be integrated to give a better signal to noise ratio.
  • A test will also be required to reject measurements where I is very low.
  • A feature of this sensor is that the mean intensity I rapidly drops away if the sweep position is not close to the in-focus position. This fact can be used to skip rapidly over non-useful data.
  • The net effect is that only the temporally modulated signal corresponding to the in-focus grid pattern is detected.
  • The lens sweep position corresponding to the maximum M value can thus be determined and hence the range of the object.
  • The lens system 2 may simply comprise a fast, high quality camera lens, which has the advantage of being relatively inexpensive.
  • A zoom lens may also be used.
  • A wide angle view could then be taken, followed by a telephoto close-up of areas of interest to give a higher resolution.
  • A disadvantage of most zoom lenses is their restricted aperture, which limits the depth resolution possible.
  • The large number of optical surfaces within the lens system also tends to lead to problems with stray light.
  • A mirror lens may be the preferred option for larger scale ranging as conventional lenses become too heavy and cumbersome.
  • A camera mirror lens is compact and its Cassegrain geometry offers a triangulation diameter greater than its aperture number would suggest, and so should provide better depth resolution.
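
The quadrant processing chain described above can be summarised in a compact sketch. This is a minimal illustration under stated assumptions, not the patent's circuit: A, B, C and D are taken to be sampled quadrant signals held as NumPy arrays, clock is the illumination phase, and the filter constant alpha, the threshold i_min and the digital modelling of the analog low pass filter and divider are illustrative choices (as noted above, the ordering of these operations does not affect the result).

```python
import numpy as np

def synchronous_focus_signal(A, B, C, D, clock, alpha=0.05, i_min=1e-3):
    """Sketch of the quadrant processing: M' = (A + C) - (B + D) and
    I' = A + B + C + D, with M' demodulated in phase with the illumination
    clock, both signals low pass filtered, and the ratio M = M'/I' formed."""
    m_prime = ((A + C) - (B + D)).astype(float)    # modulation depth M'
    i_prime = (A + B + C + D).astype(float)        # mean intensity I'
    demod = m_prime * np.where(clock, 1.0, -1.0)   # multiply by +/-1 with the clock
    m_lp = np.zeros_like(demod)
    i_lp = np.zeros_like(i_prime)
    for k in range(1, len(demod)):                 # first-order low pass filters
        m_lp[k] = m_lp[k - 1] + alpha * (demod[k] - m_lp[k - 1])
        i_lp[k] = i_lp[k - 1] + alpha * (i_prime[k] - i_lp[k - 1])
    # normalised 'in-focus' measure; samples with very low I are rejected
    M = np.where(i_lp > i_min, m_lp / np.maximum(i_lp, i_min), 0.0)
    return M, i_lp
```

The lens sweep position giving the largest filtered M value then indicates the range of the object, as described above.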

Abstract

The sensor comprises a structured light source (5, 6, 7) which is adjustable so as to interchange the positions of contrasting areas of the pattern it provides, a detector (1) which comprises an array of detector elements having dimensions matched to the pattern produced by the light source, an optical system (2, 8) for projecting a primary image of the light source pattern onto an object (3) that is to be sensed and for forming a secondary image on the detector (1) of the primary image thus formed on the object (3), positioning means (4) for moving at least part (2) of the optical system so as to vary the focussing of the primary image on the object (3) and processing means (12) for analysing signals produced by the detector (1) in conjunction with information on the adjustment of the optical system (2, 8). The optical arrangement is 'confocal' so that, when the primary image is in focus on the object (3), the secondary image on the detector (1) is also in focus. The processing means (12) is arranged to analyse the images received by the detector (1) with the contrasting areas thereof in the interchanged positions to determine which parts of the images are in focus and hence determine the range of the corresponding parts of the object being viewed.

Description

AN OPTICAL SENSOR
TECHNICAL FIELD
This invention relates to an optical sensor and, more particularly, to a sensor used to sense the range of one or more parts of an object so as to form an image thereof.
BACKGROUND ART
A variety of different optical sensors are available for providing an image of objects within the field of view of the sensor. One such system known as a 'sweep focus ranger' uses a video camera with a single lens of very short depth of field to produce an image in which only a narrow interval of range in object space is in focus at any given time. By using a computer-controlled servo drive, the lens is positioned (or 'swept') with great accuracy over a series of positions so as to view different range 'slices' of an object. A three-dimensional image of the object is then built up from these 'slices'. The system detects which parts of the object are in focus by analysing the detected signal for high frequency components which are caused by features, such as edges or textured parts, which change rapidly across the scene. Because of this, the system is not suitable for imaging plain or smooth surfaces, such as a flat painted wall, which have no such features.
This limitation is common to all passive rangefinding techniques. One way to overcome the problem is to actively project a pattern of light onto the target objects which can then be observed by the sensor. If this pattern contains high spatial frequencies, then these features can be used by the sensor to estimate the range of otherwise plain surfaces. A particularly elegant way of projecting such a pattern is described in US patent number 4629324 by Robotic Vision Systems Inc.
In this prior art, the sensor detects those parts of the target object that are in focus by analysing the image for features that match the spatial frequencies present in the projected pattern. Various analysis techniques such as convolution and synchronous detection are described. However, such methods are potentially time consuming. In an extension of these ideas, a further patent by the same company, US4640620, describes a method which aims to overcome this problem by the use of a liquid crystal light valve device to convert the required high spatial frequency components present in the image into an amplitude variation that can be detected directly.
The present invention aims to provide a simpler solution to this problem.
DISCLOSURE OF INVENTION
According to one aspect of the present invention, there is provided an optical sensor comprising: a structured light source for producing a pattern of contrasting areas; a detector which comprises an array of detector elements having dimensions matched to the pattern produced by the light source; an optical system for projecting a primary image of the light source onto an object that is to be sensed and for forming a secondary image on the detector of the primary image thus formed on the object; adjustment means for adjusting at least part of the optical system so as to vary the focussing of the primary image on the object, the arrangement being such that when the primary image is in focus on the object, the secondary image on the detector is also in focus; and processing means for analysing signals produced by the detector in conjunction with information on the adjustment of the optical system, wherein the structured light source is adjustable so as to interchange the positions of contrasting areas of the pattern produced by the light source and in that the processing means is arranged to analyse the secondary images received by the detector elements with the contrasting areas in the interchanged positions to determine those parts of the secondary images which are in focus on the detector and thereby determine the range of corresponding parts of the object which are thus in focus.
According to another aspect of the invention, there is provided a method of determining the range of at least part of an object being viewed using an optical sensor comprising: a structured light source which produces a pattern of contrasting areas; a detector which comprises an array of detector elements having dimensions matched to the pattern produced by the light source; an optical system which projects a primary image of the light source onto an object that is to be sensed and forms a secondary image on the detector of the primary image thus formed on the object; adjustment means which adjusts at least part of the optical system so as to vary the focussing of the primary image on the object, the arrangement being such that when the primary image is in focus on the object, the secondary image on the detector is also in focus; and processing means which analyses signals produced by the detector in conjunction with information on the adjustment of the optical system, the method involving adjustment of the structured light source so as to interchange the positions of contrasting areas of the pattern produced by the light source and the processing means being arranged to analyse the secondary images received by the detector elements with the contrasting areas in the interchanged positions to determine those parts of the secondary images which are in focus on the detector and thereby determine the range of corresponding parts of the object which are thus in focus.
The invention thus transforms analysis of the image sensed by the detector into the temporal domain.
Preferred and optional features of the invention will be apparent from the following description and from the subsidiary claims of the specification.
BRIEF DESCRIPTION OF DRAWINGS
The invention will now be further described, merely by way of example, with reference to the accompanying drawings, in which:-
Figure 1 illustrates the basic concept of an optical sensor such as that described in US4629324 and shows the main components and the optical pathways between them;
Figure 2 shows a box diagram of an optical sensor of the type shown in Figure 1 with a particular control unit and signal processing arrangement according to one embodiment of the present invention;
Figure 3 illustrates how an image of an object being viewed by the type of system shown in Figures 1 and 2 can be built up and displayed; Figures 4, 5 and 6 show alternative optical arrangements which may be used in the sensor;
Figure 7 shows an alternative form of beam splitter which may be used in the sensor;
Figures 8(A) and 8(B) show a further embodiment of a sensor according to the invention;
Figure 9(A) and 9(B) show another embodiment of a sensor according to the invention; and
Figure 10 shows a box diagram of a control unit and a signal processor which may be used with the embodiment shown in Figure 9.
BEST MODE OF CARRYING OUT THE INVENTION
The optical sensor described herein combines the concepts of a 'sweep focus ranger' of the type described above with the concept of an active confocal light source as described in US4629324 together with means to transform the depth information analysis into the temporal domain.
The term 'confocal' is used in this specification to describe an optical system arranged so that two images formed by the system are in focus at the same time. In most cases, this means that if the relevant optical paths are 'unfolded', the respective images or objects coincide with each other.
The basic concept of a sensor such as that described in US4629324 is illustrated in Figure 1. The sensor comprises a detector 1 and a lens system 2 for focussing an image of an object 3 which is to be viewed onto the detector 1. Positioning means 4 are provided to adjust the position of the lens system 2 to focus different 'slices' of the object 3 on the detector 1. Thus far, the sensor corresponds to a conventional 'sweep focus sensor'. As described in US4629324, the sensor is also provided with a structured light source comprising a lamp 5, a projector lens 6 and a grid or spatial filter 7 together with a beam-splitter 8 which enables an image of the filter 7 to be projected through the lens system 2 onto the object 3. The filter 7 and detector 1 are accurately positioned so that when a primary image of the filter 7 is focussed by the lens system 2 onto the object 3, a secondary image of the primary image formed on the object 3 is also focussed by the lens system 2 onto the detector 1. This is achieved by positioning the detector 1 and filter 7 equidistant from the beam splitter 8 so the system is 'confocal'. An absorber 9 is also provided to help ensure that the portion of the beam from the filter 7 which passes through the beam splitter 8 is absorbed and is not reflected onto the detector 1.
The lens system 2 has a wide aperture with a very small depth of focus and in order to form an image of the object 3 the lens system 2 is successively positioned at hundreds of discrete, precalculated positions and the images received by the detector 1 for each position of the lens system 2 are analysed to detect those parts of the image which are in focus. When the images of the grid 7 on the object 3 and on the detector 1 are in focus, the distance between the lens system 2 and the parts of the object 3 which are in focus at that time can be calculated using the standard lens equation; which for a simple lens is:
1/f = 1/U + 1/V
where f is the focal length of the lens system 2, U is the object distance (i.e. the distance between the lens system 2 and the in focus parts of the object 3) and V is the image distance (i.e. the distance between the lens system 2 and the detector 1).
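As a quick worked illustration of how a calibrated sweep position maps to range, the equation can be solved for U. This is a minimal sketch; the focal length and image distance values below are invented for the example and do not come from the patent:

```python
def object_distance(f_mm: float, v_mm: float) -> float:
    """Solve the thin lens equation 1/f = 1/U + 1/V for the object distance U."""
    return 1.0 / (1.0 / f_mm - 1.0 / v_mm)

# Example: a 50 mm lens with the detector 52 mm behind it places the
# in-focus 'slice' at U = 1/(1/50 - 1/52) = 1300 mm from the lens.
print(object_distance(50.0, 52.0))  # ~1300.0
```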
To enable areas of the object which are in focus to be detected, the images formed on the object 3 and detector 1 are preferably in the form of a uniform structured pattern with a high spatial frequency, i.e. having a small repeat distance, comprising contrasting areas, such as a series of light and dark bands. Such patterns can, for example, be provided by slots or square cut-outs in the filter 7. For such patterns, those parts of the image which are in focus on the object 3 being viewed produce a corresponding image on the detector 1 of light and dark bands, whereas the out of focus parts of the image rapidly 'break-up' and so produce a more uniform illumination of the detector 1 which is also significantly less bright than the light areas of the parts of the image which are in focus. The structured pattern should preferably have as high a spatial frequency as possible, although this is limited by the resolution of the lens system 2 and the size of the detector array, as the depth resolution is proportional to this spatial frequency.
Figure 2 is a box diagram of a sensor of the type shown in Figure 1 together with control and processing units as used in an embodiment of the invention to be described. The light source 5 of the sensor is controlled by a projector lamp control unit 9, a piezo-electric grid control unit 10 is provided for moving the grid or filter 7 (for reasons to be discussed later) and adjustment of the lens system 2 is controlled by a sweep lens control unit 11. The control units 9, 10 and 11, together with the output of the CCD detector 1, are connected to a computer 12 provided with an image processor, frame grabber, frame stores (computer memory corresponding to the pixel array of the detector 1) and a digital signal processor (DSP) 13 for processing the signals received and providing appropriate instructions to the control units. The output of the signal processor may then be displayed on a monitor 14.
In the sensor described herein, the detector 1 comprises an array of detector elements or pixels such as charge coupled devices (CCDs) or charge injection devices (CIDs). Such detectors are preferred over other TV type sensors because the precise nature of the detector geometry makes it possible to align the unfolded optical path of the structured image with the individual pixels of the detector array.
The grid or spatial filter 7 consists of a uniform structured pattern with a high spatial frequency, i.e. having a small repeat distance, comprising contrasting areas, such as a series of light and dark bands or a chequer-board pattern. The spatial frequency of the grid pattern 7 should be matched to the pixel dimensions of the detector 1, i.e. the repeat distance of the pattern should be n pixels wide, where n is a small integer, e.g. two (which is the preferred repeat distance). The value of n will be limited by the resolution of the lens system 2. If a chequer-board type pattern is used, the detector should be matched with the pattern in both dimensions, so for optimum resolution it should comprise an array of square rather than rectangular pixels as the lens system 2 will have the same resolving power in both the x and y dimensions. Also, the use of square pixels simplifies the analysis of data received by the detector.
The light and dark portion of the pattern should also be complementary so that added together they give a uniform intensity. The choice of pattern will depend on the resolution characteristics of the optical system used but should be chosen such that the pattern 'breaks up' as quickly and cleanly as possible as it goes out of focus. The pattern need not have a simple light/dark or clear/opaque structure. It could also comprise a pattern having smoothly changing (e.g. sinusoidal) opacity. This would tend to reduce the higher frequency components of the pattern so that it 'breaks up' more smoothly as it moves out of focus.
In order to avoid edge effects in the image formed on the object 3 or detector 1, it is preferable for the grid pattern to comprise more repeat patterns than the CCD detector, i.e. for the image formed to be larger than the detector array 1.
One possible form of grid pattern 7 is that known as a 'Ronchi ruling resolution target', which comprises a pattern of parallel stripes which are alternately clear and opaque. A good quality lens system 2 (such as that used in a camera) will typically be able to resolve a pattern having between 50 and 125 lines/mm in the image plane depending upon its aperture, and special lenses (such as those used in aerial photography) would be even better. The resolution can also be further improved by limiting the wavelength band used, e.g. by using a laser or sodium lamp to illuminate the grid pattern 7. A typical CCD detector has pixels spaced at about 20 microns square, so with two pixel widths, i.e. 40 microns, corresponding to the pattern repeat distance, this sets a resolution limit of 25 lines/mm. Finer devices are likely to become available in the future. The number of repeats of the structured pattern across the area of the object being viewed should preferably be as high as possible for the lens system and detector array used. With a typical detector array of the type described above comprising a matrix of 512 × 512 pixels, the maximum number of repeats would be 256 across the image. Considerably higher repeat numbers can be achieved using a linear detector array (described further below).
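To make the arithmetic in the preceding paragraph explicit, here is a short illustrative calculation using the figures quoted above (a 20 micron pixel pitch, the preferred two-pixel repeat, and a 512 × 512 array); the variable names are ours, not the patent's:

```python
pixel_pitch_um = 20.0    # typical CCD pixel spacing quoted above
repeat_pixels = 2        # preferred pattern repeat distance, n = 2
pixels_across = 512      # pixels across a typical square detector array

repeat_um = pixel_pitch_um * repeat_pixels              # 40 microns per light/dark cycle
lines_per_mm = 1000.0 / repeat_um                       # resolution limit: 25 lines/mm
repeats_across_image = pixels_across // repeat_pixels   # 256 repeats across the image

print(lines_per_mm, repeats_across_image)  # -> 25.0 256
```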
The lower limit on the spatial frequency of the grid pattern is set by the minimum depth resolution required. As the grid pattern becomes coarser, the number of repeats of the pattern across the image is reduced and the quality of the image produced by the sensor is degraded. An image with less than, say, 40 pattern repeats across it is clearly going to provide only very crude information about the object being sensed.
In the sensor described herein, the high spatial frequency component of the light source is converted into the temporal domain. As indicated in Figure 2, positioning means may be provided for moving the grid pattern 7. This is used to move the grid 7 in its own plane by one half of the pattern repeat distance between each frame grab. This movement is typically very small, e.g. around 20 microns (where the pattern repeats every 2 pixels), and is preferably carried out using piezo-electric positioning means. The effect of this movement is that in one position of the grid the unfolded optical path of the grid pattern overlaps the detector pixels in such a way that half the pixels correspond to light areas and half to dark, and when the grid is moved by one half repeat distance then the opposite situation exists for each pixel (the pattern of light and dark areas on the array of pixels corresponding to the pattern of the structured light source). Each pixel of the detector 1 is thus mapped to an area of the pattern produced by the light source which alternates between light and dark as the structured light source is moved. Pairs of images may then be captured in the first and second frame stores with the grid 7 in the two positions and the intensities (i1 and i2) of corresponding pixels in the two frame stores determined so the following functions can be produced for each pixel:
Brightness: I = i1 + i2
High pass component: M = |i1 - i2| / I
The sum of the intensities i1 and i2 is a measure of the brightness of that pixel and the difference a measure of the depth of modulation. Dividing the difference signal by the brightness gives a normalized measure of the high pass component, which is a measure of how 'in-focus' that pixel is. The sign of the term "i1 - i2" will, of course, alternate from pixel to pixel depending whether i1 or i2 corresponds to a light area of the grid pattern.
Those parts of the image which are in focus on the object 3 and on the detector 1 produce a pattern of light and dark areas which alternate as the grid 7 is moved. One of the signals i1 and i2 will therefore be high (for a bright area) and one will be low (for a dark area). In contrast, for parts of the image which are not in focus on the detector 1, the intensities i1 and i2 will be similar to each other and lower than the intensity of an in focus bright area.
If the background illumination is significant, a correction can be applied by capturing an image with the light source switched off in a third frame store. The background intensity i3 for each pixel can then be subtracted from the values i1 and i2 used in the above equations.
In order to avoid spurious, noisy data, it is desirable to impose a minimum threshold value on the signal 'I'. Where 'I' is very low, for example when looking at a black object or when the object is very distant, then that sample should be ignored (e.g. by setting M to zero or some such value).
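The per-pixel arithmetic above is compact enough to state directly. The following is a minimal sketch, assuming the three frame grabs are available as NumPy arrays; the function name and the threshold value are illustrative, not taken from the patent:

```python
import numpy as np

def focus_measure(frame1, frame2, frame3, i_min=8.0):
    """Per-pixel brightness I and normalised modulation M from three frame grabs:
    frame1 and frame2 with the grid in its two positions, frame3 with the lamp
    off. Samples whose corrected brightness falls below i_min are ignored."""
    i1 = frame1.astype(float) - frame3      # background-corrected, grid position 1
    i2 = frame2.astype(float) - frame3      # background-corrected, grid position 2
    brightness = i1 + i2                    # I = i1 + i2
    modulation = np.zeros_like(brightness)  # M is set to zero where I is too low
    ok = brightness > i_min
    modulation[ok] = np.abs(i1 - i2)[ok] / brightness[ok]  # M = |i1 - i2| / I
    return brightness, modulation
```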
The process of constructing a complete 3D surface map of an object being viewed may thus proceed as follows:-
1) Clear the 3D surface model map file.
2) Set the lens sweep to the starting position.
3) For each lens sweep position three frames are captured into the three frame stores as follows:
1 = Illumination on, grid pattern in position 1
2 = Illumination on, grid pattern in position 2
3 = No Illumination
4) The DSP is now used to perform the following functions on the raw image data: for each pixel (or group of pixels) the functions I and M described above are constructed.
5) The background signal is then subtracted: i1 := i1 - i3
i2 := i2 - i3
6) Construct mean intensity I into frame store 3:
I = i3 := i1 + i2
7) Construct modulation depth M if I exceeds threshold value into frame store 1:
IF i3 > min THEN
M = i1 := |i1 - i2| / i3 (or could use I for i3)
ELSE
M = i1 := 0
8) The function M can be displayed from frame store 1 on the monitor (this corresponds to the in-focus contours which are shown bright against a dark background).
9) Clean up this 'M' data:
a) Check for pixel to pixel continuity.
b) Look for the local maxima positions, interpolating to sub-pixel positions, then convert to corresponding object coordinates.
c) Construct data chains describing the in-focus contour positions.
d) Compare these contours with the adjacent sweep position contours.
e) Add this sweep position's contours to the 3D surface model map.
10) Proceed to next lens sweep position and repeat the above stages until a complete surface model map is constructed.
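Put together, the sweep has the following shape. This is an illustrative sketch only: set_lens_position, set_grid_position, lamp, grab_frame and extract_in_focus_contours are hypothetical placeholders for the hardware interfaces and the contour clean-up of step 9, and focus_measure is the sketch given earlier:

```python
def build_surface_map(sweep_positions, i_min=8.0):
    """Sketch of steps 1-10: for each lens sweep position, grab three frames,
    form I and M per pixel, and accumulate the in-focus contours in a map."""
    surface_map = []                              # step 1: clear the model map
    for pos in sweep_positions:                   # steps 2 and 10: step the sweep
        set_lens_position(pos)
        lamp(True)
        set_grid_position(1)
        f1 = grab_frame()                         # frame 1: grid in position 1
        set_grid_position(2)
        f2 = grab_frame()                         # frame 2: grid in position 2
        lamp(False)
        f3 = grab_frame()                         # frame 3: background only
        I, M = focus_measure(f1, f2, f3, i_min)   # steps 4 to 7
        contours = extract_in_focus_contours(M)   # step 9: maxima to contour chains
        surface_map.append((pos, contours))       # step 9e: add to the model map
    return surface_map
```

Each stored sweep position can then be converted to an object range via the lens equation given earlier.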
The intensity information signal 'I' is also helpful in interpreting the data: since the average of i1 and i2 is effectively a uniform illumination signal (i.e. as though no structured light source were used), 'I' corresponds to the information obtained by a conventional sweep focus ranger. Displaying the signal 'I' on a monitor gives a normal TV style image of the scene, except for the very small depth of focus inherent in the system. If desired, changing patterns of intensity found using standard edge detection methods can therefore be used to provide information on in-focus edges and textured areas of the object, and this information can be added to that obtained from the 'M' signal (which is provided by the use of a structured light source).
The technique of temporal modulation has the advantage that as each pixel in the image is analysed independently, edges or textures present on the object do not interfere with the depth measurement (this may not be the case for techniques based upon spatial modulation). The technique can also be used with a chequer-board grid pattern instead of vertical or horizontal bands. Such a pattern would probably 'break-up' more effectively than the bands as the lens system 2 moves out of focus and would therefore be the preferred choice. It should be noted that the sensors described above use a structured light source of relatively high spatial frequency to enable the detector to detect accurately when the image is in focus and when it 'breaks-up' as it goes out of focus. This is in contrast with some known range finding systems which rely on detecting a shift between two halves of the image as it goes out of focus.
The calculations described above may be performed pixel by pixel in software or by using specialist digital signal processor (DSP) hardware at video frame rates. A display of the function M on a CRT would have the appearance of a dark screen with bright contours corresponding to the "in-focus" components of the object being viewed. As the focus of the lens system is swept, so these contours would move to show the in-focus parts of the object. From an analysis of these contours a 3-dimensional map of the object can be constructed.
A piezo-electric positioner can also be used on the detector 1 to increase the effective resolution in both the spatial and depth dimensions. To do this, the detector array is moved so as to effectively provide a smaller pixel dimension (a technique used in some high definition CCD cameras) and thus allow a finer grid pattern to be used.
The use of piezo-electric devices for the fine positioning of an article is well known, e.g. in the high definition CCD cameras mentioned above and in scanning tunnelling microscopes; such devices are capable of positioning an article very accurately, even down to atomic dimensions. As an alternative to using a piezo-electric positioner, other devices could be used, such as a loudspeaker voice coil positioning mechanism.
Instead of moving part of the light source it is also possible to modulate the light source directly, e.g. by using:
1) a liquid crystal display (LCD) (e.g. of the type which can be used with some personal computers to project images onto an overhead projector)
2) a small cathode ray tube (CRT) to write the pattern directly
3) magneto-optical modulation (i.e. the Faraday effect) together with polarized light.
4) a purpose made, interlaced, fibre optic light guide bundle with two alternating light sources.
Figure 3 illustrates how the lens system 2 is adjusted to focus on successive planes of the object 3, and a 3-dimensional electronic image, or range map, of the object is built up from these 'slices' in the computer memory. The contours of the object 3 which are in focus for any particular position of the lens system 2 can be displayed on the monitor 14. Other images of the 3-dimensional model built up in this way can also be displayed using well known image display and processing techniques.
The optical sensor described above is an 'active' system (i.e. uses a light source to illuminate the object being viewed) rather than passive (i.e. relying on ambient light to illuminate the object). As it projects a pattern onto the object being viewed it is able to sense plain, un-textured surfaces, such as painted walls, floors, skin, etc., and is not restricted to edge or textured features like a conventional sweep focus ranger. The use of a pattern which is projected onto the object to be viewed and which is of a form which can be easily analysed to determine those parts which are in focus thus provides significant advantages over a conventional 'sweep focus ranger'. In addition, since the outgoing projected beam is subject to the same focussing sweep as the incoming beam sensed by the detector 1, the projected pattern is only focussed on those parts of the object which are themselves in focus. This improves on the resolving capability of the conventional sweep focus ranger by effectively 'doubling up' the focussing action using both the detector 1 and the light source. The symmetry of the optical system means that when the object is in focus, the spatial frequency of the signal formed at the detector 1 will exactly equal that of the grid pattern 7.
The scale of measurements over which the type of sensor described above can be used ranges from the large, e.g. a few metres, down to the small, e.g. a few millimetres. It should also be possible to apply the same method to the very small, so effectively giving a 3D microscope, using relatively simple and inexpensive equipment.
In general, the technology is easier to apply at the smaller rather than the larger scale. This is because for sensors working on the small scale, the depth resolution is comparable with the spatial resolution, whereas on the larger scale the depth resolution falls off (which is to be expected as the effective triangulation angle of the lens system is reduced). Also, being an active system, the illumination required will increase as the square of the distance between the sensor and the object being viewed. Nevertheless, the system is well suited to use as a robot sensor covering a range up to several metres.
One of the problems with conventional microscopes is the "fuzziness" that results from the very small depth of focus. To try to overcome this, CCD cameras have been attached to microscopes to capture images directly, and software packages are now available which can process a series of captured images and perform a de-convolution operation to remove the fuzziness. In this way images of a clarity comparable with those obtainable with a very much more expensive confocal scanning microscope are possible. The method described above of capturing and differencing pairs of frames also enables clear cross-sectional images to be formed. This is in many ways easier than the software approach as the data contains height information in a much more accessible form. By scanning over height as described (by moving the lens system 2) a 3D image of the sample can be built up. In the case of biological samples which are of a translucent nature, it may no longer be appropriate to form a simple surface profile. Instead, it may be more appropriate to form a full 3D "optical density" model to reflect this quality of the sample.
Another common microscopy technique that can be adapted for use with the sensor described above is fluorescence. This involves staining the sample with a fluorescent dye so that when illuminated with light of a particular wavelength, e.g. UV light, it will emit light of some other wavelength, say yellow. In this way it is possible to see which part of a sample has taken up the dye. If a light source of the appropriate wavelength is used and the detector arranged to respond only to the resulting fluorescence by the use of an appropriate filter, a 3D fluorescence model of the sample can be built up.
It will be appreciated that in such applications, the lens system 2 acts as the microscope objective lens.
Various optical arrangements can be used in the type of sensor described above and some of these are illustrated in Figures 4, 5 and 6. Figure 4 shows a "side-by-side" arrangement in which separate lens systems 2A and 2B are used to focus a primary image of the grid 7 on the object 3 and to focus the secondary image thereof on the CCD detector 1. The system is again arranged so that when the primary image is in focus on the object 3, the secondary image on the detector is also in focus. This is achieved by making the lens systems 2A and 2B identical and positioning them side by side or one above the other (depending whether a horizontal or vertical pattern is used) so they are equi-distant from the projector grid 7 and detector 1, respectively. Movement of the two lens systems 2A and 2B would also be matched during the sweeping action. The two optical systems in this case are effectively combined by overlap on the object 3 of the area illuminated by the projector grid 7 and the area sensed by the detector 1.
Figure 5 shows an arrangement in which separate optical systems are combined using a mirror (or prism) 15 and a beam-splitter 16. Again, care is taken to ensure that the two lens systems 2A and 2B are identical and accurately positioned, and that their movements are matched, to ensure the system is 'confocal'.
Figure 6 shows part of an arrangement corresponding to that of Figure 1 in which an intermediate imaging stage is inserted in the detector optics. The secondary image is first focussed on a translucent screen 17, e.g. of ground glass, by the optical system 2 and then focussed onto the detector 1 by a detector lens 18. The arrangement shown in Figure 6 is particularly suited to long range sensors (e.g. more than 2 m) where it is desirable to use a larger lens (to give better depth resolution) than would normally be applicable for use with a CCD detector which is typically about 1 cm square.
If a structured light source having a pattern with a repeat distance greater than two pixels (i.e. n > 2) is used, the preferred solution is to use an additional imaging stage as in Figure 6 to effectively reduce n to two.
However, larger CCD arrays are likely to become more readily available in the near future. When the sensor is designed to work on the very small scale, it is easier to sweep not just the lens system 2 but the entire sensor relative to the object being viewed (as in a conventional microscope). As the lens to detector distance remains constant, the field of view angle and the magnification remain constant so each pixel of the detector 1 corresponds to a fixed x,y location in the object plane. Analysis of the image formed on the detector 1 is thus greatly simplified.
When the system is used as a microscope, it is preferable to use a mirror type of lens system, such as a Cassegrain mirror type microscope objective, as this offers a much larger aperture, and hence better depth resolution, than a conventional lens system.
In each of the arrangements described above, the positions of the light source and detector 1 may also be interchanged if desired.
The beam splitter 8 may be a simple half-silvered mirror of conventional design. However, a prism type of beam splitter, as shown in Figure 7, is preferred to help reduce the amount of stray light reaching the detector 1. As shown in Figure 7, total internal reflection prevents light from the grid pattern 7 impinging directly on the detector 1. Polarizing beam splitters combined with polarizing filters may also be used to reduce further the problems caused by stray light reaching the detector 1.
The embodiments described above use a 2-dimensional grid pattern and a 2-dimensional detector array 1. A similar arrangement can be used with a 1-dimensional grid pattern, i.e. a single line of alternate light and dark areas, and a 1-dimensional detector array, i.e. a single line of detector pixels. Such an arrangement is illustrated in Figures 8A and 8B. Figure 8A shows a linear detector array 1A, a lens system 2, lamp 5, projector lens 6 and grid 7A and a beam splitter 8 arranged in a similar manner to the components of the embodiment shown in Figure 1. An optional scanning mirror 19 (to be described further below) is also shown. Figure 8B shows front views of the linear CCD detector array 1A and the linear grid pattern 7A comprising a line of light and dark areas which have dimensions corresponding to those of the pixels of the CCD detector 1A. The 1-dimensional arrangement shown in Figure 8 corresponds to a single row of data from a 2-dimensional sensor and gives a 2-dimensional cross-sectional measurement of the object 3 being viewed rather than a full 3-dimensional image. There are many applications where this is all that is required, but a full 3-dimensional image can be obtained if a mechanism is used to scan the beam through an angle in order to give the extra dimension. In Figure 8A, a scanning mirror 19 is shown performing this task although many other methods could be used. The scanning device 19 may either operate faster or slower than the sweep scan of the lens system 2, whichever is more convenient, i.e. either a complete lens system 2 focus sweep can be carried out for each scan position or a complete scan of the object is carried out for each position of the lens system 2 sweep.
Alternatively, the extra dimension can be obtained by moving the object being viewed. This arrangement is therefore suitable for situations where the object is carried past the sensor, e.g. by a conveyor belt.
A 1-dimensional system has a number of advantages:-
1) Lower cost when only 2-dimensional information is required.
2) The much lower data rates from the 1-dimensional detector allow more rapid sweep scans (the full 3-dimensional system described above is limited by the video data readout speed, i.e. 50-60 Hz for standard video). A 1-dimensional detector can be read out at many thousands of scans/sec.
3) Less computing is required to process 2-dimensional data than to process 3-dimensional data.
4) Longer detector arrays are available in 1-dimensional form. Typical 2-dimensional detectors have about 512 × 512 pixels, whereas 1-dimensional, linear detectors having 4096 pixels are readily available.
5) Perhaps the most important advantage is that when the object being viewed is not in focus, the signal level at the detector 1A is much lower than for a 2-dimensional arrangement. This is because the projected light falls over a large area, most of which is not viewed by the detector 1A. Hence, the further out of focus the object is, the smaller the observed signal will be. This helps simplify the data analysis as the fall-off in intensity is proportional to the "out-of-focus" distance (as well as the normal factors). This is in contrast to the 2-dimensional detector arrangement in which the average light level remains much the same, and simply falls off with the distance between the sensor and the object according to the normal inverse square law.
A 1-dimensional version of the sensor is therefore suitable for use in sensing:
1) Logs in a saw mill, e.g. in deciding the most economic way to cut each log.
2) Extrusions, e.g. for checking the cross-sections of extruded metal, rubber or plastics articles or processed food products.
3) Food, e.g. for measuring the size of items of fruit, fish etc.
4) Objects carried by conveyors, e.g. for identifying objects or their position and orientation.
In another extension of the principle described above, the grid pattern and detector of the sensor may each be further reduced to a 'structured point'. In practice, this can be provided by a small grid pattern, e.g. in the form of a two-by-two chequer-board pattern 7B, and a small detector, e.g. in the form of a quadrant detector 1B, as shown in Figures 9A and 9B. Alternatively, a two element detector and corresponding two element grid pattern may be used. The signals from such a detector are processed as if forming a single pixel of information. It will be appreciated that the optical arrangement of such a sensor is similar to that of a scanning confocal microscope but with the addition of a structured light source and detector rather than a simple point. In its simplest form, this form of sensor acts as a 1D range finder. As in the 1-dimensional detector case described above, extra dimensions can be obtained by using scanning mechanisms to deflect the beam across the object of interest and/or by moving the object past the sensor, e.g. on a conveyor. A double galvanometer scanner could, for example, be used to give a full 3-dimensional image or a single galvanometer scanner to give a 2-dimensional cross-sectional image.
With temporal modulation the illuminated pair of grid elements is alternated between observations. However, the data rates possible using piezo-electric positioning means would be too slow to exploit the sampling rates of the detector so other means should preferably be used to alternate the light source, such as:
1) a Faraday effect screen and projection optics.
2) a pair of matched laser diodes feeding light into pairs of fibre optics which are connected to opposite pairs of the quadrant.
3) using four matched light sources, e.g. laser diodes, arranged in a quadrant and connected together in opposite pairs with an optical system to focus the light onto a smaller quadrant grid mask.
4) using an acousto-optic modulator to deflect a laser beam between the sections of the grid pattern.
With this form of sensor, the mean illumination level remains constant and the AC synchronous component of the signal is detected to indicate when the image is in focus (as the opposite pairs of quadrants will be alternately bright and dark). The out of focus components will simply provide a level DC signal (as all four quadrants receive similar signals for both states of the illumination grid). The electronics is arranged to detect the difference in signals received by opposite pairs of quadrants in synchrony with the changing light source. With this form of sensor, all the projected light has to come from the small, structured light source, so the most efficient arrangement is to use a laser as the light source. Also, available quadrant detectors may be larger than required, so an intermediate imaging stage may be required to match the size of the detector with the grid pattern.
As with the 1-dimensional sensor described above, the intensity level falls off rapidly when the object is out of focus. In this case, however, the fall off is very much faster as (in addition to the normal factors) the fall off is approximately proportional to the square of the "out-of-focus" distance (as similarly occurs in a scanning confocal microscope).
This provides the possibility of using the sensor for imaging in difficult situations, for example through fog, smoke or in cloudy water. This is because:
1) Each small area of the object is looked at in turn. (Illumination of the entire object would result in 'fogging out' the entire image in the same way as using full beam headlights of a car in fog can reduce visibility due to an increase in the amount of scattered light).
2) The intensity of the unwanted scattered light from out-of-focus regions of the object is kept to a minimum by the fall-off being proportional to the square of the 'out-of-focus' distance as described above.
3) The scattered light that does reach the detector will be out-of-focus and so affect all four quadrants equally. This contributes to 'I' (the mean signal) but contributes very little to the difference term 'M'.
4) The synchronous detection scheme described above further distinguishes spurious light scattered from the fog and any background illumination from the genuine signal reflected from an in-focus object.
As with the 1-dimensional version described above, whether the lens focus sweep is fast and the image scanning slow or vice-versa is a matter of convenience. With an intelligent system, a particular sample can be terminated when the in-focus position has been found and it can then proceed to the next position, using the current range information as the starting guess for the next element, and so on.
Figure 10 shows the overall block structure for the embodiment shown in Figure 9 having a simple quadrant detector and grid pattern. This system assumes that the quadrant light source is constructed using two matched laser diodes connected to pairs of fibre optics arranged in a quadrant pattern that matches the detector. A clock signal is used to synchronize the activity of the light sources and detection electronics. The light sources are illuminated alternately in such a way that the average light intensity remains constant. The outputs from the four quadrants A,B,C,D of the detector are amplified and combined to form the following terms.
Mean Intensity I' = (A+C) + (B+D)
Modulation depth M' = (A+C) - (B+D)
This modulation depth signal M' is then passed through a standard synchronous detection circuit: the signal M' is multiplied by plus or minus one depending upon the phase of the clock. The outputs M' and I' then pass through a low pass filter and a divider to give a signal M that corresponds to a measure of the 'in-focus' signal. The signals M and I are then digitized and sent to the computer.
Clearly the ordering of the above operations could be changed with no net difference to the result. For example, the division could be performed before or after the low pass filtering. The division could also be performed digitally after the analog to digital conversion (ADC) stage.
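A purely digital rendering of this detection chain might look as follows. This is a sketch only: the moving-average low-pass filter, the buffered quadrant samples and the representation of the clock as a +/-1 sequence are assumptions, and the division is here performed after the filtering, one of the equivalent orderings just noted.

```python
import numpy as np

def synchronous_detect(a, b, c, d, clock, filter_len=64):
    """Recover the normalized in-focus measure M from sampled quadrant
    signals A, B, C, D and the light source clock (+1/-1 per sample)."""
    i_raw = (a + c) + (b + d)              # mean intensity I'
    m_raw = ((a + c) - (b + d)) * clock    # M' multiplied by +/-1 (synchronous detection)
    lp = np.ones(filter_len) / filter_len  # simple moving-average low-pass filter
    I = np.convolve(i_raw, lp, mode='same')
    M_sync = np.convolve(m_raw, lp, mode='same')
    return M_sync / np.maximum(I, 1e-9)    # divider stage; guard against I near zero
```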
If necessary, the signal M may be integrated to give a better signal to noise ratio. A test will also be required to reject measurements where I is very low.
As discussed above, the characteristics of this sensor are such that the mean intensity I rapidly drops away if the sweep position is not close to the in-focus position. This fact can be used to skip rapidly over non-useful data.
The net effect is that only the temporally modulated signal corresponding to the in-focus grid pattern is detected. The lens sweep position corresponding to the maximum M value can thus be determined and hence the range of the object. By combining this procedure with a beam scanning mechanism, 2-dimensional or 3-dimensional range maps can be constructed.
The projection of a structured image onto the object which is to be viewed, together with the use of a detector able to sense the structure of the image, provides the advantages discussed above and enables the resultant image received by the detector 1B to be quickly and reliably analysed in the manner described to provide the required information. In addition, the use of this form of image and detector enables a system with sufficient performance for a wide variety of applications to be provided using relatively simple and inexpensive apparatus. In all the arrangements described, temporal modulation (in which the grid pattern is moved half a repeat distance or the light and dark portions of the image are alternated between observations) is used as previously described to simplify the analysis of the signals received by the detector.
It will be appreciated that shifting the signal analysis into the temporal domain provides significant advantages over prior art techniques. The analysis is made much simpler, allowing the signals to be analysed more quickly, so that a greater amount of image information can be obtained within a given time. Individual pixels of the detector can also be analysed independently of each other.
The performance of the sensors described above will depend critically upon the characteristics of the lens system 2. Depth resolution improves in proportion to the aperture of the lens, so the larger the aperture the better. Good depth resolution also requires a lens of high resolving power. One way to improve resolution is to restrict the spectral bandwidth (to minimize chromatic aberration) with filters or by choice of light source, e.g. laser diodes. A filter would then be used in front of the detector 1 to match the light source in order to cut out unwanted background light.
The lens system 2 may simply comprise a fast, high quality camera lens which has the advantage of being relatively inexpensive.
A single element large aperture aspheric lens may also be used. This has minimal spherical aberrations and would give good resolution if used over a narrow spectral bandwidth.
A zoom lens may also be used. A wide angle view could then be taken, followed by a telephoto close-up of areas of interest to give a higher resolution. A disadvantage of most zoom lenses is their restricted aperture, which limits the depth resolution possible. The large number of optical surfaces within the lens system also tends to lead to problems with stray light.
A mirror lens may be the preferred option for larger scale ranging as conventional lenses become too heavy and cumbersome. A camera mirror lens is compact and its Cassegrain geometry offers a triangulation diameter greater than its aperture number and so should provide better depth resolution.
A variable focus lens system may also be used. One problem with a normal lens system is that the magnification changes during the sweep of the lens (as the detector to lens distance changes). This means that it is not possible to associate a particular pixel with a direction in space, and the same pixel can come into focus more than once during the lens sweep (at least in principle, particularly for those pixels towards the edge of the detector). By using a variable focal length lens system such as a zoom lens, we can in principle perform the sweep in focusing distance and at the same time compensate by changing the focal length such that the net magnification, i.e. the field of view, remains constant. With this arrangement, it becomes possible to associate each pixel with a direction vector in space which can simplify the analysis task of building the 3D model of the object. Another alternative is to change the radii of curvature of the lens to adjust the focus. This can be done, for example, with a flexible liquid filled lens, in which the internal pressure controls the curvature of the lens, in combination with a conventional lens. With this type of lens, the optical components remain in a fixed position during the sweep and the change in focus is achieved by changing the curvature and hence the focal length of the lens. The range in lens power required for the sweep is relatively small so the liquid filled lens need only be relatively weak compared to the conventional lens which provides most of the power of the lens system. Other types of variable focus lens systems could also be used.
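As a sketch of the constraint involved, using the idealized thin-lens model rather than the actual multi-element lens system (an illustrative assumption): with focusing distance $u$, image distance $v$ and magnification $m = v/u$,

$$
\frac{1}{u} + \frac{1}{v} = \frac{1}{f}, \qquad v = m\,u \quad\Longrightarrow\quad f = \frac{m}{m+1}\,u ,
$$

so holding the magnification $m$ (and hence the field of view) constant while the focusing distance $u$ is swept requires the focal length $f$ of the system to vary in direct proportion to $u$.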
Ideally the lens system should have a flat image plane, but in practice that may not be the case. The circular symmetry of the lens system should, however, imply a similar symmetry to the image plane position. A calibration procedure can be used to establish the relationship between the lens sweep position and the in-focus position as a function of the off-axis distance. From this a calibration table can be constructed relating the target distance to sweep position and off-axis distance. Suitable functions can then be fitted and used in converting observations into range. In addition, if a suitable calibration pattern is used that can be read by the detector, e.g. a piece of graph paper, the scaling in the image plane can be automatically determined and fitted in a similar manner.
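One way such a calibration might be fitted is by least squares over a small polynomial basis in sweep position and off-axis distance. The sketch below is illustrative only; the polynomial form, its order and all names are assumptions rather than the calibration actually used.

```python
import numpy as np

def _basis(sweep_pos, off_axis, order):
    """Polynomial basis terms (sweep_pos**i * off_axis**j) up to 'order'."""
    s = np.asarray(sweep_pos, dtype=float)
    r = np.asarray(off_axis, dtype=float)
    return [s**i * r**j for i in range(order + 1) for j in range(order + 1)]

def fit_calibration(sweep_pos, off_axis, target_range, order=2):
    """Least-squares fit of range = f(sweep position, off-axis distance)
    from observations of a calibration target at known distances."""
    A = np.stack(_basis(sweep_pos, off_axis, order), axis=-1)
    coeffs, *_ = np.linalg.lstsq(A, target_range, rcond=None)
    return coeffs

def range_from_observation(coeffs, sweep_pos, off_axis, order=2):
    """Convert an observed in-focus sweep position into a range estimate."""
    return np.stack(_basis(sweep_pos, off_axis, order), axis=-1) @ coeffs
```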
A possible problem with the optical sensors described above is the likelihood of stray light from the light source entering the detector 1. Such stray light will not be in focus, but will have the effect of reducing the contrast of the images and thus degrading performance. This problem becomes worse as the scale of the ranging increases as a larger scale instrument will require a brighter light source and will receive back less light from the object being viewed. In order to minimize this problem a number of strategies are possible:-
1) Use a high quality lens system with multi-coated surfaces.
2) Match the field of view (FoV) of the light source to the needs of the lens system.
3) Use crossed polarizing filters over the grid pattern and the detector.
4) Use a polarizing beam-splitter with the above.
5) Use the minimum number of optical surfaces (a single element aspheric lens may be useful).
6) Make sure the unwanted half of the beam that emerges from the beam-splitter is well absorbed.
Figures 4 and 5 show another approach to the stray light problem in the case of large scale ranging, where the single lens is replaced by a matched pair of separate lenses for the light source and detector. In Figure 4, the two lenses are shown mounted one above the other and with their axes parallel. In this case, a vertical grid pattern must be used because of parallax, and the lenses should have a flat image plane so that the vertical bands align with the pixel array of the detector. An intermediate solution is for the two beams to be combined externally rather than internally, as shown in Figure 5. As fewer optical surfaces are involved, the stray light problem will be less acute. In this second example, the preferred chequer-board grid pattern can again be used.
The other unwanted source of illumination is background or ambient light. This becomes progressively worse as the scale of the ranging increases and will determine the upper limit of usefulness of the sensor. The "active" projected light source must therefore be dominant. A number of precautions can be applied to minimize this problem:-
1) Use a bright, narrow spectral-bandwidth light source with a matching filter over the detector.
2) Use a stroboscopic light source with synchronized short exposure frame-grab.
3) Capture images with no internal light source and subtract this background signal from the active images.
The sensor relies on the triangulation angle afforded by the finite diameter of the lens system 2. Light is collected from the object 3 over the full area of the lens system 2 and the larger this area the smaller the depth of focus. In the same way, it is important that the outgoing light from the sensor also makes full use of the total lens area available in order to project a pattern with a short depth of field onto the object 3. The optics of the light source should therefore be designed so that from each point on the grid a cone of light is directed towards the lens so as to cover its full area. This cone of light should be of a uniform intensity (or even perhaps biased towards the edge of the lens which contributes the greater triangulation angle).
Optical sensors of the type described herein have a wide range of possible applications. These include:
1) Autonomous guided vehicles (AGVs), i.e. free ranging robotic vehicles which use their own sensors to plan their motion.
2) Medical imaging, e.g. facial mapping for planning plastic surgery or mapping of a patient's back to investigate spinal disorders.
3) Dentistry, e.g. mapping dental cavities for automatic machining of ceramic filling inserts.
4) Inspection of products such as printed circuit boards, e.g. checking components are in place etc.
5) Industrial gauging, e.g. measuring components such as vehicle bodies automatically.
INDUSTRIAL APPLICABILITY
It will be appreciated from the above that the sensor described herein can be used in a wide variety of industries and a wide variety of applications.

Claims

CLAIMS
1. An optical sensor comprising: a structured light source for producing a pattern of contrasting areas; a detector which comprises an array of detector elements having dimensions matched to the pattern produced by the light source; an optical system for projecting a primary image of the light source onto an object that is to be sensed and for forming a secondary image on the detector of the primary image thus formed on the object; adjustment means for adjusting at least part of the optical system so as to vary the focussing of the primary image on the object, the arrangement being such that when the primary image is in focus on the object, the secondary image on the detector is also in focus; and processing means for analysing signals produced by the detector in conjunction with information on the adjustment of the optical system, wherein the structured light source is adjustable so as to interchange the positions of contrasting areas of the pattern produced by the light source and in that the processing means is arranged to analyse the secondary images received by the detector elements with the contrasting areas in the interchanged positions to determine those parts of the secondary images which are in focus on the detector and thereby determine the range of corresponding parts of the object which are thus in focus.
2. An optical sensor as claimed in claim 1 in which the structured light source is arranged to produce a pattern of light and dark areas, for example light and dark bands or a chequer-board pattern, and adjustment means are provided for moving the structured light source so as to interchange the positions of the light and dark areas.
3. An optical sensor as claimed in claim 1 in which the structured light source comprises a plurality of elements arranged in a pattern which can be alternately switched on and off so as to interchange the positions of contrasting areas of the primary and secondary images formed on the object and on the detector.
4. An optical sensor as claimed in claim 1, 2 or 3 in which the structured light source comprises a regular pattern having a repeat distance which is n times the dimensions of the detector elements, where n is a small integer, e.g. 2.
5. An optical sensor as claimed in any preceding claim in which the processing means is arranged to analyse signals representing the intensities (i1 and i2) of illumination received by each detector element when the image thereon is in the two respective positions and to determine those areas of the detector for which the difference between the two signals (|i1-i2|) is relatively high and thus indicate the parts of the secondary image which are in focus on the detector.
6. An optical sensor as claimed in any preceding claim in which the detector comprises a 2-dimensional array of detector elements and the structured light source comprises a corresponding 2-dimensional pattern of contrasting areas and in which the processing means is arranged to form a 2-dimensional image of those parts of the object which are in focus for a given focussing adjustment of the optical system and to build up a 3-dimensional model of the object from a plurality of said 2-dimensional images as the focussing adjustment of the optical system is varied.
7. An optical sensor as claimed in any preceding claim in which the detector comprises a linear array of detector elements and the structured light source comprises a corresponding linear pattern of light and dark areas and in which the processing means is arranged to build up a 2-dimensional image of a cross-section of the object being sensed by detecting those parts of the secondary image which are in focus on the detector as the focussing adjustment of the optical system is varied.
8. An optical sensor as claimed in any preceding claim in which the detector effectively comprises a structured point detector, e.g. in the form of a quadrant or two element sensor, and the structured light source effectively comprises a structured point source, e.g. in the form of a corresponding two-by-two chequer-board or two element pattern, and in which the processing means is arranged to determine the range of an object being sensed by detecting when the structure of the image formed on the detector, as the focussing adjustment of the optical system is varied, corresponds to the pattern of the structured light source.
9. An optical sensor as claimed in claim 7 or 8 comprising scanning means for scanning the image produced by the light source relative to the object being sensed whereby at least one additional dimension may be added to the image the sensor is able to form of the object.
10. A method of determining the range of at least part of an object being viewed using an optical sensor comprising: a structured light source which produces a pattern of contrasting areas; a detector which comprises an array of detector elements having dimensions matched to the pattern produced by the light source; an optical system which projects a primary image of the light source onto an object that is to be sensed and forms a secondary image on the detector of the primary image thus formed on the object; adjustment means which adjusts at least part of the optical system so as to vary the focussing of the primary image on the object, the arrangement being such that when the primary image is in focus on the object, the secondary image on the detector is also in focus; and processing means which analyses signals produced by the detector in conjunction with information on the adjustment of the optical system, the method involving adjustment of the structured light source so as to interchange the positions of contrasting areas of the pattern produced by the light source and the processing means being arranged to analyse the secondary images received by the detector elements with the contrasting areas in the interchanged positions to determine those parts of the secondary images which are in focus on the detector and thereby determine the range of corresponding parts of the object which are thus in focus.
PCT/GB1992/000221 1991-02-12 1992-02-06 An optical sensor WO1992014118A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
JP4504040A JP2973332B2 (en) 1991-02-12 1992-02-06 Light sensor
US08/104,084 US5381236A (en) 1991-02-12 1992-02-06 Optical sensor for imaging an object
DE69207176T DE69207176T2 (en) 1991-02-12 1992-02-06 Optical sensor
EP92904077A EP0571431B1 (en) 1991-02-12 1992-02-06 An optical sensor

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB919102903A GB9102903D0 (en) 1991-02-12 1991-02-12 An optical sensor
GB9102903.3 1991-02-12

Publications (1)

Publication Number Publication Date
WO1992014118A1 true WO1992014118A1 (en) 1992-08-20

Family

ID=10689869

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB1992/000221 WO1992014118A1 (en) 1991-02-12 1992-02-06 An optical sensor

Country Status (7)

Country Link
US (1) US5381236A (en)
EP (1) EP0571431B1 (en)
JP (1) JP2973332B2 (en)
AU (1) AU1195892A (en)
DE (1) DE69207176T2 (en)
GB (1) GB9102903D0 (en)
WO (1) WO1992014118A1 (en)

Families Citing this family (69)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7983817B2 (en) * 1995-06-07 2011-07-19 Automotive Technologies Internatinoal, Inc. Method and arrangement for obtaining information about vehicle occupants
KR100417567B1 (en) * 1995-06-07 2004-02-05 제이콥 엔 올스테드터 A camera for generating image of a scene in a three dimensional imaging system
AU746605B2 (en) * 1995-06-07 2002-05-02 Jacob N. Wohlstadter Three-dimensional imaging system
US5867604A (en) * 1995-08-03 1999-02-02 Ben-Levy; Meir Imaging measurement system
US6044170A (en) * 1996-03-21 2000-03-28 Real-Time Geometry Corporation System and method for rapid shape digitizing and adaptive mesh generation
US5870220A (en) * 1996-07-12 1999-02-09 Real-Time Geometry Corporation Portable 3-D scanning system and method for rapid shape digitizing and adaptive mesh generation
IT1286838B1 (en) * 1996-09-25 1998-07-17 Consiglio Nazionale Ricerche METHOD FOR COLLECTING IMAGES IN CONFOCAL MICROSCOPY
US6172349B1 (en) * 1997-03-31 2001-01-09 Kla-Tencor Corporation Autofocusing apparatus and method for high resolution microscope system
AU737617B2 (en) * 1997-04-04 2001-08-23 Isis Innovation Limited Microscopy imaging apparatus and method
JP3585018B2 (en) * 1997-05-15 2004-11-04 横河電機株式会社 Confocal device
DE19720832C2 (en) * 1997-05-17 2003-02-27 Diehl Stiftung & Co Target detection device
US6094269A (en) * 1997-12-31 2000-07-25 Metroptic Technologies, Ltd. Apparatus and method for optically measuring an object surface contour
US6098031A (en) * 1998-03-05 2000-08-01 Gsi Lumonics, Inc. Versatile method and system for high speed, 3D imaging of microscopic targets
US6366357B1 (en) * 1998-03-05 2002-04-02 General Scanning, Inc. Method and system for high speed measuring of microscopic targets
AU3991799A (en) 1998-05-14 1999-11-29 Metacreations Corporation Structured-light, triangulation-based three-dimensional digitizer
IL125659A (en) * 1998-08-05 2002-09-12 Cadent Ltd Method and apparatus for imaging three-dimensional structure
US20140152823A1 (en) * 1998-11-30 2014-06-05 American Vehicular Sciences Llc Techniques to Obtain Information About Objects Around a Vehicle
NL1011080C2 (en) * 1999-01-20 2000-07-21 Kwestar B V Apparatus and method for sorting asparagus.
US6734962B2 (en) * 2000-10-13 2004-05-11 Chemimage Corporation Near infrared chemical imaging microscope
DE19944516B4 (en) * 1999-09-16 2006-08-17 Brainlab Ag Three-dimensional shape detection with camera images
GB9926014D0 (en) * 1999-11-04 2000-01-12 Burton David R Measurement of objects
US6505140B1 (en) * 2000-01-18 2003-01-07 Intelligent Automation, Inc. Computerized system and method for bullet ballistic analysis
US6785634B2 (en) * 2000-01-18 2004-08-31 Intelligent Automation, Inc. Computerized system and methods of ballistic analysis for gun identifiability and bullet-to-gun classifications
US6462814B1 (en) 2000-03-15 2002-10-08 Schlumberger Technologies, Inc. Beam delivery and imaging for optical probing of a device operating under electrical test
US7065242B2 (en) * 2000-03-28 2006-06-20 Viewpoint Corporation System and method of three-dimensional image capture and modeling
EP1297486A4 (en) 2000-06-15 2006-09-27 Automotive Systems Lab Occupant sensor
US6369879B1 (en) * 2000-10-24 2002-04-09 The Regents Of The University Of California Method and apparatus for determining the coordinates of an object
ATE493683T1 (en) * 2001-04-07 2011-01-15 Zeiss Carl Microimaging Gmbh METHOD AND ARRANGEMENT FOR DEPTH-RESOLVED OPTICAL DETECTION OF A SAMPLE
US7274446B2 (en) * 2001-04-07 2007-09-25 Carl Zeiss Jena Gmbh Method and arrangement for the deep resolved optical recording of a sample
US6968073B1 (en) 2001-04-24 2005-11-22 Automotive Systems Laboratory, Inc. Occupant detection system
JP2003098439A (en) * 2001-09-25 2003-04-03 Olympus Optical Co Ltd Microscope capable of changing over observation
US6597437B1 (en) * 2002-01-03 2003-07-22 Lockheed Martin Corporation Closed loop tracking and active imaging of an out-of-band laser through the use of a fluorescent conversion material
GB0200819D0 (en) * 2002-01-15 2002-03-06 Cole Polytechnique Federale De Microscopy imaging apparatus and method for generating an image
US6750974B2 (en) 2002-04-02 2004-06-15 Gsi Lumonics Corporation Method and system for 3D imaging of target regions
US7255558B2 (en) * 2002-06-18 2007-08-14 Cadent, Ltd. Dental imaging instrument having air stream auxiliary
US6898377B1 (en) * 2002-06-26 2005-05-24 Silicon Light Machines Corporation Method and apparatus for calibration of light-modulating array
US7218336B2 (en) * 2003-09-26 2007-05-15 Silicon Light Machines Corporation Methods and apparatus for driving illuminators in printing applications
US7406181B2 (en) * 2003-10-03 2008-07-29 Automotive Systems Laboratory, Inc. Occupant detection system
EP1607064B1 (en) 2004-06-17 2008-09-03 Cadent Ltd. Method and apparatus for colour imaging a three-dimensional structure
US7212949B2 (en) * 2004-08-31 2007-05-01 Intelligent Automation, Inc. Automated system and method for tool mark analysis
US7115848B1 (en) * 2004-09-29 2006-10-03 Qioptiq Imaging Solutions, Inc. Methods, systems and computer program products for calibration of microscopy imaging devices
US20060095172A1 (en) * 2004-10-28 2006-05-04 Abramovitch Daniel Y Optical navigation system for vehicles
US7573631B1 (en) 2005-02-22 2009-08-11 Silicon Light Machines Corporation Hybrid analog/digital spatial light modulator
US7477400B2 (en) * 2005-09-02 2009-01-13 Siimpel Corporation Range and speed finder
DE102007018048A1 (en) * 2007-04-13 2008-10-16 Michael Schwertner Method and arrangement for optical imaging with depth discrimination
US8184364B2 (en) * 2007-05-26 2012-05-22 Zeta Instruments, Inc. Illuminator for a 3-D optical microscope
US7729049B2 (en) * 2007-05-26 2010-06-01 Zeta Instruments, Inc. 3-d optical microscope
US20090002271A1 (en) * 2007-06-28 2009-01-01 Boundary Net, Incorporated Composite display
US20090323341A1 (en) * 2007-06-28 2009-12-31 Boundary Net, Incorporated Convective cooling based lighting fixtures
CA2597891A1 (en) * 2007-08-20 2009-02-20 Marc Miousset Multi-beam optical probe and system for dimensional measurement
US8712116B2 (en) * 2007-10-17 2014-04-29 Ffei Limited Image generation based on a plurality of overlapped swathes
DE102008016767B4 (en) * 2008-04-02 2016-07-28 Sick Ag Optoelectronic sensor and method for detecting objects
US10568535B2 (en) * 2008-05-22 2020-02-25 The Trustees Of Dartmouth College Surgical navigation with stereovision and associated methods
US11690558B2 (en) * 2011-01-21 2023-07-04 The Trustees Of Dartmouth College Surgical navigation with stereovision and associated methods
JP5403458B2 (en) * 2008-07-14 2014-01-29 株式会社ブイ・テクノロジー Surface shape measuring method and surface shape measuring apparatus
US20100019997A1 (en) * 2008-07-23 2010-01-28 Boundary Net, Incorporated Calibrating pixel elements
US20100020107A1 (en) * 2008-07-23 2010-01-28 Boundary Net, Incorporated Calibrating pixel elements
US20100019993A1 (en) * 2008-07-23 2010-01-28 Boundary Net, Incorporated Calibrating pixel elements
US7978346B1 (en) * 2009-02-18 2011-07-12 University Of Central Florida Research Foundation, Inc. Methods and systems for realizing high resolution three-dimensional optical imaging
US9389408B2 (en) 2010-07-23 2016-07-12 Zeta Instruments, Inc. 3D microscope and methods of measuring patterned substrates
AT509884B1 (en) * 2010-07-27 2011-12-15 Alicona Imaging Gmbh Microscopy method and device
US8581962B2 (en) * 2010-08-10 2013-11-12 Larry Hugo Schroeder Techniques and apparatus for two camera, and two display media for producing 3-D imaging for television broadcast, motion picture, home movie and digital still pictures
US9561022B2 (en) 2012-02-27 2017-02-07 Covidien Lp Device and method for optical image correction in metrology systems
US10546441B2 (en) 2013-06-04 2020-01-28 Raymond Anthony Joao Control, monitoring, and/or security, apparatus and method for premises, vehicles, and/or articles
US9675430B2 (en) 2014-08-15 2017-06-13 Align Technology, Inc. Confocal imaging apparatus with curved focal surface
JP6027220B1 (en) * 2015-12-22 2016-11-16 Ckd株式会社 3D measuring device
JP7035831B2 (en) 2018-06-13 2022-03-15 オムロン株式会社 3D measuring device, controller, and control method in 3D measuring device
JP2020153798A (en) * 2019-03-19 2020-09-24 株式会社リコー Optical device, distance measuring optical unit, distance measuring device, and distance measuring system
US10809378B1 (en) * 2019-09-06 2020-10-20 Mitutoyo Corporation Triangulation sensing system and method with triangulation light extended focus range using variable focus lens

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4629324A (en) * 1983-12-29 1986-12-16 Robotic Vision Systems, Inc. Arrangement for measuring depth based on lens focusing
JP2928548B2 (en) * 1989-08-02 1999-08-03 株式会社日立製作所 Three-dimensional shape detection method and device

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4640620A (en) * 1983-12-29 1987-02-03 Robotic Vision Systems, Inc. Arrangement for rapid depth measurement using lens focusing

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
APPLIED OPTICS. vol. 26, no. 12, 15 June 1987, NEW YORK US pages 2416 - 2420; T.R.CORLE E.A.: 'DISTANCE MEASUREMENTS BY DIFFERENTIAL CONFOCAL OPTICAL RANGING' *
APPLIED OPTICS. vol. 29, no. 10, 1 April 1990, NEW YORK US pages 1474 - 1476; J.DIRICKX E.A.: 'AUTOMATIC CALIBRATION METHOD FOR PHASE SHIFT SHADOW MOIRE INTERFEROMETRY' *
IBM TECHNICAL DISCLOSURE BULLETIN. vol. 16, no. 2, July 1973, NEW YORK US pages 433 - 444; J.R. MALIN: 'OPTICAL MICROMETER' *
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE vol. 11, no. 11, November 1989, NEW YORK US pages 1225 - 1228; MAKOTO MATSUKI E.A.: 'A REAL-TIME SECTIONAL IMAGE MEASURING SYSTEM USING TIME SEQUENTIALLY CODED GRATING METHOD' *
OPTICAL ENGINEERING. vol. 29, no. 12, December 1990, BELLINGHAM US pages 1439 - 1444; JIAN LI E.A.: 'IMPROVED FOURIER TRANSFORM PROFILOMETRY FOR THE AUTOMATIC MEASUREMENT OF THREE-DIMENSIONAL OBJECT SHAPES' *
TECHNISCHE RUNDSCHAU. vol. 79, no. 41, 9 October 1987, BERN CH pages 94 - 98; E.SENN: 'DREIDIMENSIONALE MULTIPUNKTMESSUNG MIT STRUKTURIERTEM LICHT' *

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1997002466A1 (en) * 1995-06-30 1997-01-23 Siemens Aktiengesellschaft Optical distance sensor
US5991040A (en) * 1995-06-30 1999-11-23 Siemens Aktiengesellschaft Optical distance sensor
US6731383B2 (en) 2000-09-12 2004-05-04 August Technology Corp. Confocal 3D inspection system and process
US6870609B2 (en) 2001-02-09 2005-03-22 August Technology Corp. Confocal 3D inspection system and process
US6773935B2 (en) 2001-07-16 2004-08-10 August Technology Corp. Confocal 3D inspection system and process
US6882415B1 (en) 2001-07-16 2005-04-19 August Technology Corp. Confocal 3D inspection system and process
US6970287B1 (en) 2001-07-16 2005-11-29 August Technology Corp. Confocal 3D inspection system and process
WO2004068400A2 (en) * 2003-01-25 2004-08-12 Spiral Scratch Limited Methods and apparatus for making images including depth information
WO2004068400A3 (en) * 2003-01-25 2004-12-09 Spiral Scratch Ltd Methods and apparatus for making images including depth information
US11622102B2 (en) 2009-06-17 2023-04-04 3Shape A/S Intraoral scanning apparatus
US11539937B2 (en) 2009-06-17 2022-12-27 3Shape A/S Intraoral scanning apparatus
US11671582B2 (en) 2009-06-17 2023-06-06 3Shape A/S Intraoral scanning apparatus
US11831815B2 (en) 2009-06-17 2023-11-28 3Shape A/S Intraoral scanning apparatus
US9134126B2 (en) 2010-06-17 2015-09-15 Dolby International Ab Image processing device, and image processing method
DE102011114500A1 (en) 2011-09-29 2013-04-04 Ludwig-Maximilians-Universität microscope device
DE102011114500B4 (en) 2011-09-29 2022-05-05 Fei Company microscope device
WO2013049646A1 (en) 2011-09-29 2013-04-04 Fei Company Microscope device
US9350921B2 (en) 2013-06-06 2016-05-24 Mitutoyo Corporation Structured illumination projection with enhanced exposure control
US11701208B2 (en) 2014-02-07 2023-07-18 3Shape A/S Detecting tooth shade
US11707347B2 (en) 2014-02-07 2023-07-25 3Shape A/S Detecting tooth shade
US11723759B2 (en) 2014-02-07 2023-08-15 3Shape A/S Detecting tooth shade

Also Published As

Publication number Publication date
US5381236A (en) 1995-01-10
GB9102903D0 (en) 1991-03-27
AU1195892A (en) 1992-09-07
JP2973332B2 (en) 1999-11-08
EP0571431B1 (en) 1995-12-27
JPH06505096A (en) 1994-06-09
DE69207176T2 (en) 1996-07-04
DE69207176D1 (en) 1996-02-08
EP0571431A1 (en) 1993-12-01

Similar Documents

Publication Publication Date Title
EP0571431B1 (en) An optical sensor
US4645347A (en) Three dimensional imaging device
US5305092A (en) Apparatus for obtaining three-dimensional volume data of an object
US6611344B1 (en) Apparatus and method to measure three dimensional data
JP3481631B2 (en) Apparatus and method for determining a three-dimensional shape of an object using relative blur in an image due to active illumination and defocus
EP0380904A1 (en) Solid state microscope
US6909509B2 (en) Optical surface profiling systems
EP0244781A2 (en) Method and apparatus of using a two beam interference microscope for inspection of integrated circuits and the like
US20030072011A1 (en) Method and apparatus for combining views in three-dimensional surface profiling
CN112469361B (en) Apparatus, method and system for generating dynamic projection patterns in confocal cameras
WO2002082009A1 (en) Method and apparatus for measuring the three-dimensional surface shape of an object using color informations of light reflected by the object
US7369309B2 (en) Confocal microscope
CA2089079C (en) Machine vision surface characterization system
CA2360936A1 (en) Automatic on-the-fly focusing for continuous image acquisition in high-resolution microscopy
US20120120232A1 (en) Shape measuring device, observation device, and image processing method
US6765606B1 (en) Three dimension imaging by dual wavelength triangulation
CA2334225C (en) Method and device for opto-electrical acquisition of shapes by axial illumination
US6556307B1 (en) Method and apparatus for inputting three-dimensional data
CN108253905B (en) Vertical color confocal scanning method and system
JPH11132748A (en) Multi-focal point concurrent detecting device, stereoscopic shape detecting device, external appearance inspecting device, and its method
WO1990009560A1 (en) Distance gauge
JP3321866B2 (en) Surface shape detecting apparatus and method
KR101523336B1 (en) apparatus for examining pattern image of semiconductor wafer
US10948284B1 (en) Optical profilometer with color outputs
EP0890822A2 (en) A triangulation method and system for color-coded optical profilometry

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AT AU BB BG BR CA CH DE DK ES FI GB HU JP KP KR LK LU MG MW NL NO PL RO RU SD SE US

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE BF BJ CF CG CH CI CM DE DK ES FR GA GB GN GR IT LU MC ML MR NL SE SN TD TG

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)

Free format text: OAPI PATENT (BF,BJ,CF,CG,CL,CM,GA,GN,ML,MR,SN,TD,TG), AT,AU,BB,BG,BR,CA,DE,DK,FI,HU,KP,KR,LK,LU,MG,MW,NL,NO,PL,RO,RU,SD,SE

WWE Wipo information: entry into national phase

Ref document number: 08104084

Country of ref document: US

WWE Wipo information: entry into national phase

Ref document number: 1992904077

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 1992904077

Country of ref document: EP

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

NENP Non-entry into the national phase

Ref country code: CA

WWG Wipo information: grant in national office

Ref document number: 1992904077

Country of ref document: EP