WO2006109308A1 - Real-time imaging method and system using structured light - Google Patents

Real-time imaging method and system using structured light

Info

Publication number
WO2006109308A1
Authority
WO
WIPO (PCT)
Prior art keywords
light
region
interest
illumination
image data
Prior art date
Application number
PCT/IL2006/000461
Other languages
French (fr)
Inventor
Sharon Ehrlich
Original Assignee
Sharon Ehrlich
Priority date
Filing date
Publication date
Application filed by Sharon Ehrlich filed Critical Sharon Ehrlich
Publication of WO2006109308A1 publication Critical patent/WO2006109308A1/en

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01B: MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00: Measuring arrangements characterised by the use of optical techniques
    • G01B11/24: Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25: Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes, on the object
    • G01B11/2545: Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes, on the object, with one projection direction and several detection directions, e.g. stereo
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/50: Depth or shape recovery
    • G06T7/521: Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light

Definitions

  • This invention relates to an imaging method and system for three- dimensional modeling of spaces and objects.
  • The invention is particularly useful for 3D geometry documentation and 3D shape capturing.
  • One kind of existing technique in this field utilizes systems based on photographing. These include mechanized systems based on structured light (DOE), which are precise systems (of 30 µm precision), intended for short operational range (up to about 3 m), have a small measurement volume (2 m × 2 m × 2 m), are very costly, and require professional skills and expertise in operation and processing. These systems are intended for use in the automobile and aircraft industries, product control and reverse engineering.
  • The system produces an RGB image registered to a 3D cloud of points.
  • Other systems of this kind include conventional ground-photogrammetry-based systems. These systems are based on overlapping images and control points, require manual operation, and their data processing is slow. The result is inadequate, non-integrated, non-natural 3D data.
  • The systems operate over a range of 2–30 m, with an accuracy of ½ mm when specifically operated and of 15 mm in the regular operational mode; the operation is distance dependent, and the data analysis is strongly dependent on the illumination of the surroundings.
  • Another existing system utilizes 3D laser scanners and includes devices with tiny, short and middle ranges of operation.
  • The tiny-range systems have a 200 mm measurement distance and an accuracy of 5–2000 µm; they are intended for quality control of components and vehicles, have a very small measurement volume, are very slow in operation, immobile, very costly, and require a highly skilled operator.
  • The short-range systems have a 0.5–3 m measurement distance and 0.1–2.5 mm accuracy (distance dependent), and are intended for mapping of general bodies, archeology, reverse engineering of objects, and quality control.
  • These systems are mobile, lightweight, and operable by a skilled user.
  • The system produces a colored 3D cloud-of-points presentation.
  • The system and the technology utilized therein are costly; the obtained model is static and requires further manual treatment.
  • The middle-range systems have a 2–150 m measurement distance and a 3–5 mm accuracy.
  • These systems are intended for general mapping (building, industry, archeology, etc.); they are mobile, relatively cumbersome in operation, easy to operate but requiring a skilled operator.
  • The system produces a 3D cloud of points with true (primary) colors (RGB) and an intensity-map presentation.
  • The known systems in the field of animation include those utilizing sampling of static bodies by means of a scanner, and motion acquisition using appropriate software.
  • This technique enables model construction, and motion creation by means of model manipulation and mathematical procedures.
  • This technique is costly, slow, and requires too many manual operations for correlating the different procedures (or, alternatively, expensive tools for continuous calculation).
  • Another known system used in the field of animation is based on an arrangement of features at points significant for movement (features which can easily be analyzed by cameras or other sensors), and sampling thereof by sensors or cameras to obtain a continuity of motion in an image, i.e. an image without a model; the entire process is cumbersome and costly.
  • the invention provides a novel system and method enabling indoor/outdoor capture and reconstruction of a static or live dynamic model, including material composition of objects and spaces with real reference points.
  • The invention also relates to a method of using image data for constructing a continuous model, which may be independent or adaptive, as well as to generating simulation of motion and events based on the model. Additionally, the invention relates to a method of following a body's motion in space in real time.
  • The technique of the present invention provides for a 3D real-time dynamic shot sampler, allowing for tracking the performance and/or description of an existing state.
  • The invented technique is capable of constructing an integral natural 3D model utilizing a 3D cloud of points combined with true colors (RGB) in a desired coordinate system.
  • The system of the present invention can operate on the basis of an image and a scanned model, enabling automatic modeling.
  • The system provides real-time calculations allowing for dynamic model creation.
  • The system is inexpensive and allows for effective operation with no specific training.
  • The system is insensitive to illumination conditions, i.e. it is operable under reasonable daylight as well as at night (cloud-of-points presentation with no colors).
  • The system allows for redlining on the image and/or model for the purposes of updating.
  • The system can be configured for a local operational mode or for a remote mode (via a communication network).
  • The present invention provides sampling and imaging allowing construction of a live and active model in space, combining a cloud of scan points and an image.
  • The invented system utilizes an array of guided mirrors that produces, from a single light source, an array of point-like light sources, dynamically controlled by a control system.
  • The system of the present invention is aimed at replacing the technique of constructing a model from an image (i.e., photogrammetry-based techniques) and the technique of constructing a model from a cloud of points (COP), where each point presents the distance of a sampled point from a point in space, as formed by laser scanners.
  • Simple implementations of the invented system utilize an array of very small light emitters, controlled to produce the sample.
  • The invention provides for automatically and/or manually receiving a dynamic, active and live model of a space region or an object, sampled in two modes: as a dynamic cloud of points synchronized in space and time; and as a dynamic model combining data from the cloud of points and data from a regular photo (image), correlated in space and time.
  • The invented technique also provides for dynamically tracking object motion (tower, pole, people, etc.), and enables full-sphere data reading by automatic means.
  • The invention also provides for intelligent use of illumination of dynamically varying wavelengths and angular distribution; analysis of light absorption (by means of analyzing the structure of points of light reflection); estimation of materials and objects; and dealing with light-filtering ranges.
  • The invention can be used for constructing live models, both static and dynamic, of objects and spaces, with the possibility of creating an independent model, or as an integration and completion of existing models (for the purposes of updating, or of testing how the existing state matches the model).
  • The invention can also be used for tracking motion in space to create animation or to control the motion of a body (including a human body) for medical and other applications; and for creating models to reconstruct a fragment of events, including the creation of a dynamic model (e.g. reconstructing a car accident or a terror act).
  • The created model enables creating a characterizing signature of an object (including a live object) by means of shape and material composition, allowing its identification (including identification of a human face).
  • The invention can utilize both professional and amateur imaging systems for receiving an image and a 3D model in a cost-effective way.
  • The technique of the present invention is revolutionary in the scanning market, enabling a non-professional user to create and process a 3D model ("as made" or other) completely by himself. This is a major innovation, which will make the use of a data model as common as using a camera, e.g. to track a construction process.
  • An imaging method for use in 3D real-time dynamic sampling of a region of interest comprises: (i) illuminating the region of interest with structured light of predetermined illumination conditions, the structured light being in the form of a main pattern formed by a number of spatially separated light beams of at least one wavelength of light;
  • The light coming from the region of interest is indicative of the structured light returned (reflected) from the region of interest, as well as of the ambient light (surroundings).
  • The main pattern carries a reference pattern.
  • The latter is formed by a few (at least two) reference marks formed by a predetermined arrangement of some of the light beams involved in the main pattern.
  • The structured light is produced by emitting a light beam and splitting it into the predetermined number of spatially separated light beams. The splitting may be achieved by impinging the emitted light beam onto a surface formed by an array of light deflectors (mirrors), which are controllably activated and angularly displaced.
  • One or more masks can be placed in the optical path of the deflected light to further split the light into a larger number of beams. Such a mask grid may be selectively operated to be either in or out of the optical path (i.e. shiftable between its operative and inoperative positions).
  • Alternatively, the structured light is produced by controllably operating an array of separate light emitters.
  • The structured light is produced by carrying out an initial adaptation of the imaging to the required resolution, the illumination conditions, and the distance from the imaging system to the region of interest.
  • The initial adaptation comprises successively illuminating the region of interest and detecting the image data while varying at least one of the following parameters: the number of light beams (at least within one or more selected areas of the region of interest), the intensity of the detected illumination, the polarization of the illuminating light, the wavelength of the illuminating light, and the distance to the region of interest; and analyzing the successive image data until an optimal condition is detected, namely substantially no missing light points in the image and no non-matched points between the multiple images, within a certain predefined threshold.
  • The illumination intensity variation can be achieved by controllably varying the intensity of the emitted light.
  • For example, the intensity of the emitted light can be initially increased, and then the number of mirrors involved in the splitting and deflecting of said light is increased.
  • The illumination intensity variation can also be achieved by increasing the exposure time of the light detector(s); a control-loop sketch combining these options is given below.
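  • By way of a non-binding illustration, the adaptation loop described above can be sketched in code; all helper callables (project, acquire, evaluate) are hypothetical placeholders and do not appear in the patent.

```python
# A minimal control-loop sketch of the initial adaptation, assuming the
# ordering suggested in the text: raise emitter power first, then group
# beams into brighter sources, then lengthen the exposure.

def adapt_illumination(project, acquire, evaluate,
                       beams=256, power=0.2, exposure=1.0,
                       max_power=1.0, min_beams=16, threshold=5):
    """Successively vary illumination parameters until the acquired images
    show substantially no missing or non-matched light points."""
    while True:
        project(num_beams=beams, power=power)
        left, right = acquire(exposure=exposure)
        missing, unmatched = evaluate(left, right, expected=beams)
        if missing + unmatched <= threshold:
            return beams, power, exposure         # optimal condition reached
        if power < max_power:
            power = min(max_power, power * 1.5)   # 1) stronger emission
        elif beams > min_beams:
            beams //= 2                           # 2) bundle sources: brighter points
        else:
            exposure *= 1.5                       # 3) longer integration time
```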
  • The method utilizes data based on the nature of the region of interest to be modeled and on the environmental conditions.
  • The light points projected onto the region of interest by said light beams are captured simultaneously, during a regular image acquisition.
  • Complete synchronized measured data is obtained from the cloud-of-points data and the gray-level image data.
  • The model is thus created from the cloud of sampled points added to the regular image data.
  • The method may utilize illumination with light of different wavelengths. This allows for the creation of a model indicative of the material composition of the region of interest.
  • The region of interest is illuminated with a reference pattern of a certain initial resolution.
  • The image data is generated using two light detectors.
  • The data analysis consists of identifying and matching the relative locations of the reference points (reference pattern) and correlating the images using this reference pattern, and then identifying the area(s) of missing or non-matched points in the images, to which the selective adaptation is applied.
  • The adaptation procedure includes successively varying the illumination conditions by increasing the number of projected light beams at least within the selected area(s) of the region of interest (where missing or non-matching points have been identified); if missing or non-matched points are still identified when the maximal number of light beams involved in the illumination is reached, the additional coded pattern is projected onto the selected area(s). A matching sketch is given below.
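  • A minimal sketch of the matching step described above, assuming rectified cameras (an assumption of this illustration, not a requirement stated in the text); the point lists would come from detecting the projected light points in each camera image.

```python
def find_unmatched_points(pts_left, pts_right, row_tol=3.0):
    """Return the left-image light points with no counterpart on
    (approximately) the same image row in the right image.  Areas
    containing such points become candidates for denser illumination
    or for projection of an additional coded pattern."""
    rows_right = [y for (_, y) in pts_right]
    return [(x, y) for (x, y) in pts_left
            if not any(abs(yr - y) <= row_tol for yr in rows_right)]
```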
  • A method for use in 3D real-time dynamic sampling of a region of interest comprises illuminating the region of interest by structured light including a main pattern formed by a predetermined number of spatially separated light beams of at least one wavelength of light, and a reference pattern within said main pattern, the reference pattern including a few spaced-apart reference marks formed by a predetermined arrangement of some of said light beams forming the main pattern.
  • An imaging system for use in 3D real-time dynamic sampling of a region of interest comprises: a) an illumination unit configured and operable for producing and projecting onto the region of interest structured light of predetermined illumination conditions, the structured light being in the form of a main pattern formed by a number of spatially separated light beams of at least one wavelength of light; b) a light detection unit for detecting light from the region of interest and generating image data indicative thereof; c) a control unit configured and operable for analyzing the image data and selectively operating the illumination unit to adapt the main pattern until optimal image data, corresponding to an optimal projected pattern, is detected; the control unit operates the illumination unit to obtain at least one of the following: increasing the number of projected light beams within at least one selected area of said region of interest, and projecting an additional coded pattern onto at least one selected area of said region of interest, so as to enable processing of the optimal image data to calculate cloud-of-points related data and create a model of the region of interest.
  • Fig. 1A is a block diagram of a system of the present invention configured for 3D real-time dynamic modeling (sampling) of a region of interest;
  • Fig. 1B schematically illustrates an example of the configuration of the system of Fig. 1A using a MEMS mirror array;
  • Figs. 1C and 1D illustrate the operational principles of a telecentric lens system suitable to be used in the sampling system of the present invention;
  • Fig. 2 shows another example of the implementation of the system of Fig. 1A, as a portable 3D real-time dynamic shot-sampling system;
  • Fig. 3 shows yet another example of the configuration of the system of Fig. 1A, as a 360°, full-sphere, shot-sampling system;
  • Figs. 4 and 5 show yet further examples of the system of the present invention configured as a professional real-time, full-space, dynamic movement system for animation and/or medical use;
  • Fig. 6 illustrates the principle of the 3D real-time dynamic shot-sampler operation of the present invention using an LED and laser-diode array;
  • Figs. 7 and 8 illustrate the principle of depth calculation of a known point from two points of view (triangulation), suitable to be used in the system of the present invention;
  • Fig. 9 exemplifies the system operation to achieve variable light power (and resolution);
  • Figs. 10A to 10C show an example of a method of the present invention for 3D real-time dynamic modeling (sampling) of a region of interest;
  • Fig. 11 exemplifies an image of a structured light pattern carrying a reference pattern, according to the invention;
  • Fig. 12 exemplifies a cyclic sweep pattern (coded pattern) suitable to be used in the present invention.
  • Referring to Fig. 1A, there are illustrated, by way of a block diagram, the main elements of an imaging-sampling system 10 of the present invention, configured and operable as a 3D Real-Time Dynamic Shot-Sampler system.
  • System 10 includes an illumination unit 12 configured and operable for producing and projecting structured light onto a region of interest ROI; a light detection unit 14 for detecting light returned from the region of interest and generating measured data indicative thereof; and a control unit 16.
  • System 10 is operable for real time creation of a 3D model of the region of interest, with simple and inexpensive system configuration.
  • Illumination unit 12 includes either a light emitter associated with a light splitter/deflector that produces an array of spatially separated light beams from the light beam emitted by the light emitter, or a (2D) array of light emitters producing an array of light beams.
  • The illumination unit may also include one or more additional splitters (e.g. grids) in the optical path of said array of light beams, thus creating even more separate light beams.
  • The illumination unit can be constructed using simple and inexpensive technologies, for example including an LED- and/or laser-based light source with an array of controllably displaceable mirrors (such as a MEMS-based technique); or an array of LEDs and/or lasers; or a spatial light modulator (SLM).
  • The illumination unit is preferably configured and operable to form, within the structured light pattern (main pattern), a further local pattern indicative of a few (generally at least two) reference marks (points) of a predefined arrangement. To this end, some of said separated light beams of the main pattern are arranged to present the reference points' pattern.
  • Such a reference marks' pattern within the main pattern of points is exemplified in Fig. 11.
  • Light detection unit 14 includes one or preferably more than one camera detecting light from the region of interest and generating data indicative thereof.
  • The cameras are oriented with respect to each other and with respect to the region of interest so as to provide the desired coverage of the region of interest by the cameras' fields of view, with the relative orientation between the cameras and the illumination unit being known.
  • The cameras are preferably mounted for movement relative to the region of interest.
  • Control unit 16 is configured and operable to carry out, inter alia, image processing (pattern recognition) to identify the reference points and their correlation in the two cameras' images and in the successive images acquired by each of the cameras.
  • The control unit is further configured for carrying out an initial system adaptation to the required resolution, the illumination conditions, and the distance to the region of interest, as well as a further measurement adaptation to the object (region of interest).
  • The latter includes image data analysis to identify whether the illumination unit is to be operated to increase the number of light beams (points) involved in the illumination of at least some parts of the region of interest (increased resolution), and/or whether a sweeping procedure (illumination by a coded pattern) is to be applied. Examples of the system operation are described further below.
  • Fig. 1B illustrates a specific but not limiting example of the configuration of an imaging-sampling system 100 of the present invention.
  • System 100 includes an illumination (projector) unit, a light detection unit, and a control unit.
  • The illumination unit is configured to define an array of small (point-like) light sources controllably operable to project structured light, in the form of a pattern of spatially separated light beams (cloud of points), onto a region of interest (object/surface/space) to be sampled.
  • The illumination unit includes a light emitter unit 110 (including, for example, an LED and/or laser light source) associated with an optical unit 111 (one or more lenses), and a light splitting and deflecting unit 103 operating to form the structured light from the light generated by light emitter unit 110 and to appropriately project the structured light pattern onto the region of interest.
  • Splitter/deflector unit 103 includes N mirrors controllably guided for angular displacement and associated with an appropriate optical unit 108 (e.g. including a spherical mirror to increase the solid angle of propagation of the light beams).
  • The control unit is a computer system including various controlling and processing utilities, operating as mentioned above and as will be described more specifically further below, to control the light projection onto the region of interest and to process and analyze the detected light.
  • System 100 is configured such that light generated by light emitter unit 110 is projected through optical system 111 on a surface formed by N mirrors' unit 103 guided for angular displacement.
  • This may be an array of Digital Light Processing (DLP) micromirrors.
  • Optical unit 103 may, for example, include mirrors of a 13×13 µm size.
  • The array of N mirrors thus presents N point-like, spatially separated, controllably switched light sources producing structured light in the form of N spatially separated light beams (a switching-profile sketch is given below).
  • The latter are projected onto a targeted object (not shown) by optical system 108 (and possibly also by means of a grid 114, which might be needed in special cases to improve the focusing).
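  • The switching profile of such a mirror array can be pictured as a boolean mask. The sketch below uses illustrative dimensions and positions only; it forms a regular grid of single-mirror point sources plus a few brighter clusters serving as the reference marks discussed above.

```python
import numpy as np

def mirror_profile(n=768, step=16, ref_marks=((96, 96), (96, 672), (672, 384))):
    """Boolean switching profile for an n x n micromirror array
    (illustrative values): True mirrors deflect light toward the scene."""
    profile = np.zeros((n, n), dtype=bool)
    profile[::step, ::step] = True          # main pattern: regular grid of points
    for (r, c) in ref_marks:                # reference marks: brighter 2x2 blocks
        profile[r:r + 2, c:c + 2] = True
    return profile
```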
  • Two cameras (imagers) 101 and 102 operate to detect the projected points and their surroundings, when the light intensity at these points is relatively high as compared to that of the surroundings.
  • The control unit operates to calculate the distance to each of the projected points, based on the following known parameters: the distance between the two cameras and the relative angle between them; the location of the reference point at each camera; and the profile of the mirrors that transmit/block the light incident thereon.
  • Activation of one or more of the mirror profiles results in the creation of a cloud of points (COP), in which the location of each point in space corresponds to the sampled object and to the distance between the object and the measurement system.
  • The system may be configured to detect a gray level or color (being indicative of information about the object), providing additional data about the sampled object, directly and fully correlated in space (as being sampled from the same source).
  • The illumination intensity projected by the sampling system of the present invention can be varied; this can be implemented in three ways.
  • Such variation of the illumination intensity is needed in order to produce a light intensity providing a contrast by which the projected light points stand out in an image acquired by the camera.
  • This intensity variation is needed when the distance to the sampled object is large, or alternatively when the illumination from the surroundings (ambient light) is relatively strong. For example, in case the intensity of the emitted light is not uniform, such variation may occur only for the wavelength of light produced by the light source (e.g. in the NIR spectrum) and can thus be distinguished from the surroundings.
  • Another way is to simultaneously vary the switching of a number of mirrors (a local array), such that this local array presents a source of light formed by a number of separate light sources.
  • This process is controlled by a respective controller utility 107 of the control unit, and is needed, for example, when the distance to the sampled object is large or the illumination in the vicinity of the object (ambient light) is relatively strong, and the contrast between the illuminated point and its surroundings is low.
  • This process of illumination intensity variation reduces the resolution of a single sample (because more mirrors are involved in the construction of the single sample), while allowing a desired resolution to be obtained by a suitable assembly of a higher number of samples. This is exemplified in Fig. 9.
  • Each mirror can present an independent, separate light source.
  • Each one of the N mirrors can present a pixel, and a high-resolution measurement is available without using the low-speed sweep mode.
  • Higher light power requires operating a bundle of mirrors as a single light source, resulting in higher light power at a reduced resolution (without sweeping), as quantified below.
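  • Under a simple illustrative model (each mirror contributing the same power P0, an assumption of this note rather than a statement of the text), the trade-off of bundling mirrors into k×k groups reads:

```latex
% Illustrative power/resolution trade-off for k x k mirror bundles:
P_{\text{point}} = k^{2} P_{0},
\qquad
N_{\text{points}} = \frac{N_{\text{mirrors}}}{k^{2}}
```

  • For example, with k = 2 each projected point becomes four times brighter, while the number of points drops by the same factor of four.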
  • The third option for obtaining the illumination intensity variation consists of increasing the exposure time of the sensor in the camera (integration time), which enables more photons to be integrated within one exposure period. This process is performed by means of a controller utility 104 of the control unit associated with the cameras. A longer exposure time is possible, within the limitations of the camera sensor, for example in the case of sampling, in darkness, an object located at a larger distance, with a limited illumination intensity of the points (so as to obtain a high contrast of the points relative to their surroundings).
  • The corresponding decision making is carried out by a local processing unit 106 being a part of the control unit.
  • Such decision-making may be partially implemented automatically and partially based on user input.
  • The illumination unit is preferably configured to produce light of different (or variable) wavelengths for different uses.
  • To this end, appropriate spectral filters 112 and 113 can be used, associated with cameras 101 and 102, respectively.
  • IR illumination together with appropriate filtering allows for receiving samples also for the case where the object is significantly illuminated (e.g. for sampling a model for animation), with no disturbance of sampling and the illumination of the object.
  • Polarized illuminating light can be used to reduce the disturbances.
  • The resolution of sampling, i.e. the number of points illuminated in each sample, can be controlled accordingly.
  • The results of sampling are transmitted through a communication controller 105.
  • The optical system used in the sampling system of the present invention may utilize a telecentric lens system (in front of the camera), which enhances accuracy at short range and when testing relatively small objects.
  • The operational principles of the telecentric lens system are generally known and are briefly described below with reference to Figs. 1C and 1D.
  • Fig. 1C shows a ray diagram of a conventional telecentric lens system.
  • An object is at O, and an image at I.
  • A camera is placed at I.
  • A "stop" or aperture at S blocks all the rays except those in a narrow bundle. It serves not only as an aperture stop, controlling the amount of light that reaches the imager (camera), but is also strategically located at the focal point of the front and rear lens elements.
  • A telecentric lens "sees" a cylindrical tube of space of a diameter equal to that of the front lens element. It is limited to imaging objects whose lateral dimensions do not exceed the diameter of the lens.
  • The subject (object) is rendered on the camera isometrically, such that equal distances, whatever their orientation, appear as equal distances on the camera.
  • Parallel lines of the subject are parallel on the camera.
  • A slight readjustment of the lens elements can result in an "entocentric" lens, in which parallel lines converge, but in a sense opposite to that of the normal imaging perspective.
  • The entocentric picture renders more distant objects larger than nearer ones.
  • The system of Fig. 1C is telecentric in both the image and the object space: moving the object or the camera relative to the lens results in no change of image size.
  • Fig. 1D shows a simpler system that is telecentric only in object space, built with simple, non-achromatic lenses.
  • This system includes two positive lenses, L1 and L2, and a digital camera.
  • The camera lens is at S and is not explicitly shown. In fact, all that is really necessary at plane S is a small aperture stop to limit rays to narrow bundles. If a large-diameter lens is used, the need for the second lens is eliminated.
  • The extra lens L2 serves only to present a larger angle of view to the camera.
  • F labels the plane of the camera. If only one lens, L1, is used, it is located so that its focal point is at the diaphragm stop S. The camera is moved forward or back with respect to the positive lens(es) until telecentric conditions are obtained (the object size seen by the camera is independent of the subject position); then the object is placed at the position of best focus. The standard relation underlying this behavior is given below.
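  • As a standard optics aside (a textbook relation, not taken from the patent): for the bilateral telecentric arrangement of Fig. 1C, with two lens groups of focal lengths f1 and f2 and the stop at their common focal plane, the magnification is fixed by the focal lengths alone:

```latex
% Bilateral telecentric relay: magnification independent of object distance z.
m = -\frac{f_{2}}{f_{1}},
\qquad
\frac{\partial m}{\partial z} = 0
```

  • This is exactly why the object size seen by the camera does not change when the object or the camera is moved along the optical axis.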
  • System 200 (Fig. 2) includes a sampler-imager module 208, a power supply system, and a control unit.
  • The power supply system includes a charge/power supply unit 202, which powers the sampler module 208 and is in turn connected to an assembly of batteries 201.
  • This charge/power supply unit 202 is also associated with an external voltage supply 214, 215.
  • The control unit includes, inter alia, a control panel and keyboard arrangement 203 enabling system operation in an independent mode, and a display unit 204 allowing for managing and displaying data received from scanning and from external communication (wired or wireless, 209, 211, 212 and 213), through a communication controller 205.
  • The received data is stored and undergoes digital and/or graphical processing using additional input units, such as a touch-pad input device 206 and a digital-pen input device 207.
  • The latter can also be used, for example, for identifying and/or marking (redlining) the received data, or for marking points for consideration when the received material is compared to a model existing in a remote system, or, alternatively, for comparison with the system model, for the purposes of broadening the consideration of how the existing state compares with the model or the requirement.
  • The system enables ongoing control of the building process by sampling the current status ("as made"), comparing it to what has been planned, and then deciding whether to update the design, to change what has been built, or to mark (redline) a part of the model which should be evaluated (a problem in the design, for example).
  • An inspector arrives at a site (building site) to be sampled, and installs a sampling system at a reference point predefined at the beginning of the building process, or alternatively marks the known reference points in the site (e.g. in a room).
  • The system can then be translated to acquire a number of samples that cover a certain space (e.g. each sample being acquired within about 50 milliseconds).
  • If an automatic system is used (as will be described below with reference to Fig. 3), an automatic management mode can be provided.
  • Several methods can be used for estimating whether the sampling provides for obtaining a model of the existing state ("as made"): (a) the model is stored in the system memory and the samples are presented thereon; (b) the model is located at a remote system (e.g. a remote server or a local server of the building site), the sampling data is sent there, and the processing is carried out at the remote system; (c) the data is collected in the local system as a cloud of points, for the purposes of documentation and future use, and no processing is carried out at the stage of data collection; alternatively, a model is created and verification is carried out based on visual inspection.
  • Sampler module 208 of system 200 preferably includes a connection port for connecting to an auxiliary control channel 210 configured for bi-directional control, thereby allowing remote control of the system.
  • Such control may, for example, include activation of an automatic scanning mode in the system. This is exemplified in Fig. 3.
  • Fig. 3 shows an example of a full sphere (360°), shot-sampling system 300 of the present invention associated with a target 301 to be sampled.
  • System 300 includes a sampler module 302; and a motion system formed by an X-Y axis rotating plate 304 and preferably also a pitch, roll and azimuth mechanism 303.
  • The motion system is configured and operable to allow movement of sampler module 302 in the full space (360°).
  • Motion system 304 (or 303-304) is operated by a control unit (card or board) 305, which synchronizes the motion system operation with that of sampler 302 through the above-described control channel (210 in Fig. 2).
  • The entire system 300 is placed on a location stabilizer unit 306, enabling a relatively smooth movement and allowing for constructing a space of cloud of points with minimal adjustment procedures (stitching). Although this element is significant, since the calculation of the cloud of points is carried out in real time, any deviation can be corrected with high reliability.
  • Such a system 300 can be used for various applications. These include for example automatic documentation of the existing state for each of the building stages, with no need for a skilled user.
  • The system requires a single reference point that is to be defined at the beginning of the building process, or alternatively at the beginning of the sampling procedure, under conditions to be described (or marked on the model). This enables simple, immediate and reliable modeling that allows tracking progress in the construction and evaluating it against the model (or transferring it to a remote station).
  • Another possible application is the analysis of events. In this case, the system is installed within the event region and operates to provide documentation of said region as a live and active model (completed by the gray-level and RGB data). To obtain data regarding the material composition, illumination with different wavelengths may be used (as described above).
  • The created model is an active model and allows for producing simulations of the event, based on the data detected during the sampling procedure.
  • A mechanical system can be produced that moves automatically around the object (which is static) and creates the full model.
  • Such a system is suitable for use in documentation and creation of models of objects of a limited size (sculptures, people).
  • Reference is now made to Fig. 4, showing an example of a system 400 of the present invention configured as a professional, real-time, full-space, shot-sampling-tracking system, utilizing dynamic movement for animation and/or medical use.
  • System 400 enables construction of a space region by static means, with no movement of a sampler.
  • This system is suitable for operating in a closed space in which an object to be sampled moves.
  • The system may be used in a variety of applications, the common feature being the ability of the system to sample, image, track the movement, match the sample to a model and warn about deviations, or guide the process in accordance with the subject's performance, all in real time (as per the performance and accuracy requirements).
  • These applications include, for example, sampling of a sportsman's motion, where the motion is sampled as a model and a picture for the purposes of its analysis, explanation and guidance.
  • Another possible application is sampling a patient suffering from limb or other damage that affects mobility (stroke, Parkinson's disease, etc.).
  • Yet other possible applications include sampling and guiding of professional dancers; sampling motion of mechanical systems and/or objects; creation of professional animation by means of sampling a model of motion and its effect on a model of another character, synchronized with sound.
  • These systems are characterized by their ability for real-time sampling and displaying, with no marks or sensors on the imaged subject.
  • System 400 includes multiple sampler modules arranged to cover (by their fields of view) a certain spherically-shaped space region.
  • Four such sampler modules 401, 402, 403 and 404 are used in the present example, accommodated at the corners of the space region to cover a spherical shape of about 140 degrees; an additional sampler 405 is located at the top of the space region to complete the space coverage.
  • Sampled (measured) data is transferred from these samplers to a controller 406 configured for synchronizing these data in the time plane (a grouping sketch is given below) and transmitting the synchronized data to a stitching utility 407 that matches the time-synchronized sampled data in space.
  • The so-processed data (time- and space-matched data) is transmitted to a model constructing utility 408.
  • The model-related data is then transmitted to a connection utility 409 that communicates with all the other relevant systems (computers, video and audio systems, sensors, illuminators, etc.).
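  • By way of illustration only, the time-plane synchronization performed by controller 406 can be pictured as grouping near-simultaneous frames; the data layout and tolerance below are assumptions of this sketch.

```python
import bisect

def synchronize(streams, tol=0.005):
    """Group frames from several samplers into time-aligned sets.
    Each stream is a non-empty, time-sorted list of (timestamp, frame);
    tol is the allowed spread in seconds."""
    synced = []
    for t0, f0 in streams[0]:
        group = [f0]
        for s in streams[1:]:
            ts = [t for t, _ in s]
            i = bisect.bisect_left(ts, t0)
            # nearest neighbour in time among the two bracketing candidates
            j = min(range(max(i - 1, 0), min(i + 1, len(ts))),
                    key=lambda k: abs(ts[k] - t0))
            if abs(ts[j] - t0) <= tol:
                group.append(s[j][1])
        if len(group) == len(streams):       # keep only complete sets
            synced.append((t0, group))
    return synced
```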
  • System 500 (Fig. 5) includes a single sampler 501 connectable to a personal computer 502 (controlled by user input from a keyboard 504 and a mouse and/or touch pad 505).
  • System 500 utilizes a main display 503.
  • The system may, for example, present the sampled user on the display as a different subject (for example, a girl is dancing, and an image of her admired subject appears on the display); alternatively, the system displays an image of a dancer, and the girl follows the motion of said image (while the quality of her tracking is presented in real time on the main display).
  • The system may also include an additional display 506 for displaying the same to an examiner or at a remote location.
  • Such a system can be used, for example, for creating a face model for biometric identification.
  • The system may additionally utilize different or varying wavelengths for obtaining additional data about the material composition (to avoid forgeries).
  • Reference is now made to Fig. 6, illustrating a system 700 configured for 3D real-time dynamic shot-sampler operation.
  • Here, the use of a light emitter associated with guided mirrors (e.g. MEMS technology) to produce the structured light is replaced by the direct use of an array of light emitters 703 (LEDs and/or laser diodes, preferably operating with varying wavelengths).
  • The light emitters are controllably operated by a controller board 707 (constituting a part of the control unit).
  • The system operation is similar to that of system 100 of Fig. 1B.
  • The elements of system 700 are similar to those of system 100, except that a single optical system 708 is used in system 700 instead of the two optical systems 108 and 111 of system 100.
  • The present invention can utilize any known suitable technique for the depth calculation, for example a triangulation-based technique.
  • The principles of triangulation are generally known.
  • The calculation of the depth of a known point from two points of view, in order to reconstruct a 3D cloud-of-points model (out of a large number of points), is briefly described below with reference to Figs. 7 and 8.
  • Fig. 7 presents a top view of a stereo system composed of two pinhole cameras.
  • The left and the right image planes are coplanar.
  • Ol and Or are the centers of projection.
  • The optical axes are parallel.
  • The fixation point, defined as the point of intersection of the optical axes, lies infinitely far from the cameras.
  • The way in which stereo determines the position in space of P and Q is triangulation, that is, by intersecting the rays defined by the centers of projection and the images of P and Q, namely pl, pr, ql, qr.
  • Triangulation depends crucially on the solution of the correspondence problem: if (pl, pr) and (ql, qr) are chosen as pairs of corresponding image points, intersecting the rays Ol-pl, Or-pr and Ol-ql, Or-qr leads to interpreting the image points as projections of P and Q; but if (pl, qr) and (ql, pr) are the selected pairs of corresponding points, triangulation returns P' and Q'. It should be noted that both interpretations, although dramatically different, stand on equal footing. A minimal depth-from-disparity sketch is given below.
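  • For the rectified geometry of Fig. 7 (parallel optical axes, baseline b, focal length f), the standard depth-from-disparity relations can be sketched as follows; the function name and units are illustrative only.

```python
def triangulate(xl, xr, y, f, b):
    """Depth of one projected light point from a rectified stereo pair:
    parallel optical axes, baseline b, focal length f (consistent units);
    xl, xr, y are image coordinates measured from each principal point.
    Standard textbook relations, consistent with Figs. 7 and 8."""
    d = xl - xr                        # disparity
    if d == 0:
        raise ValueError("zero disparity: point at infinity")
    Z = f * b / d                      # depth along the optical axis
    return (xl * Z / f, y * Z / f, Z)  # (X, Y, Z) in the left-camera frame

# e.g. f = 1000 px, b = 0.12 m, disparity = 40 px  ->  Z = 3.0 m
```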
  • Controlling the light sources and the signal-analysis processes enables system adaptation in a number of planes.
  • Regarding the system resolution, the following should be noted. Determination of the differences in the received gray levels between two sampled light points allows the system, by observing the sampling window, to adapt the required number of active light points to the nature of the sampled object, such that the quality of the model is optimal. For example, for large and planar surfaces a smaller number of light points per surface unit is used, while for corners and columns a larger number of points is used.
  • The amount of light produced by a light point is derived from two main variables: the intensity of light impinging from the light emitter onto the mirrors (or, alternatively, the intensity of an array of light emitters, when no mirrors are used); and the number of mirrors operating at a time point (as exemplified in Fig. 9).
  • The amount of light produced by the system is controlled in accordance with the visual contrast of the light points (where the determination of the arrangement of the array of these points is repeated for each sample).
  • As the contrast increases, the intensity of light produced by the light source can be reduced.
  • Otherwise, the light intensity of the light emitter is first increased, and at the next stage the number of mirrors forming a single unit is increased.
  • One of the advantageous features of the system of the present invention is its ability to obtain information that optimizes the system operation. Contrary to a scanner, the sampled points are not captured in a sequence synchronized in space, but rather are captured simultaneously, during a regular picture acquisition, from the same sensor; this allows for completing the data of the cloud of points and a gray-level image, perfectly synchronized, with no computational process.
  • A model is created from the cloud of sampled points and is added to the data received from the regular image (a fusion sketch is given below). This process can be implemented with a number of wavelengths providing additional data, the model thus becoming a "clever model" that includes information also about the envelope between the sampled points. This data is indicative of the surface of the envelope, the material composition, the absorption of wavelengths, etc.
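  • A minimal sketch of this fusion, assuming the depth points and the image come from the same exposure (so no registration step is needed); the data layout is an assumption of this illustration.

```python
def fuse_cop_with_image(points_px, depths, image):
    """Attach the gray level sampled by the same sensor to each
    cloud-of-points entry: points_px is a list of (u, v) pixel
    coordinates, depths the corresponding distances, image a 2D
    gray-level array indexed as image[row][column]."""
    return [{"pixel": (u, v), "depth": z, "gray": int(image[v][u])}
            for (u, v), z in zip(points_px, depths)]
```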
  • For example, when analyzing the region of an event, a model with a layer of tire traces is created, where said layer is identified within the model as being associated with a change in the illumination.
  • The above-described examples relate to a sampling system adaptive to the level of contrast observed by the camera, enabling the system to be adaptive to distance and to variable levels of illumination.
  • Referring to Figs. 10A-10C, there is shown an example of the operational method according to the invention, demonstrating the ability of the invented technique to adapt to the nature of a region of interest (object and/or surface and/or open space) to be sampled.
  • The adaptive procedures exemplified in Figs. 10A-10C are based on a dynamic process of analysis of the received data with respect to reference data projected in a projection mode (adaptive test pattern).
  • The important feature of this method is that the data analysis process does not perform correlation between the acquired and displayed images (which contain a random or pseudo-random test pattern), but rather examines the acquired images with respect to a predefined and updated reference system.
  • The projected image contains known (predefined) simple elements (reference points), such that identification of at least parts thereof enables decision making with a high level of certainty.
  • The identification of the parts of a reference allows inspection thereof with a high resolution that allows for identifying corners or other surface edges.
  • Regular and/or telecentric optics allows for constructing data enabling decision making based on data indicative of the image size: at short distances and for small objects with no dependence on distance, and otherwise in accordance with the change of image size with distance.
  • A combination of data from the two optical systems allows the sampling system to decide to separate between, for example, a structural change on a wall and a window or blind existing in its vicinity.
  • The accuracy of such a system exceeds that of regular optics alone, which offsets the advantages of a laser scanner at those distances.
  • In this way, the system analyzes and describes non-correlated regions in the two images and the test pattern (reference image), and these regions then undergo a process of pattern projection with a higher resolution (a larger number of projected light points) that reduces the non-correlation to a minimum. At that stage, the system has data of a reasonable resolution.
  • The system then activates a scanning process (sweeping), in which coded patterns are projected, enabling tracking of a (pseudo) change along the time axis; in this way the nature of the treated region is well predicted.
  • An example of a cyclic sweeping pattern is shown in Fig. 12.
  • In the example of Figs. 10A-10C: as shown in Fig. 10A, when the process is initially activated (step 1301), a medium-resolution reference pattern is displayed by a projector (step 1302); the two cameras capture images of this pattern (step 1303); the pictures may be enhanced (step 1304); and the projected pattern is matched in the two pictures (step 1305).
  • The latter procedure may utilize a fusion of two or more protocols, using the reference image as an anchor for the calculation procedure.
  • The system then initiates a second measurement stage (step 1308).
  • Here, a high-resolution light pattern is projected by the projection module, this light pattern being adapted to the non-matched areas (step 1309).
  • The above-described first stage is then repeated for this resolution image.
  • The number of steps can vary from one up to any required number, depending on the structure of the measured object and/or surface and/or space, as well as on the required resolution of the cloud of points.
  • If the sweeping process (projection of a coded pattern) is not required, a calculation process is applied (step 1310), consisting of the calculation of the cloud of points using triangulation or another suitable technique, presenting the final measurement stage (step 1311).
  • If the sweeping process is to be initiated (step 1312), an appropriate sweeping method (cycle, point, stripe, etc.) is selected (step 1313), a region of interest (one to which sweeping is to be applied) is selected (step 1314), and the sweep resolution is calculated (step 1315), all these selections being defined according to the data obtained in the previous steps.
  • Then, another pattern (the Nth pattern) is generated and projected (step 1316), the two cameras acquire images of the region of interest (step 1317), which images may be enhanced (step 1318), and the image data is saved as a data array (step 1319).
  • Steps 1316-1319 are repeated while increasing the resolution (step 1321) until the maximal possible resolution (the maximal number of light beams involved in the pattern) is reached (step 1320); then the cloud-of-points related data is calculated using the phase change in the pictures across the time domain (step 1322), presenting the final process step (step 1323). The overall flow is sketched below.
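  • The flow of Figs. 10A-10C can be summarized in pseudocode; every helper below (reference_pattern, match_pattern, sweep, ...) is a hypothetical placeholder standing for the corresponding step, not an API disclosed by the patent.

```python
def measurement_cycle(projector, cameras, max_resolution):
    # step numbers in the comments refer to Figs. 10A-10C
    pattern = projector.reference_pattern("medium")          # 1301-1302
    imgs = [enhance(c.capture()) for c in cameras]           # 1303-1304
    matched, holes = match_pattern(imgs, pattern)            # 1305
    while holes and pattern.resolution < max_resolution:     # 1308
        pattern = projector.dense_pattern(areas=holes)       # 1309
        imgs = [enhance(c.capture()) for c in cameras]
        matched, holes = match_pattern(imgs, pattern)
    if holes:                                                # 1312: sweep needed
        method = select_sweep_method(holes)                  # 1313-1315
        frames = sweep(projector, cameras, method, holes)    # 1316-1319
        cop = cloud_from_phase(frames)                       # 1322
    else:
        cop = triangulate_points(matched)                    # 1310
    return cop                                               # 1311 / 1323
```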
  • Fig. 11 exemplifies the recognizable matching points (reference points) in the main pattern of structured light.
  • Fig. 12 exemplifies a cyclic sweep pattern, which in the present example varies only along the Y axis, in the time domain (a generator sketch is given below).
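  • One plausible reading of such a pattern, sketched below, is a stripe pattern constant along X and cyclically shifted along Y from frame to frame; the exact Fig. 12 pattern is not reproduced here, and all dimensions are illustrative.

```python
import numpy as np

def cyclic_sweep_frames(height, width, period=16, n_frames=8):
    """Temporal sequence of binary stripe patterns varying only along
    the Y axis, cyclically shifted between frames."""
    y = np.arange(height).reshape(-1, 1)
    frames = []
    for k in range(n_frames):
        phase = (y + k * period // n_frames) % period
        stripe = (phase < period // 2).astype(np.uint8) * 255
        frames.append(np.repeat(stripe, width, axis=1))  # extend along X
    return frames
```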
  • Thus, the present invention provides a 3D real-time dynamic shot-sampler system which may be used in a variety of applications, e.g. for tracking the performance and/or description of the existing state, as well as for constructing an integral natural 3D model utilizing a 3D cloud of points combined with RGB in a desired coordinate system.
  • The system of the present invention can operate on the basis of an image and a scanned model, enabling automatic reconstruction of the model.
  • The system provides real-time calculations allowing for dynamic model creation.
  • The system is inexpensive and allows for effective operation with no specific preparation.
  • The system is insensitive to illumination conditions (it is operable under reasonable daylight, as well as at night).
  • The system allows for redlining on the image and/or model for the purposes of updating.
  • The system can be configured for a local operational mode or for a remote mode (via a communication network).
  • The technique of the present invention provides sampling and imaging allowing construction of a live and active model in space, by combining a cloud of scan points and an image.
  • The invented technique in some embodiments utilizes an array of guided mirrors that produces, from a single light source, an array of point-like light sources, dynamically controlled by a control system.
  • Simple implementations of the invented system utilize an array of very small light sources, controlled to produce the sample.
  • The invention provides for automatically and/or manually receiving a dynamic, active and live model of a space region or an object, sampled in two modes: as a dynamic cloud of points synchronized in space and time; and as a dynamic model combining data from the cloud of points and data from a regular photo (image), correlated in space and time.
  • The invented technique also provides for dynamically tracking object motion (tower, pole, people, etc.), and enables full-sphere data reading by automatic means.
  • The invention also provides for intelligent use of illumination of dynamically varying wavelengths and angular distribution; analysis of light absorption (by means of analyzing the structure of points of light reflection); estimation of material compositions and objects; and dealing with light-filtering ranges.
  • The invention can be used for constructing live models, both static and dynamic, of objects and spaces, with the possibility of creating an independent model, or as an integration and completion of an existing model (for the purposes of updating, or of testing how the existing state matches the model).
  • The invention can also be used for tracking motion in space to create animation or to control the motion of a body (including a human body) for medical and other applications; and for creating models to reconstruct a fragment of events, including the creation of a dynamic model (e.g. reconstructing a car accident or a terror act).
  • The created model enables creating a characterizing signature of an object (including a live object) by means of shape and material composition, allowing its identification (including identification of a human face).
  • The invention can utilize professional and amateur imaging systems for receiving an image and a 3D model in a cost-effective way.

Abstract

An imaging method and system are presented for use in 3D real time dynamic sampling of a region of interest. The region of interest is illuminated with structured light of predetermined illumination conditions, the structured light being in the form of a main pattern formed by a number of spatially separated light beams of at least one wavelength of light. Light from the region of interest is detected and image data indicative thereof is generated. The image data is analyzed, and the main pattern is selectively adapted until detecting optimal image data corresponding to an optimal projected pattern. The adapting procedure comprises carrying out at least one of the following: increasing a number of the projected light beams at least within at least one selected area of said region of interest, and projecting an additional coded pattern onto at least one selected area of said region of interest. The optimal image data can then be processed to calculate cloud of points related data, thereby enabling creation of a model of the region of interest.

Description

REAL-TIME IMAGING METHOD AND SYSTEM USING STRUCTURED LIGHT
FIELD OF THE INVENTION
This invention relates to an imaging method and system for three- dimensional modeling of spaces and objects. The invention is particularly useful for 3D geometry documentation and 3D shape capturing.
BACKGROUND OF THE INVENTION
The use of 3D documentation and modeling of spaces and objects has become a vital part of the processes of design, data analysis and presentation, leading to the development of various tools for programming, analysis, correlation, simulation and presentation. Data acquisition of the existing state (the so-called "as made", or "as built" work, which is the current status of a building, according to what has been built and which is different from what has been planned) is an important component in geometric documentation, programming, designing, controlling, managing, simulating and describing projects in various fields.
Considering for example the field of development and modeling, existing techniques do not provide for high-quality, automatic or pseudo-automatic, continuously controllable tracking of a performance and/or description of an existing state in a closed environment, being a space or surface region (2–30 m). Existing systems cannot be operated by unskilled users. Moreover, the existing systems do not allow for mapping dynamic space effects (like motions of structures). This leads to creation of samples and/or construction of models measured and/or scanned manually, in combination with calculation and imaging. The process is thus both cumbersome and limited, and does not provide complete data.
One kind of existing technique in this field utilizes systems based on photographing. These include mechanized systems based on structured light (DOE), which are precise systems (of 30 µm precision), intended for short operational range (up to about 3 m), have a small measurement volume (2 m × 2 m × 2 m), are very costly, and require professional skills and expertise in operation and processing. These systems are intended for use in the automobile and aircraft industries, product control and reverse engineering. The system produces an RGB image registered to a 3D cloud of points. Other systems of this kind include conventional ground-photogrammetry-based systems. These systems are based on overlapping images and control points, require manual operation, and their data processing is slow. The result is inadequate, non-integrated, non-natural 3D data. The systems operate over a range of 2–30 m, with an accuracy of ½ mm when specifically operated and of 15 mm in the regular operational mode; the operation is distance dependent, and the data analysis is strongly dependent on the illumination of the surroundings.
Another existing system utilizes 3D laser scanners and includes devices with tiny, short and middle ranges of operation. The tiny-range systems have a 200 mm measurement distance and an accuracy of 5–2000 µm; they are intended for use in quality control of components and vehicles, are characterized by a very small measurement volume, are very slow in operation, immobile, very costly, and require a highly skilled operator. The system produces a 3D cloud-of-points presentation. The short-range systems have a 0.5–3 m measurement distance and 0.1–2.5 mm accuracy (distance dependent), and are intended for mapping of general bodies, archeology, reverse engineering of objects and quality control. These systems are mobile, lightweight, and operable by a skilled user. The system produces a colored 3D cloud-of-points presentation. The system and the technology utilized therein are costly; the obtained model is static and requires further manual treatment. The middle-range systems have a 2–150 m measurement distance and a 3–5 mm accuracy. These systems are intended for general mapping (building, industry, archeology, etc.); the systems are mobile, relatively cumbersome in operation, easy to operate but requiring a skilled operator. The system produces a 3D cloud of points with true (primary) colors (RGB) and an intensity-map presentation.
The known systems in the field of animation include those utilizing sampling of static bodies by means of a scanner, and mobility acquisition using appropriate software. This technique enables model construction, and motion creation by means of model manipulation and mathematical procedures. This technique is costly, slow, and requires too many manual operations for correlating different procedures (or, alternatively, expensive tools for continuous calculation). Another known system used in the field of animation is based on an arrangement of features at significant points of movement (features which can be easily analyzed by cameras or other sensors), and sampling thereof by sensors or cameras for obtaining a continuity of motion in an image, i.e. an image with no model; the entire process is cumbersome and costly.
SUMMARY OF THE INVENTION
The invention provides a novel system and method enabling indoor/outdoor capture and reconstruction of a static or live dynamic model, including material composition of objects and spaces with real reference points. The invention also relates to a method of using image data for constructing a continuous model, which may be independent or adaptive, as well as generating simulation of motion and events based on the model. Additionally, the invention relates to a method of real time following a body's motion in space.
The technique of the present invention provides for a 3D real time dynamic shot sampler, allowing for tracking the performance and/or description of an existing state. The invented technique is capable of constructing an integral natural 3D model utilizing 3D cloud of points combined with true colors (RGB) in a desired coordinate system. Moreover, the system of the present invention can operate on the basis of an image and scanned model, enabling automatic modeling. The system provides real time calculations allowing for dynamic model creation. The system is inexpensive, and allows for effective operation with no specific training. The system is insensitive to illumination conditions, i.e. is operable under reasonable daylight, as well as at night (cloud of points presentation with no colors). The system allows for redlining on the image and/or model for the purposes of updating. The system can be configured for local operational mode or for remote mode (via a communication network).
The present invention provides sampling and imaging allowing construction of a live and active model in space, combining a cloud of scan points and an image. In some embodiments, the invented system utilizes an array of guided mirrors that produce from a single light source an array of point-like light sources, dynamically controlled by a control system. The system of the present invention is aimed at replacing the technique of constructing a model from an image (i.e., photogrammetry based techniques) and replacing a model constructed from a cloud of points (COP), where each point presents a distance of a sampled point from a point in space formed by laser scanners. The simple implementations of the invented system utilize an array of very small light emitters, controlled to produce the sample. The invention provides for automatically and/or manually receiving a dynamic active and live model of a space region or an object sampled in two modes: as a dynamic cloud of points synchronized in space and time; and as a dynamic model combining data from the cloud of points and data from a regular photo (image) correlated in space and time.
The invented technique also provides for dynamically tracking object motion (tower, pole, people, etc.), and enables full-sphere data reading by automatic means. The invention also provides for intelligent use of illumination of dynamically varying wavelengths and angular distribution; analysis of light absorption (by means of analyzing a structure of points of light reflection); estimation of the materials and objects; and dealing with light filtering ranges.
The invention can be used for constructing live models, both static and dynamic, of objects and spaces, with a possibility of creating an independent model or as an integration and completion of existing models (for the purposes of updating or matching test of the existing state to the model). The invention can also be used for tracking the motion in space to create animation or control the motion of a body (including a human body) for medical and other applications; creation of models to reconstruct a fragment of events, including the creation of a dynamic model (e.g. reconstruct a car accident or terror act). Also, the created model enables creation of a characterizing signature of an object (including a live object) by means of a shape and material composition allowing its identification (including a human face). The invention can utilize both professional and amateur imaging systems for receiving an image and a 3D model in a cost-effective way.
It is important to note that the technique of the present invention is revolutionary in the scanning market, by enabling a non-professional user to create and process a 3D model ("as made" or other), completely by himself. This is a major innovation, which will make use of a data model as common as using a camera to track the construction process.
There is provided according to one broad aspect of the present invention an imaging method for use in 3D real time dynamic sampling of a region of interest, the method comprising: (i) illuminating the region of interest with structured light of predetermined illumination conditions, the structured light being in the form of a main pattern formed by a number of spatially separated light beams of at least one wavelength of light;
(ii) detecting light from the region of interest and generating image data indicative thereof; (iii) analyzing the image data and selectively adapting the main pattern until detecting optimal image data corresponding to an optimal projected pattern, said adapting comprising carrying out at least one of the following: increasing a number of the projected light beams within at least one selected area of said region of interest, and projecting an additional coded pattern onto at least one selected area of said region of interest, thereby enabling processing of the optimal image data to calculate cloud of points related data, and using this data to create a model of the region of interest.
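By way of illustration only, the following minimal Python sketch mirrors the loop of steps (i)-(iii); the simulated detection model, the beam counts and the thresholds are hypothetical stand-ins for the projector/camera hardware, not part of the disclosed implementation.

```python
# Minimal runnable sketch of the adaptive loop in steps (i)-(iii).
# Everything here (the detection model, thresholds, names) is a
# hypothetical stand-in for the hardware described in the text.
import random

def detect(n_beams):
    """Simulate step (ii): each projected beam is matched in the images
    with a probability that grows with beam density (toy model)."""
    p_match = min(0.99, 0.6 + n_beams / 4096)
    return sum(random.random() < p_match for _ in range(n_beams))

def adaptive_sample(max_beams=1024, missing_threshold=0.05):
    n_beams = 64                      # assumed initial main-pattern density
    while True:
        matched = detect(n_beams)     # steps (i) + (ii)
        missing = 1 - matched / n_beams
        print(f"{n_beams} beams, {missing:.1%} missing")
        if missing <= missing_threshold:
            return "compute cloud of points from optimal image data"
        if n_beams < max_beams:       # adaptation option 1: densify beams
            n_beams = min(2 * n_beams, max_beams)
        else:                         # adaptation option 2: coded pattern
            return "project additional coded pattern on problem areas"

random.seed(1)
print(adaptive_sample())
```

In a real system, the missing-point fraction would be derived from the image analysis of step (iii) rather than simulated.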
The light coming from the region of interest is indicative of the structured light returned (reflected) from the region of interest as well as the ambient light (surroundings).
Preferably, the main pattern carries a reference pattern. The latter is formed by a few (at least two) reference marks formed by a predetermined arrangement of some of the light beams involved in the main pattern. In some embodiments of the invention, the structured light is produced by emitting a light beam and splitting it into the predetermined number of spatially separated light beams. The splitting may be achieved by impinging the emitted light beam onto a surface formed by an array of light deflectors (mirrors), which are controllably activated and angularly displaced. Additionally, one or more masks can be placed in the optical path of the deflected light to further split the light into a larger number of beams. Such a mask grid may be selectively operated to be either in or out of the optical path (i.e. shiftable between its operative and inoperative positions).
In some other embodiments of the invention, the structured light is produced by controllably operating an array of separate light emitters. Similarly, one or more masks (e.g. shiftable between operative and inoperative positions) can be placed in the optical path of emitted light beams to further split this light into a larger number of beams.
In preferred embodiments of the invention, the structured light is produced by carrying out an initial adaptation of the imaging to required resolution, illumination conditions, and a distance from an imaging system to the region of interest. The initial adaptation comprises successively illuminating the region of interest and detecting the image data, with at least one varying parameter from the following: a number of the light beams at least within one or more selected areas of the region of interest, intensity of detected illumination, polarization of illuminating light, wavelength of illuminating light and the distance to the region of interest, and analyzing the successive image data upon detecting an optimal condition of substantially no missing light points in the image and non-matched points in the multiple images, within a certain predefined threshold. The illumination intensity variation can be achieved by controllably varying intensity of emitted light (e.g. simultaneously varying switching of a local array formed by a number of the separate light emitters such that said local array presents a source of light formed by the number of separate light sources), and/or by simultaneously varying switching of a local array formed by a number of deflectors (mirrors) such that said local array presents a source of light formed by the number of separate light sources. For example, the intensity of emitted light can be initially increased, and then a number of mirrors involved in the splitting and deflecting of said light is increased. Alternatively, or additionally, the illumination intensity variation can be achieved by increasing an exposure time of light detector(s). Preferably, the method utilizes data based on the nature of the region of interest to be modeled and the environmental conditions.
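A hedged sketch of such an initial adaptation might look as follows; the toy contrast model and the candidate parameter values are assumptions chosen purely to make the example self-contained, not values from the disclosure.

```python
# Minimal runnable sketch of the initial adaptation; missing_fraction()
# is a toy stand-in for real image analysis, and all candidate values
# are invented for the example.
from itertools import product

def missing_fraction(n_beams, intensity, exposure_ms):
    # Toy model: more light (intensity x exposure) per beam -> better
    # contrast -> fewer missing/non-matched points.
    contrast = intensity * exposure_ms / n_beams
    return max(0.0, 0.2 - 0.8 * contrast)

def initial_adaptation(threshold=0.01):
    # Sweep candidate settings; keep the first whose simulated image has
    # substantially no missing points, within the predefined threshold.
    for intensity, exposure_ms, n_beams in product((1, 2, 4), (10, 20), (256, 1024)):
        if missing_fraction(n_beams, intensity, exposure_ms) <= threshold:
            return {"intensity": intensity, "exposure_ms": exposure_ms,
                    "n_beams": n_beams}
    return None

print(initial_adaptation())  # -> {'intensity': 4, 'exposure_ms': 20, 'n_beams': 256}
```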
According to the method of the present invention, the light points projected onto the region of interest by said light beams, respectively, are captured simultaneously during a regular image acquisition. By this, complete synchronized measured data is obtained from the cloud of points data and gray level image data. The model is thus created from the cloud of sampled points added to the regular image data.
The method may utilize illumination with light of different wavelengths. This allows for creation of a model indicative of a material composition of the region of interest. In a specific example of the invention, the region of interest is illuminated with a certain initial resolution reference pattern, the image data is generated using two light detectors, and the data analysis consists of identifying and matching the relative locations of reference points (reference pattern) and correlating the images using this reference pattern, and then identifying the area(s) of missing or not-matched points in the images to apply the selective adaptation.
The adaptation procedure includes successively varying the illumination conditions by increasing the number of the projected light beams at least within the selected one or more areas of the region of interest (where missing or non-matched points have been identified), and, upon identifying the missing or non-matched points while reaching the maximal number of light beams involved in the illumination, projecting the additional coded pattern onto the selected area(s).
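The escalation logic described in this paragraph can be pictured as follows; this is a sketch under assumed per-area bookkeeping, and all names and limits are illustrative.

```python
# Sketch of the per-area escalation: beam count is raised only in areas
# flagged as having missing/non-matched points, and a coded pattern is
# queued once an area is already at the maximum density.
MAX_BEAMS_PER_AREA = 256   # assumed hardware limit, for illustration

def adapt(areas):
    """areas: dict mapping area id -> (beams, has_missing_points)."""
    coded_pattern_queue = []
    for area, (beams, has_missing) in areas.items():
        if not has_missing:
            continue                                   # area already good
        if beams < MAX_BEAMS_PER_AREA:
            areas[area] = (min(2 * beams, MAX_BEAMS_PER_AREA), True)
        else:
            coded_pattern_queue.append(area)           # sweep this area next
    return coded_pattern_queue

areas = {"wall": (64, False), "corner": (256, True), "column": (128, True)}
print(adapt(areas), areas)   # corner -> coded pattern; column -> densified
```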
According to another broad aspect of the present invention, there is provided a method for use in 3D real time dynamic sampling of a region of interest, the method comprising:
(i) producing structured light in the form of a main pattern formed by a predetermined number of spatially separated light beams of at least one wavelength of light, the structured light being indicative of a reference pattern formed by a few reference marks formed by a predetermined arrangement of some of said light beams;
(ii) projecting said predetermined number of light beams as a cloud of points onto the region of interest;
(iii) detecting light from the region of interest and generating measured image data indicative thereof;
(iv) analyzing the measured image data and selectively adapting the main pattern until an optimal pattern is projected onto the region of interest and a corresponding optimal image data is generated, said adapting comprising carrying out at least one of the following: increasing a number of the projected light beams within at least one selected area of said region of interest, and projecting an additional coded pattern onto at least one selected area of said region of interest, thereby enabling using said optimal image data to calculate cloud of points related data and create a model of the region of interest.
According to yet another broad aspect of the invention, there is provided a method for use in 3D real time dynamic sampling of a region of interest, the method comprising illuminating the region of interest by structured light including a main pattern formed by a predetermined number of spatially separated light beams of at least one wavelength of light, and a reference pattern within said main pattern, the reference pattern including a few spaced apart reference marks formed by a predetermined arrangement of some of said light beams forming the main pattern.
According to yet another broad aspect of the invention, there is provided an imaging system for use in 3D real time dynamic sampling of a region of interest, the system comprising: a) an illumination unit configured and operable for producing and projecting onto the region of interest structured light of predetermined illumination conditions, the structured light being in the form of a main pattern formed by a number of spatially separated light beams of at least one wavelength of light; b) a light detection unit for detecting light from the region of interest and generating image data indicative thereof; c) a control unit configured and operable for analyzing the image data and selectively operating the illumination unit for adapting the main pattern until detecting optimal image data corresponding to an optimal projected pattern, the control unit operating the illumination unit to obtain at least one of the following: increase a number of the projected light beams at least within at least one selected area of said region of interest, and project an additional coded pattern onto at least one selected area of said region of interest, to enable processing of the optimal image data to calculate cloud of points related data and create a model of the region of interest.
BRIEF DESCRIPTION OF THE DRAWINGS
In order to understand the invention and to see how it may be carried out in practice, preferred embodiments will now be described, by way of non-limiting example only, with reference to the accompanying drawings, in which:
Fig. 1A is a block diagram of a system of the present invention configured for 3D real time dynamic modeling (sampling) a region of interest;
Fig. 1B schematically illustrates an example of the configuration of the system of Fig. 1A using a MEMS mirrors array;
Figs. 1C and 1D illustrate the operational principles of a telecentric lens system suitable to be used in the sampling system of the present invention;
Fig. 2 shows another example of the implementation of the system of Fig. 1A, as a portable 3D real time dynamic shot-sampling system;
Fig. 3 shows yet another example of the configuration of the system of Fig. 1A, as a 360°, full sphere, shot-sampling system;
Figs. 4 and 5 show yet further examples of the system of the present invention configured as a professional real time, full space, dynamic movement system for animation and/or medical use;
Fig. 6 illustrates the principle of the 3D real time dynamic shot-sampler operation of the present invention using a LED and Laser Diode Array;
Figs. 7 and 8 illustrate the principle of depth calculation of a known point, from two points of view (triangulation), suitable to be used in the system of the present invention;
Fig. 9 exemplifies the system operation to achieve variable light power (and resolution);
Figs. 10A to 10C show an example of a method of the present invention for 3D real time dynamic modeling (sampling) a region of interest;
Fig. 11 exemplifies an image of structured light pattern carrying a reference pattern, according to the invention; and
Fig. 12 exemplifies a cyclic sweep pattern (coded pattern) suitable to be used in the present invention.
DETAILED DESCRIPTION OF THE INVENTION
Referring to Fig. 1A, there are illustrated, by way of a block diagram, the main elements of an imaging-sampling system 10 of the present invention configured and operable as a 3D Real Time Dynamic Shot-Sampler system.
System 10 includes an illumination unit 12 configured and operable for producing and projecting structured light onto a region of interest ROI; a light detection unit 14 for detecting light returned from the region of interest and generating measured data indicative thereof; and a control unit 16. System 10 is operable for real time creation of a 3D model of the region of interest, with simple and inexpensive system configuration.
Illumination unit 12 includes either a light emitter associated with a light splitter and deflector producing an array of spatially separated light beams from a light beam emitted by the light emitter, or an array (2D array) of light emitters producing an array of light beams, respectively. Optionally, the illumination unit may also include one or more additional splitters (e.g. grids) in the optical path of said array of light beams, thus creating even more separate light beams. The illumination unit can be constructed using simple and inexpensive technologies, for example including a LED and/or laser based light source with an array of controllably displaceable mirrors (such as a MEMS based technique); or an array of LEDs and/or lasers, or a spatial light modulator (SLM). The illumination unit is preferably configured and operable to form, within the structured light pattern (main pattern), a further local pattern indicative of a few (generally at least two) reference marks (points) of a predefined arrangement. To this end, some of said separated light beams of the main pattern are arranged to present the reference points' pattern. Such a reference marks' pattern within the main pattern of points is exemplified in Fig. 11.
Light detection unit 14 includes one or preferably more than one camera detecting light from the region of interest and generating data indicative thereof. The cameras are oriented with respect to each other and with respect to the region of interest so as to provide the desired coverage of the region of interest by the fields of view of the cameras, with the relative orientation between the cameras and the illumination unit being known. The cameras are preferably mounted for movement relative to the region of interest.
Control unit 16 is configured and operable to carry out, inter alia, image processing (pattern recognition) to identify the reference points and their correlation in the two cameras' images and in the successive images acquired by each of the cameras. The control unit is further configured for carrying out initial system adaptation to the required resolution, illumination conditions, and a distance to the region of interest; and further measurement adaptation to an object (region of interest). The latter includes image data analysis to identify whether the illumination unit is to be operated to increase a number of light beams (points) involved in the illumination of at least some parts of the region of interest (increased resolution) and/or whether a sweeping procedure (illumination by a coded pattern) is to be applied. Examples of the system operation will be described further below.
Fig. 1B illustrates a specific but not limiting example of the configuration of an imaging-sampling system 100 of the present invention. System 100 includes an illumination (projector) unit, a light detection unit, and a control unit. The illumination unit is configured to define an array of small (point-like) light sources controllably operable to project structured light in the form of a pattern of spatially separated light beams (cloud of points) onto a region of interest (object/surface/space) to be sampled. In this example, the illumination unit includes a light emitter unit 110 (including for example a LED and/or laser light source), associated with an optical unit 111 (one or more lenses); and a light splitting and deflecting unit 103 operating to form the structured light from the light generated by light emitter unit 110 and appropriately project the structured light pattern onto a region of interest. In the present example, splitter/deflector unit 103 includes N mirrors controllably guided for angular displacement and associated with an appropriate optical unit 108 (e.g. including a spherical mirror to increase a solid angle of propagation of the light beams).
Light returned from the illuminated region is detected by the detection unit including two cameras 101 and 102. The control unit is a computer system including various controlling and processing utilities operating as mentioned above and as will be described more specifically further below, to control the light projection onto the region of interest and to process and analyze the detected light.
System 100 is configured such that light generated by light emitter unit 110 is projected through optical system 111 onto a surface formed by the unit 103 of N mirrors guided for angular displacement. This may be an array of Digital Light Processors (DLP) commercially available from TEXAS INSTRUMENTS. Mirror unit 103 may for example include mirrors of a 13x13 μm size. The array of N mirrors thus presents N point-like, spatially separated, controllably switched light sources producing structured light in the form of N spatially separated light beams. The latter are projected onto a targeted object (not shown) by optical system 108 (and possibly also by means of a grid 114, which might be needed in special cases to improve the focusing). Two cameras (imagers) 101 and 102 operate to detect the projected points and their surroundings, when the light intensity at these points is relatively high as compared to that of the surroundings.
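Conceptually, the mirror array acts as N individually switched point sources selected by a binary "profile". The short sketch below models this with a boolean matrix; the array dimensions and the checkerboard profile are arbitrary examples, not values from the disclosure.

```python
# Illustrative model of the mirror array as N individually switched
# point-like sources: a boolean "profile" selects which mirrors pass
# light, and hence which points of the cloud are projected.
import numpy as np

ROWS, COLS = 8, 8                      # toy 8x8 mirror array (N = 64)
profile = np.zeros((ROWS, COLS), dtype=bool)
profile[::2, ::2] = True               # switch on every other mirror

beam_indices = np.argwhere(profile)    # mirror (row, col) -> projected beam
print(f"{len(beam_indices)} of {ROWS * COLS} mirrors active")
print(beam_indices[:4])                # first few projected point positions
```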
The control unit operates to calculate a distance to each of the projected points, given the following known parameters: a distance between the two cameras and a relative angle between them, a location of the reference point at each camera, and the profile of the mirrors that transmit/block the light incident thereon (or the profile of the separate light emitters, as the case may be).
Activation of one or more of the mirror profiles results in the creation of a cloud of points (COP), in which the location of each point in space corresponds to the object being sampled and a distance between the object and a measurement system. Additionally, the system may be configured to detect a gray level or color (being indicative of information about the object), providing additional data of the sampled object, directly and fully correlated in space (as being sampled from the same source). The illumination intensity projected by the sampling system of the present invention can be varied, which can be implemented in three ways:
The first is varying the intensity of emitted light, achieved by appropriately controlling the operation of the light emitter by means of a controller utility 109 of the control unit. Such variation of the illumination intensity is needed in order to produce a light intensity providing a contrast that distinguishes the projected light points from their surroundings in an image acquired by the camera. This intensity variation is needed when a distance to the sampled object is large, or alternatively when the illumination from the surroundings (ambient light) is relatively strong. For example, in case the intensity of emitted light is not uniform, such variation may occur only for the wavelength of light produced by the light source (e.g. in a NIR spectrum) and can thus be distinguished from the surroundings.
Another way is to simultaneously vary the switching of a number of mirrors (a local array) such that this local array presents a source of light formed by the number of separate light sources. This process is controlled by a respective controller utility 107 of the control unit, and is needed for example when a distance to the sampled object is large or the illumination in the vicinity of the object (ambient light) is relatively strong, and the contrast between the illuminated point and its surroundings is low. This process of illumination intensity variation reduces the resolution of a single sample (because more mirrors are involved in the construction of the single sample), while allowing for obtaining a desired resolution for a suitable assembly of a higher number of samples. This is exemplified in Fig. 9: using the DLP as a switchable light source enables each mirror to present an independent, separate light source. Thus, when a low light level is required, each one of N mirrors can present a pixel (N/2 resolution), and a high resolution measure is available without using the low speed sweep mode. Higher light power requires operating a bundle of mirrors as a single light source, thus resulting in higher light power and reduction of the resolution (without sweeping).
The third option to obtain the illumination intensity variation consists of increasing the exposure time of a sensor in the camera (integration time), which enables more photons to be integrated within one exposure period. This process is performed by means of a controller utility 104 of the control unit associated with the cameras. A longer exposure time is possible, within the limitation of the sensor in the camera, for example in the case of sampling in darkness an object located at a larger distance with limited illumination intensity of the points (to obtain a high contrast of the points relative to their surroundings).
Decision-making with regard to which one of the above processes is to be applied to obtain the desired illumination intensity variation is carried out by a local processing unit 106 (being part of the control unit) based on the criteria of the process nature and the environmental conditions. Such decision-making may be partially implemented automatically and partially based on user input.
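The decision-making among the three processes might be sketched as a simple escalation policy, e.g. as below; the ordering follows the text (emitter intensity first, then mirror bundling, then exposure), while the numeric limits are invented for illustration.

```python
# Hypothetical decision logic for the three intensity-variation processes
# (emitter power, mirror bundling, sensor exposure). Limits are assumed.
def raise_illumination(emitter_gain, bundle_size, exposure_ms,
                       max_gain=4.0, max_bundle=16, max_exposure_ms=100):
    if emitter_gain < max_gain:
        return emitter_gain * 2, bundle_size, exposure_ms   # process 1
    if bundle_size < max_bundle:
        # process 2: more mirrors per source -> more light, less resolution
        return emitter_gain, bundle_size * 2, exposure_ms
    if exposure_ms < max_exposure_ms:
        return emitter_gain, bundle_size, exposure_ms * 2   # process 3
    raise RuntimeError("illumination budget exhausted")

state = (1.0, 1, 10)
for _ in range(6):
    state = raise_illumination(*state)
print(state)   # -> (4.0, 16, 10) after six escalation steps
```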
The illumination unit is preferably configured to produce light of different (or variable) wavelengths for different uses. To this end, appropriate spectral filters 112 and 113 can be used, associated with cameras 101 and 102, respectively. For example, the use of IR illumination together with appropriate filtering allows for receiving samples also for the case where the object is significantly illuminated (e.g. for sampling a model for animation), with no disturbance of the sampling and the illumination of the object. Also, polarized illuminating light can be used to reduce the disturbances. The resolution of sampling (i.e. a number of points illuminated in each sample) can be defined in different ways, for example using an automatic mode based on a calibration sample. The results of sampling, calculated in real time (an example of this process is described further below), are transmitted through a communication controller 105. It should be noted that the optical system used in the sampling system of the present invention may utilize a telecentric lens system (in front of the camera), which enhances accuracy during short range and relatively small objects testing. The operational principles of the telecentric lens system are generally known and are briefly described below with reference to Figs. 1C and 1D.
Fig. 1C shows a ray diagram of a conventional telecentric lens system. An object is at O, an image at I. A camera is placed at I. A "stop" or aperture is at S to block all the rays except for those in a narrow bundle. This serves not only as an aperture stop, controlling the amount of light that reaches the imager (camera), but is also strategically located at the focal point of the front and rear lens elements. There is an optimum location for both the object and the camera for best image sharpness. If the object is moved nearer or farther from the lens system, its size on the camera does not change. The quality of focus changes, but if the stop S is made very small, this can be minimized. A telecentric lens "sees" a cylindrical tube of space of a diameter equal to that of the front lens element. It is limited to imaging objects whose lateral dimensions do not exceed the diameter of the lens. The subject (object) is rendered on the camera isometrically, such that equal distances, whatever their orientation, are equal distances on the camera. Parallel lines of the subject are parallel on the camera. A slight readjustment of lens elements can result in an "entocentric" lens in which parallel lines converge, but in a sense opposite to that of normal imaging perspective. The entocentric picture renders more distant objects larger than nearer ones. The system of Fig. 1C is telecentric in both the image and object space. Moving the object or camera relative to the lens results in no change of image size. Fig. 1D shows a simpler system that is telecentric only in object space, with simple and non-achromatic lenses. This system includes two positive lenses, L1 and L2, and a digital camera. The camera lens is at S, and is not explicitly shown. In fact, all that is really necessary at plane S is a small aperture stop to limit rays to narrow bundles. If a large diameter lens is used, the need for the second lens is eliminated. The extra lens L2 serves only to present to the camera a larger angle of view. In this set-up, F labels the plane of the camera. If only one lens, L1, is used, it is located so that its focal point is at the diaphragm stop S. The camera is moved forward or back with respect to the positive lens(es) until telecentric conditions are obtained (the object size seen by the camera is independent of subject position). Then the object is placed at the position of best focus.
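The distance-invariance property that distinguishes a telecentric lens can be illustrated numerically; the focal length, magnification and distances below are arbitrary values chosen only for the example.

```python
# Numeric illustration (arbitrary values): a pinhole lens scales the image
# with distance, while an object-space telecentric lens keeps it constant.
f_mm, object_mm, telecentric_mag = 50.0, 100.0, 0.5

for z_mm in (500.0, 1000.0, 2000.0):
    pinhole_image = object_mm * f_mm / z_mm          # size falls with distance
    telecentric_image = object_mm * telecentric_mag  # independent of distance
    print(f"Z={z_mm:6.0f} mm  pinhole={pinhole_image:5.1f} mm  "
          f"telecentric={telecentric_image:5.1f} mm")
```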
Reference is made to Fig. 2 exemplifying a portable sampling system 200 of the present invention configured to enable for example sampling of the existing state ("as made") of a structure during the structure creation process. System 200 includes a sampler-imager module 208, a power supply system, and a control unit.
The power supply system includes a charge/power supply unit 202 which powers the sampler module 208 and is in turn connected to an assembly of batteries 201. This charge/power supply unit 202 is also associated with external voltage supply 214, 215.
The control unit includes inter alia a control panel and keyboard arrangement 203 enabling the system operation in an independent mode, and a display unit 204 allowing for managing and displaying data received from scanning and from external communication (via wires or wireless 209, 211, 212 and 213), through a communication controller 205. The received data is stored, and undergoes digital and/or graphical processing using additional receiving units, such as a touch pad input device 206 and a digital pen input device 207. The latter can also be used for example for identifying and/or marking (redlining) the received data, or as points for consideration when the received material is compared to a model existing in a remote system, or alternatively, for example for comparison with the system model for the purposes of broadening the consideration regarding a comparison of the existing state with the model or the requirement. The system enables ongoing control of the building process by sampling the current status (as made), comparing it to what has been planned, and then deciding as to whether to update the design or to change what has been built, or mark (redlining) a part of the model which should be evaluated (a problem in the design, for example).
The following is the description of a specific but not limiting example of the sampling method according to the invention: An inspector arrives at a site (building site) to be sampled, and installs a sampling system at a reference point predefined at the beginning of the building process, or alternatively marks the known reference points in the site (e.g. in a room). In case the system is installed at the reference point, the system can be translated to acquire a number of samples that cover a certain space (e.g. each sample being acquired for about 50 milliseconds). In case an automatic system is used (as will be described below with reference to Fig. 3) the automatic management mode can be provided. Several methods can be used for estimating whether the sampling provides for obtaining a model of the existing state (as made): (a) the model is stored in the system memory and the samples are presented thereon; (b) the model is located at a remote system (e.g. remote server or local server of the building site), sampling data is sent there and processing is carried out at the remote system; (c) the data is collected in the local system as a cloud of points, for the purposes of documentation and future use, and no processing is carried out at the stage of data collection, or alternatively a model is created and verification is carried out based on visual inspection.
As also shown in Fig. 2, sampler module 208 of system 200 preferably includes a connection port for connecting to an auxiliary control channel 210 configured for bi-directional control, thereby allowing remote control of the system. Such control may for example include activation of automatic scanning mode in the system. This is exemplified in Fig. 3.
Fig. 3 shows an example of a full sphere (360°), shot-sampling system 300 of the present invention associated with a target 301 to be sampled. System 300 includes a sampler module 302; and a motion system formed by an X-Y axis rotating plate 304 and preferably also a pitch, roll and azimuth mechanism 303. The motion system is configured and operable to allow movement of sampler system 302 in the full space (360°). Motion system 304 (or 303-304) is operated by a control unit (card or board) 305, synchronizing the motion system operation with that of the sampler 302, through the above-described control channel (210 in Fig. 2). The entire system 300 is placed on a location stabilizer unit 306, enabling a relatively smooth movement allowing for constructing a space of cloud of points with minimum adjustment procedures (stitching). Despite the significance of this element, as the calculation of the cloud of points is carried out in real time, the deviation can be corrected with high reliability.
Such a system 300 can be used for various applications. These include for example automatic documentation of the existing state for each of the building stages, with no need for a skilled user. The system requires a single reference point that is to be defined at the beginning of the building process or alternatively at the beginning of the sampling procedure, under conditions to be described (or marked on the model). This will enable simple, immediate and reliable modeling that enables tracking progress in the construction and evaluating it against the model (or transferring it to a remote station). Another possible application is the analysis of events. In this case, the system is installed within the event region and operates to provide documentation of said region as a live and active model (completed by the gray level and RGB data). To obtain data regarding material composition (e.g. presence of blood, brake marks of tires, etc.), different kinds of illumination and filtering can be used. There is no need for installation of marks or for a process of immediate building of the model. For example, car tire marks on the road will be sampled differently with a different light source (and blood marks also). This will enable creation of a model that will hold multidisciplinary data to give a reliable model of an accident zone. The created model is an active model and allows for producing simulations of the event, based on the data detected during the sampling procedure.
Also, a mechanical system can be produced that moves automatically around the object (which is static) and creates the full model. Such a system is suitable for use in documentation and creation of models of objects of a limited size (sculptures, people).
Reference is made to Fig. 4 showing an example of a system 400 of the present invention configured as a professional, real time, full space, shot-sampling-tracking system, utilizing dynamic movement for animation and/or medical use. In distinction to the above-described system 300, system 400 enables construction of a space region by static means with no movement of a sampler. This system is suitable for operating in a closed space in which an object to be sampled moves. The system may be used in a variety of applications, the common feature being the ability of the system to sample, image, track the movement, adjust the sample to a model and warn about deviations, or guide the process in accordance with the subject performance, all being real time features (as per the performance and accuracy requirements). These applications include for example sampling of a sportsman's motion, where the motion is sampled as a model and a picture for the purposes of its analysis, explanation and guidance. Another possible application is sampling a patient suffering from limb or other damage that affects mobility (stroke, Parkinson's disease, etc.). Yet other possible applications include sampling and guiding of professional dancers; sampling motion of mechanical systems and/or objects; and creation of professional animation by means of sampling a model of motion and its effect on a model of another character, synchronized with sound. These systems are characterized by their ability for real time sampling and displaying, with no marks or sensors on an image.
As shown in Fig. 4, system 400 includes multiple sampler modules arranged to cover (by their fields of view) a certain spherically-shaped space region. Four such sampler modules 401, 402, 403 and 404 are used in the present example being accommodated at the corners of a space region to cover the spherical shape of about 140 degrees; and an additional sampler 405 is located at the top of the space region to complete the space coverage. Sampled (measured) data is transferred from these samplers to a controller 406 configured for synchronizing these data in a time plane and transmitting the synchronized data to a stitching utility 407 that matches the sampled time-synchronized data in space. The so-processed data (time and space matched data) is transmitted to a model constructing utility 408. The model-related data is then transmitted to a connection utility 409 that communicates with all the other relevant systems (computers, video and audio systems, sensors, illuminators, etc.).
Fig. 5 exemplifies the simplest implementation of the above system, intended mainly for home use. System 500 includes a single sampler 501 connectable to a personal computer 502 (controlled by user input from a keyboard 504, mouse and/or touch pad 505). System 500 utilizes a main display 503. The latter may, for example, present the user being sampled as a different subject (for example, a girl is dancing, and an image of her admired subject appears on the display); or alternatively the system displays an image of a dancer, and the girl for example follows the motion of said image (while the quality of her tracking is presented in real time on the main display). The system may also include an additional display 506 for displaying the same to an examiner or at a remote location.
Such a system can be used for example for creating a face model for biometric identification. In this case, the system may additionally utilize different or varying wavelengths for obtaining additional data about the materials composition (to avoid forgeries).
Reference is made to Fig. 6 illustrating a system 700 configured for 3D real time dynamic shot-sampler operation. In this system, the use of a light emitter associated with guided mirrors (e.g. MEMS technology) to produce structured light is replaced by the direct use of an array of light emitters 703 (LEDs and/or laser diodes, preferably operating with varying wavelengths). The light emitters are controllably operated by a controller board 707 (constituting a part of the control unit). The system operation is similar to that of system 100 in Fig. 1B. The elements of system 700 are similar to those of system 100, except that a single optical system 708 is used in system 700 instead of the two optical systems 108 and 111 of system 100.
It should be noted that the present invention can utilize any known suitable technique for the depth calculation, for example a triangulation based technique. The principles of triangulation are generally known. The calculation of the depth of a known point from two points of view, in order to reconstruct a 3D cloud of points model (out of a large number of points) is briefly described below with reference to Figs. 7 and 8.
Fig. 7 presents a top view of a stereo system composed of two pinhole cameras. The left and right image planes are coplanar. Ol and Or are the centers of projection. The optical axes are parallel, so that the fixation point, defined as the point of intersection of the optical axes, lies infinitely far from the cameras. The way in which stereo determines the position in space of P and Q is triangulation, that is, by intersecting the rays defined by the centers of projection and the images of P and Q: pl, pr, ql, qr. Triangulation depends crucially on the solution of the correspondence problem: if (pl, pr) and (ql, qr) are chosen as pairs of corresponding image points, intersecting the rays Ol-pl, Or-pr and Ol-ql, Or-qr leads to interpreting the image points as projections of P and Q; but if (pl, qr) and (ql, pr) are the selected pairs of corresponding points, triangulation returns P' and Q'. It should be noted that both interpretations, although dramatically different, stand on equal footing.
Assuming the correspondence problem has been solved, we turn to reconstruction; it is instructive to write the equations underlying the triangulation of Fig. 8. With reference to Fig. 8, let us concentrate on the recovery of a single point P from its projections pl and pr. The distance T between the centers of projection Ol and Or is called the baseline of the stereo system. Let xl and xr be the coordinates of pl and pr with respect to the principal points Cl and Cr, f the common focal length, and Z the distance between P and the baseline. From the similar triangles (pl, P, pr) and (Ol, P, Or) we have:

(T + xl - xr) / (Z - f) = T / Z     (1)

Solving (1) for Z, we obtain:

Z = f·T / d     (2)

where d = xr - xl; this disparity measures the difference in retinal position between the corresponding points in the two images. It can be seen from equation (2) that depth is inversely proportional to disparity.
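Equation (2) translates directly into code. The following sketch computes depth from disparity for a few example point pairs; the baseline and focal values are illustrative, not taken from the disclosure.

```python
# Direct implementation of equation (2): depth from stereo disparity.
def depth_from_disparity(x_l, x_r, f, T):
    """Z = f*T/d with d = x_r - x_l (coordinates w.r.t. principal points)."""
    d = x_r - x_l
    if d == 0:
        return float("inf")            # zero disparity: point at infinity
    return f * T / d

f_px, T_mm = 800.0, 120.0              # focal length (pixels), baseline (mm)
for x_l, x_r in ((-10.0, 10.0), (-5.0, 5.0), (-1.0, 1.0)):
    z = depth_from_disparity(x_l, x_r, f_px, T_mm)
    print(f"d={x_r - x_l:5.1f} px -> Z={z:8.1f} mm")
```

The printed values decrease in disparity and grow in depth, illustrating the inverse proportionality noted above.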
As indicated above, the controlling of the light sources and the processes of signal analysis enable the system adaptation in a number of planes. With regard to the system resolution, the following should be noted. Determination of differences in the received gray levels between the two sampled light points allows the system, by observing the sampling window, to adapt the required number of active light points to the nature of the sampled object, such that the quality of a model is optimal. For example, for large and planar surfaces, a smaller number of light points per surface unit is used, but a larger number of points for corners and columns. The amount of light produced by the light point is derived from two main variables: the intensity of light impinging from a light emitter onto the mirrors, or alternatively the intensity of an array of light emitters (when no mirrors are used); and the number of mirrors operating at a time point (as exemplified in Fig. 9). The larger the number of operating mirrors (or operating light emitters), the larger the amount of light representing a sampled region.
The amount of light produced by the system is controlled by the system in accordance with the visual contrast of the light points (where the determination of an arrangement of an array of these points is repeated for each sample). When the contrast increases, the intensity of light produced by the light source can be reduced. At an initial stage, the light intensity of the light emitter is increased, and at the next stage the number of mirrors forming a single unit is increased.
One of the advantageous features of the system of the present invention is its ability to obtain information that optimizes the system operation. In contrast to a scanner, the sampled points are not captured in a sequence synchronized in space, but rather are captured simultaneously, during a regular picture acquisition, from the same sensor, which allows for completion of the data of the cloud of points and a gray level image, perfectly synchronized, with no computational process. In this case, a model is created from the cloud of sampled points and is added to data received from the regular image. This process can be implemented with a number of wavelengths providing additional data, the model thus becoming a "clever model", including information also about an envelope between the sampled points. This data is indicative of the surface of the envelope, material composition, absorption of wavelengths, etc.
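The idea of extracting the projected points and the gray-level image from the same frame, so that the two are synchronized by construction, can be sketched as follows; the synthetic frame, the spot positions and the brightness threshold are illustrative assumptions.

```python
# Sketch of the "single sensor" idea: the projected points and the
# gray-level image come from the same frame, so the cloud of points and
# the image are synchronized by construction.
import numpy as np

rng = np.random.default_rng(0)
frame = rng.integers(0, 80, size=(48, 64)).astype(np.uint8)  # ambient scene
points = [(10, 20), (30, 40), (25, 5)]                       # projected spots
for r, c in points:
    frame[r, c] = 255                                        # bright points

spot_mask = frame > 200            # detect the projected pattern...
gray_image = frame                 # ...while the same frame is the image
recovered = [(int(r), int(c)) for r, c in zip(*np.nonzero(spot_mask))]
print("recovered points:", recovered)
```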
Creation of such a "clever model" based on a cloud of points together with data regarding material composition and envelope enables production of a mathematical structure allowing simulation of an event, activating a mathematical and/or heuristic procedure on the model elements for the purposes of creating an image of an event occurring with the model. Considering for example the analysis of a car accident, a model with a layer of tire braking marks is created while analyzing the region of the event, where said layer is identified within the model as being associated with a change in the illumination. These features allow for simulating the vehicle's arrival at the region of the accident. Another example relates to a face model: the face model including received data indicative of a material composition of the envelope allows for creating motion and distortion of the face by means of a controlled movement of the points in the model (simulation of speech).
The above-described examples relate to a sampling system adaptive to the level of contrast observed by the camera, enabling the system to adapt to distance and to variable levels of illumination.
Referring to Figs. 10A-10C, there is shown an example of the operational method according to the invention, demonstrating the ability of the invented technique to be adaptive to the nature of a region of interest (object and/or surface and/or open space) to be sampled. The adaptive procedures exemplified in Figs. 10A-10C are based on a dynamic process of data analysis of the received data with respect to reference data projected by a projection mode (adaptive test pattern). The important feature of this method is that the data analysis process does not perform correlation between the acquired and displayed images (that contain a coincidental or pseudo-coincidental test pattern), but rather examines the acquired images with respect to a predefined and updated reference system. The projected image contains known (predefined) simple elements (reference points), such that identification of at least parts thereof enables decision making at a high level of certainty. The identification of the parts of a reference allows inspection thereof with a high resolution that allows for identifying corners or other surface edges.
The use of regular and/or telecentric optics allows for constructing data enabling decision making based on data indicative of the image size at short distances and for small objects, with no dependence on distance, and otherwise in accordance with a change in the dependence on distance. A combination of data from the two optical systems allows the sampling system to distinguish between, for example, a structural change on a wall and a window or blind existing in its vicinity. Moreover, the accuracy of such a system exceeds that of regular optics alone, which compensates for the advantages of a laser scanner at those distances. In this way, the system analyzes and describes non-correlated regions in the two images and the test pattern (reference image), and these regions then undergo a process of pattern projection thereon with higher resolution (a larger number of projected light points) that reduces the non-correlation to a minimum. At that stage, the system has data with reasonable resolution. In case of "holes" in the data, or alternatively a difficulty in setting a 3D structure, the system activates a scanning process (sweeping), in which coded patterns are projected enabling tracking of a (pseudo) change along the time axis, and by this the nature of the treated region is well predicted. An example of a sweeping cyclic pattern is shown in Fig. 12. In the example of Figs. 10A-10C, the optical system of Fig. 1B (using a light emitter associated with an array of mirrors, and using regular or telecentric optics) is considered, and the method is described with respect to only one region of interest, without the image stitching that enables calculation of the space. As shown in Fig. 10A, when the process is initially activated (step 1301), a medium-resolution reference pattern is displayed by a projector (step 1302). The two cameras capture images of this pattern (step 1303). The pictures may be enhanced (step 1304), and the projected pattern is matched in the two pictures (step 1305). The latter procedure may utilize fusion of two or more protocols, using the reference image as an anchor for the calculation procedure. This is implemented by identifying and matching relative locations of the reference points pattern on the main pattern; and correlating the images using the reference points pattern as the reference layer. Then, images of the projected pattern are analyzed to identify areas with large holes of matched points (step 1306), the large holes being predefined as areas of pixels with missing or non-matched points (e.g. areas with more than 5 percent of missing or non-matched points). This process allows for appropriately setting the initial system resolution and illumination conditions.
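An illustrative version of the "large holes" test of step 1306 is sketched below; the grid size and the injected problem region are assumed, while the 5 percent threshold follows the example given in the text.

```python
# Split the matched-point grid into areas and flag any area whose
# fraction of missing or non-matched points exceeds the 5% threshold,
# marking it for the higher-resolution second stage.
import numpy as np

rng = np.random.default_rng(2)
matched = rng.random((32, 32)) > 0.01            # True = matched in both images
matched[8:16, 8:16] = rng.random((8, 8)) > 0.4   # injected problem region

def flag_hole_areas(matched, area=8, threshold=0.05):
    flagged = []
    for r in range(0, matched.shape[0], area):
        for c in range(0, matched.shape[1], area):
            block = matched[r:r + area, c:c + area]
            if 1 - block.mean() > threshold:     # fraction of missing points
                flagged.append((r, c))
    return flagged

print(flag_hole_areas(matched))   # typically [(8, 8)]: re-project denser there
```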
As shown in Fig. 10B, the system then initiates a second measurement stage (step 1308). A high-resolution light pattern is projected by the projection module, where this light pattern is adapted to the non-matched areas (step 1309). The above-described first stage is repeated for this resolution image. The number of steps can vary from one up to any required number, depending on the structure of the measured object and/or surface and/or space, as well as on the required cloud of points' resolution. If holes are identified, the sweeping process (projection of a coded pattern) is initiated (step 1312); if not identified, a calculation process is applied (step 1310) consisting of calculation of the cloud of points using triangulation or another suitable technique, presenting the final measurement stage (step 1311). As shown in Fig. 10C, if the sweeping process is to be initiated (step 1312), an appropriate sweeping method (cycle, point, stripe, etc.) is selected (step 1313), a region of interest (one to which sweeping is to be applied) is selected (step 1314), and the sweep resolution is calculated (step 1315), all these selections being defined according to the data obtained in the previous steps. Then, the next pattern is generated and projected (step 1316), the two cameras acquire images of the region of interest (step 1317), which images may be enhanced (step 1318), and the image data is saved as a data array (step 1319). Steps 1316-1319 are repeated while increasing the resolution (step 1321) until the maximal possible resolution (maximal number of light beams involved in the pattern) is reached (step 1320), and then the cloud of points related data is calculated using a phase change in the pictures across the time domain (step 1322), presenting the final process step (1323).
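The sweeping of steps 1316-1322 amounts to temporal coding: a pattern is shifted over successive frames, and each pixel's position is recovered from the frame in which it is brightest. A toy version follows (noiseless, with assumed dimensions):

```python
# Toy version of the sweep of Figs. 10C/12: a cyclic stripe pattern is
# shifted along the Y axis over successive frames; the frame index of
# peak brightness per row encodes its position ("phase change across
# the time domain").
import numpy as np

HEIGHT, FRAMES = 16, 16
frames = np.zeros((FRAMES, HEIGHT))
for t in range(FRAMES):
    frames[t, t % HEIGHT] = 1.0        # stripe at row t in frame t

phase = frames.argmax(axis=0)          # per-row frame of peak intensity
print(phase)                           # row i was lit in frame i
```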
Fig. 11 exemplifies the recognizable matching points (reference points) in the main pattern of structured light. Fig. 12 exemplifies a cyclic sweep pattern, which in the present example is variable only along the Y-axis in the time domain.
Thus, the present invention provides a 3D real-time dynamic shot sampler system which may be used in a variety of applications, e.g. for tracking the performance and/or description of the existing state, as well as constructing an integral natural 3D model utilizing a 3D cloud of points combined with RGB in a desired coordinate system. Moreover, the system of the present invention can operate on the basis of an image and scanned model, enabling automatic reconstruction of the model. The system provides real time calculations allowing for dynamic model creation. The system is inexpensive, and allows for effective operation with no specific preparation. The system is insensitive to illumination conditions (is operable under reasonable daylight, as well as at night). The system allows for redlining on the image and/or model for the purposes of updating. The system can be configured for local operational mode or for remote mode (via a communication network). The technique of the present invention provides sampling and imaging allowing construction of a live and active model in space, by combining a cloud of scan points and an image. The invented technique in some embodiments utilizes an array of guided mirrors that produce from a single light source an array of point-like light sources, dynamically controlled by a control system. The simple implementations of the invented system utilize an array of very small light sources, controlled to produce the sample. The invention provides for automatically and/or manually receiving a dynamic active and live model of a space region or an object sampled in two modes: as a dynamic cloud of points synchronized in space and time; and as a dynamic model combining data from the cloud of points and data from a regular photo (image) correlated in space and time. The invented technique also provides for dynamically tracking the object motion (tower, pole, people, etc.), and enables full sphere data reading by automatic means. The invention also provides for intelligent use of illumination of dynamically varying wavelengths and angular distribution; analysis of light absorption (by means of analyzing a structure of points of light reflection); estimation of the material compositions and objects; and dealing with light filtering ranges.
The invention can be used for constructing live models, both static and dynamic, of objects and spaces, with a possibility of creating an independent model or as an integration and completion of an existing model (for the purposes of updating or matching test of the existing state to the model). The invention can also be used for tracking motion in space to create animation or control the motion of a body (including a human body) for medical and other applications; creation of models to reconstruct a fragment of events, including the creation of a dynamic model (e.g. reconstruct a car accident or terror act). Also, the created model enables creation of a characterizing signature of an object (including a live object) by means of a shape and material composition allowing its identification (including a human face). The invention can utilize professional and amateur imaging systems for receiving an image and a 3D model in a cost-effective way.
Those skilled in the art will readily appreciate that various modifications and changes can be applied to the embodiments of the invention as hereinbefore described without departing from its scope defined in and by the appended claims.

Claims

CLAIMS:
1. An imaging method for use in 3D real-time dynamic sampling of a region of interest, the method comprising:
(i) illuminating the region of interest with structured light of predetermined illumination conditions, the structured light being in the form of a main pattern formed by a number of spatially separated light beams of at least one wavelength of light;
(ii) detecting light from the region of interest and generating image data indicative thereof;
(iii) analyzing the image data and selectively adapting the main pattern until detecting optimal image data corresponding to an optimal projected pattern, said adapting comprising carrying out at least one of the following: increasing a number of the projected light beams at least within at least one selected area of said region of interest, and projecting an additional coded pattern onto at least one selected area of said region of interest, thereby enabling processing of the optimal image data to calculate cloud of points related data, and using this data to create a model of the region of interest.
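By way of illustration only (not part of the claimed subject matter), the three steps of Claim 1 can be read as a closed control loop. The sketch below is a hypothetical rendering of that loop; the hardware-facing operations (illuminate, detect) and the defect analysis are injected as placeholder callables and are assumptions of the sketch, not definitions from this application.

```python
# Hypothetical skeleton of the claimed method, steps (i)-(iii), as a loop:
# project the main pattern, capture image data, analyze it, and adapt the
# pattern until the image data is optimal (no missing/unmatched points).
def sample_region_of_interest(illuminate, detect, analyze_defects, adapt_pattern,
                              pattern, max_iterations=10):
    for _ in range(max_iterations):
        illuminate(pattern)                      # step (i): project structured light
        image_data = detect()                    # step (ii): generate image data
        defects = analyze_defects(image_data)    # step (iii): missing/unmatched points
        if not defects:
            return pattern, image_data           # optimal projected pattern reached
        pattern = adapt_pattern(pattern, defects)
    return pattern, image_data                   # best achievable within the budget
```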
2. The method of Claim 1, wherein said main pattern carries a reference pattern comprising a few reference marks formed by a predetermined arrangement of some of said light beams.
3. The method of Claim 1 or 2, wherein the structured light is produced by emitting a light beam and splitting it into the predetermined number of spatially separated light beams.
4. The method of Claim 3, wherein said splitting comprises impinging the emitted light beam onto a surface formed by an array of mirrors controllably activated and angularly displaced.
5. The method of Claim 1 or 2, wherein the structured light is produced by controllably operating an array of separate light emitters.
6. The method of any one of the preceding Claims, wherein the structured light is produced by carrying out an initial adaptation of the imaging to a required resolution, illumination conditions, and a distance from an imaging system to the region of interest.
7. The method of Claim 6, wherein the initial adaptation comprises successively illuminating the region of interest and detecting the image data, with at least one varying parameter from the following: a number of the light beams at least within one or more selected areas of the region of interest, intensity of detected illumination, polarization of illuminating light, wavelength of illuminating light, and the distance to the region of interest, and analyzing the successive image data until detecting an optimal condition of substantially no missing light points in the image and no non-matched points in the multiple images, within a certain predefined threshold.
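As a hedged illustration only, the initial adaptation of Claim 7 can be viewed as a parameter sweep that stops at the first setting meeting the optimal condition. The function names and the particular parameter grids below are assumptions made for the sketch, not values from this application.

```python
# Hypothetical parameter sweep for the initial adaptation: successively vary
# illumination parameters and stop once the images show substantially no
# missing or non-matched points within a predefined threshold.
from itertools import product

def initial_adaptation(project_and_capture, count_defects,
                       beam_counts=(64, 256, 1024),
                       intensities=(0.25, 0.5, 1.0),
                       threshold=3):
    for beams, intensity in product(beam_counts, intensities):
        images = project_and_capture(beams=beams, intensity=intensity)
        if count_defects(images) <= threshold:    # optimal condition detected
            return {"beams": beams, "intensity": intensity}
    return None   # no tested setting was sufficient; further adaptation needed
```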
8. The method of Claims 4 and 7, wherein the illumination intensity variation is carried out by controllably varying intensity of emitted light.
9. The method of Claims 4 and 7, wherein the illumination intensity variation is carried out by varying the simultaneous switching of a local array formed by a number of said mirrors, such that said local array presents a source of light formed by the number of separate light sources.
10. The method of Claims 4 and 7, wherein said adaptation comprises initially increasing the intensity of emitted light, and then increasing the number of mirrors involved in the splitting and deflecting of said light.
11. The method of Claims 5 and 7, wherein the illumination intensity variation is carried out by varying the simultaneous switching of a local array formed by a number of said light emitters, such that said local array presents a source of light formed by the number of separate light sources.
12. The method of Claim 7, wherein the illumination intensity variation is carried out by controllably increasing an exposure time of at least one light detector.
13. The method of Claim 7, wherein the illumination intensity variation is carried out based on the nature of the region of interest to be modeled and on environmental conditions.
14. The method of any one of the preceding Claims, wherein light points projected onto the region of interest by said light beams, respectively, are captured simultaneously during a regular image acquisition, thereby providing synchronized measured data of the cloud of points and gray-level image data, the model being created from the cloud of sampled points supplemented with the regular image data.
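For illustration only: because the projected light points of Claim 14 are captured during a regular image acquisition, each 3D sample arrives already synchronized with a gray-level pixel value. A minimal sketch of such a fusion step follows, assuming the pixel coordinates of each point are known from the matching stage; the function name and array layout are illustrative assumptions.

```python
# Hypothetical fusion of the sampled cloud of points with the gray-level image:
# each 3D point is paired with the image intensity at its pixel location,
# yielding a synchronized, texture-carrying cloud of (x, y, z, intensity) rows.
import numpy as np

def fuse_cloud_with_image(points_xyz, pixels_uv, gray_image):
    uv = np.asarray(pixels_uv, dtype=int)
    intensities = gray_image[uv[:, 1], uv[:, 0]]    # image row = v, column = u
    return np.column_stack([np.asarray(points_xyz), intensities])
```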
15. The method of any one of the preceding Claims, wherein the region of interest is illuminated with light of different wavelengths, said model being therefore indicative of a material composition of the region of interest.
16. The method according to any one of the preceding Claims, comprising: carrying out said illumination with a certain initial-resolution reference pattern, using two light detectors to generate the image data, and analyzing the image data to identify and match the relative locations of a reference pattern contained in said main pattern, to correlate the images using the reference pattern, and to identify the areas of missing or non-matched points in the images to which said selective adaptation is to be applied.
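Claim 16's use of a reference pattern can be illustrated, under simplifying assumptions, as follows: the few reference marks visible in both detector images fix a coarse correspondence between the views, after which pattern points that fail to find a counterpart are flagged as areas needing adaptation. The translation-only model below is a deliberate simplification (a real system would estimate a full epipolar relation), and all names are illustrative.

```python
# Hypothetical two-view correlation using reference marks: estimate a coarse
# offset between the images from the marks, then flag pattern points in the
# left image that have no counterpart near the predicted right-image location.
import numpy as np

def correlate_views(marks_left, marks_right, points_left, points_right, tol=2.0):
    offset = np.mean(np.asarray(marks_right) - np.asarray(marks_left), axis=0)
    right = np.asarray(points_right)
    unmatched = []
    for p in np.asarray(points_left):
        distances = np.linalg.norm(right - (p + offset), axis=1)
        if distances.min() > tol:
            unmatched.append(p)       # candidate area for selective adaptation
    return offset, unmatched
```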
17. The method according to any one of the preceding Claims, wherein said adapting comprises successively varying the illumination conditions by increasing the number of the projected light beams at least within the selected one or more areas of said region of interest, and, upon identifying missing or non-matched points when the maximal number of light beams involved in the illumination has been reached, projecting the additional coded pattern onto said at least one selected area of the region of interest.
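The escalation order of Claim 17, namely adding beams in the problem areas first and overlaying a coded pattern only once the beam count is maxed out, might be captured by a policy like the following sketch. The Pattern and AreaState structures are invented stand-ins for whatever representation a real controller would use, as are the numeric limits.

```python
# Hypothetical adaptation policy: densify a problem area up to a beam-count
# ceiling; once the ceiling is reached, switch that area to an additional
# coded pattern so every remaining point can be identified unambiguously.
from dataclasses import dataclass, field

@dataclass
class AreaState:
    beams: int = 64
    coded: bool = False

@dataclass
class Pattern:
    areas: dict = field(default_factory=dict)    # area id -> AreaState

def adapt(pattern, defect_area_ids, max_beams=1024):
    for area_id in defect_area_ids:
        state = pattern.areas.setdefault(area_id, AreaState())
        if state.beams < max_beams:
            state.beams = min(state.beams * 2, max_beams)   # more beams first
        else:
            state.coded = True       # then project the additional coded pattern
    return pattern
```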
18. An imaging system for use in 3D real-time dynamic sampling of a region of interest, the system comprising:
(a) an illumination unit configured and operable for producing and projecting onto the region of interest structured light of predetermined illumination conditions, the structured light being in the form of a main pattern formed by a number of spatially separated light beams of at least one wavelength of light;
(b) a light detection unit for detecting light returned from the region of interest and generating image data indicative thereof;
(c) a control unit configured and operable for analyzing the image data and selectively operating the illumination unit for adapting the main pattern until detecting optimal image data corresponding to an optimal projected pattern, the control unit operating the illumination unit to obtain at least one of the following: increase a number of the projected light beams at least within at least one selected area of said region of interest, and project an additional coded pattern onto at least one selected area of said region of interest, to enable processing of the optimal image data to calculate cloud of points related data and create a model of the region of interest.
19. The system of Claim 18, wherein the illumination unit comprises at least one light emitter and an array of controllably switchable and angularly displaceable mirrors in the optical path of light produced by said at least one light emitter.
20. The system of Claim 18, wherein the illumination unit comprises an array of separate light emitters.
21. The system of Claim 19, comprising at least one mask in the optical path of light coming from the mirrors, thereby further splitting the light into a higher number of the separate light beams.
22. The system of Claim 20, comprising at least one mask in the optical path of light coming from the light emitters, thereby further splitting the light into a higher number of the separate light beams.
23. The system of any one of Claims 18 to 22, wherein the illumination unit comprises at least one light emitting diode (LED).
24. The system of any one of Claims 18 to 22, wherein the illumination unit comprises at least one diode laser.
25. The system of any one of Claims 18 to 24, configured to produce a reference pattern within said main pattern, the reference pattern including a few spaced apart reference marks formed by a predetermined arrangement of some of said light beams.
26. The system of any one of Claims 18 to 25, wherein the detection unit comprises two pixel-array light detectors arranged relative to each other and to the region of interest to cover the region of interest with a desired field of view of the detectors.
27. The system of any one of Claims 18 to 26, wherein the illumination unit is configured and operable to produce light of different wavelengths.
28. The system of any one of Claims 18 to 27, wherein the illumination unit is configured and operable to produce light of varying polarization.
29. The system of any one of Claims 18 to 28, configured to adapt the imaging to a required resolution, illumination conditions, and a distance from the system to the region of interest.
30. The system of Claim 29, wherein the control unit is adapted to operate the illumination and detection units to successively illuminate the region of interest and detect the image data, with at least one varying parameter from the following: a number of the light beams at least within one or more selected areas of the region of interest, intensity of detected illumination, polarization of illuminating light, wavelength of illuminating light, and the distance to the region of interest, the control unit being adapted to continuously analyze the successive image data until detecting an optimal condition of substantially no missing light points in the image and no non-matched points in the multiple images, within a certain predefined threshold.
31. The system of Claim 30, configured for controllably varying intensity of emitted light.
32. The system of Claims 19 and 30, configured for varying the simultaneous switching of a local array formed by a number of said mirrors such that said local array presents a source of light formed by the number of separate light sources.
33. The system of Claim 30, configured for controllably varying an exposure time of the light detection unit.
34. The system of Claim 30, wherein the control unit is pre-stored with data indicative of the nature of the region of interest to be modeled and of environmental conditions, and operates the illumination unit to produce the illumination intensity variation based on said data.
PCT/IL2006/000461 2005-04-12 2006-04-11 Real-time imaging method and system using structured light WO2006109308A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IL16797505 2005-04-12
IL167975 2005-04-12

Publications (1)

Publication Number Publication Date
WO2006109308A1 (en) 2006-10-19

Family

ID=36646015

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2006/000461 WO2006109308A1 (en) 2005-04-12 2006-04-11 Real-time imaging method and system using structured light

Country Status (1)

Country Link
WO (1) WO2006109308A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE4115445A1 (en) * 1990-07-05 1992-01-23 Reinhard Malz Recording three=dimensional image of object - using active triangulation principle and object marker projector synchronised to video camera
DE19633686A1 (en) * 1996-08-12 1998-02-19 Fraunhofer Ges Forschung Distances and-or spatial coordinates measuring device for moving objects
WO2004044525A2 (en) * 2002-11-11 2004-05-27 Qinetiq Limited Ranging apparatus

Cited By (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2265895A4 (en) * 2008-04-01 2016-07-27 Perceptron Inc Contour sensor incorporating mems mirrors
JP2011521231A (en) * 2008-05-16 2011-07-21 ロッキード・マーチン・コーポレーション Accurate image acquisition on structured light systems for optical measurement of shape and position
WO2009140461A1 (en) * 2008-05-16 2009-11-19 Lockheed Martin Corporation Accurate image acquisition for structured-light system for optical shape and positional measurements
US8220335B2 (en) 2008-05-16 2012-07-17 Lockheed Martin Corporation Accurate image acquisition for structured-light system for optical shape and positional measurements
DE102009059794A1 (en) * 2009-12-21 2011-06-22 Siemens Aktiengesellschaft, 80333 Camera projector system and a method for triggering a camera
US8670029B2 (en) 2010-06-16 2014-03-11 Microsoft Corporation Depth camera illuminator with superluminescent light-emitting diode
CN102231037A (en) * 2010-06-16 2011-11-02 微软公司 Illuminator for depth camera with superradiation light-emitting diode
EP2772676A1 (en) * 2011-05-18 2014-09-03 Sick Ag 3D camera and method for three dimensional surveillance of a surveillance area
US9228697B2 (en) 2011-05-18 2016-01-05 Sick Ag 3D-camera and method for the three-dimensional monitoring of a monitoring area
WO2013156530A1 (en) * 2012-04-18 2013-10-24 3Shape A/S 3d scanner using merged partial images
US9557167B2 (en) 2014-01-17 2017-01-31 Canon Kabushiki Kaisha Three-dimensional-shape measurement apparatus, three-dimensional-shape measurement method, and non-transitory computer-readable storage medium
GB2522551A (en) * 2014-01-17 2015-07-29 Canon Kk Three-dimensional-shape measurement apparatus, three-dimensional-shape measurement method, and non-transitory computer-readable storage medium
GB2522551B (en) * 2014-01-17 2018-06-27 Canon Kk Three-dimensional-shape measurement apparatus, three-dimensional-shape measurement method, and non-transitory computer-readable storage medium
CN107430194A (en) * 2015-01-30 2017-12-01 阿德科尔公司 Optical three-dimensional scanning instrument and its application method
WO2016123618A1 (en) * 2015-01-30 2016-08-04 Adcole Corporation Optical three dimensional scanners and methods of use thereof
US10048064B2 (en) 2015-01-30 2018-08-14 Adcole Corporation Optical three dimensional scanners and methods of use thereof
CN106023247A (en) * 2016-05-05 2016-10-12 南通职业大学 Light stripe center extraction tracking method based on space-time tracking
CN106023247B (en) * 2016-05-05 2019-06-14 南通职业大学 A kind of Light stripes center extraction tracking based on space-time tracking
WO2018044265A1 (en) * 2016-08-30 2018-03-08 Empire Technology Development Llc Joint attention estimation using structured light
US20190182456A1 (en) * 2016-08-30 2019-06-13 Xinova, LLC Joint attention estimation using structured light
WO2019127539A1 (en) * 2017-12-29 2019-07-04 Shenzhen United Imaging Healthcare Co., Ltd. Systems and methods for determining region of interest in medical imaging
US11730396B2 (en) 2017-12-29 2023-08-22 Shanghai United Imaging Healthcare Co., Ltd. Systems and methods for patient positioning
US11532083B2 (en) 2017-12-29 2022-12-20 Shanghai United Imaging Healthcare Co., Ltd. Systems and methods for determining a region of interest in medical imaging
US10789498B2 (en) 2017-12-29 2020-09-29 Shanghai United Imaging Healthcare Co., Ltd. Systems and methods for patient positioning
US10825170B2 (en) 2017-12-29 2020-11-03 Shanghai United Imaging Healthcare Co., Ltd. Systems and methods for determining a region of interest in medical imaging
US11295153B2 (en) 2017-12-29 2022-04-05 Shanghai United Imaging Healthcare Co., Ltd. Systems and methods for patient positioning
CN111788623A (en) * 2018-01-06 2020-10-16 凯尔Os公司 Intelligent mirror system and using method thereof
US10935376B2 (en) 2018-03-30 2021-03-02 Koninklijke Philips N.V. System and method for 3D scanning
WO2019185624A1 (en) * 2018-03-30 2019-10-03 Koninklijke Philips N.V. System and method for 3d scanning
US11029145B2 (en) 2021-06-08 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Projection device and projection method
DE102018208417A1 (en) * 2018-05-28 2019-11-28 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Projection device and projection method

Similar Documents

Publication Publication Date Title
WO2006109308A1 (en) Real-time imaging method and system using structured light
CN104634276B (en) Three-dimension measuring system, capture apparatus and method, depth computing method and equipment
EP3650807B1 (en) Handheld large-scale three-dimensional measurement scanner system simultaneously having photography measurement and three-dimensional scanning functions
US10401143B2 (en) Method for optically measuring three-dimensional coordinates and controlling a three-dimensional measuring device
US10088296B2 (en) Method for optically measuring three-dimensional coordinates and calibration of a three-dimensional measuring device
CN110383343B (en) Inconsistency detection system, mixed reality system, program, and inconsistency detection method
US9041775B2 (en) Apparatus and system for interfacing with computers and other electronic devices through gestures by using depth sensing and methods of use
US7711179B2 (en) Hand held portable three dimensional scanner
EP1720131B1 (en) An augmented reality system with real marker object identification
US6664531B2 (en) Combined stereovision, color 3D digitizing and motion capture system
US20160134860A1 (en) Multiple template improved 3d modeling of imaged objects using camera position and pose to obtain accuracy
CN103003713A (en) A laser scanner or laser tracker having a projector
US20030202120A1 (en) Virtual lighting system
KR101824888B1 (en) Three dimensional shape measuring apparatus and measuring methode thereof
CN104634277B (en) Capture apparatus and method, three-dimension measuring system, depth computing method and equipment
US20030067537A1 (en) System and method for three-dimensional data acquisition
CN105190703A (en) Using photometric stereo for 3D environment modeling
JP2006505784A (en) Ranging device
EP3069100B1 (en) 3d mapping device
CN109425306A (en) Depth measurement component
JP2011254411A (en) Video projection system and video projection program
CN108154126A (en) Iris imaging system and method
WO2016040271A1 (en) Method for optically measuring three-dimensional coordinates and controlling a three-dimensional measuring device
US10685448B2 (en) Optical module and a method for objects' tracking under poor light conditions
CN207650834U (en) Face information measurement assembly

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase (Ref country code: DE)
WWW Wipo information: withdrawn in national office (Country of ref document: DE)
NENP Non-entry into the national phase (Ref country code: RU)
WWW Wipo information: withdrawn in national office (Country of ref document: RU)
122 Ep: pct application non-entry in european phase (Ref document number: 06728262; Country of ref document: EP; Kind code of ref document: A1)
WWW Wipo information: withdrawn in national office (Ref document number: 6728262; Country of ref document: EP)