WO2005095887A1 - Projection device and three-dimensional shape detection device - Google Patents
Projection device and three-dimensional shape detection device
- Publication number
- WO2005095887A1 (PCT/JP2005/005862)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- projection
- light
- semiconductor light
- image
- projection device
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/12—Picture reproducers
- H04N9/31—Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
- H04N9/3141—Constructional details thereof
- H04N9/317—Convergence or focusing systems
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F21—LIGHTING
- F21K—NON-ELECTRIC LIGHT SOURCES USING LUMINESCENCE; LIGHT SOURCES USING ELECTROCHEMILUMINESCENCE; LIGHT SOURCES USING CHARGES OF COMBUSTIBLE MATERIAL; LIGHT SOURCES USING SEMICONDUCTOR DEVICES AS LIGHT-GENERATING ELEMENTS; LIGHT SOURCES NOT OTHERWISE PROVIDED FOR
- F21K9/00—Light sources using semiconductor devices as light-generating elements, e.g. using light-emitting diodes [LED] or lasers
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/24—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
- G01B11/25—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B21/00—Projectors or projection-type viewers; Accessories therefor
- G03B21/005—Projectors using an electronic spatial light modulator but not peculiar thereto
- G03B21/006—Projectors using an electronic spatial light modulator but not peculiar thereto using LCD's
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/521—Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/74—Projection arrangements for image reproduction, e.g. using eidophor
- H04N5/7416—Projection arrangements for image reproduction, e.g. using eidophor involving the use of a spatial light modulator, e.g. a light valve, controlled by a video signal
Definitions
- the present invention relates to a projection device and a three-dimensional shape detection device that can suppress uneven illuminance of the light emitted from semiconductor light emitting elements onto a spatial modulation element and project a high-quality image on a projection surface.
- a projection apparatus configured to project an arbitrary image on a projection surface by spatially modulating, with a spatial modulation element, light emitted from a light source.
- such a projection apparatus condenses the light emitted from the light source with a condensing optical system, performs spatial modulation according to an image signal on the condensed light with the spatial modulation element, and projects the spatially modulated light as image signal light onto a projection surface via a projection means.
- as the light source of the projection device, a xenon lamp, a halogen lamp, a metal halide lamp, or the like has conventionally been used. Such a light source has a short life and cannot be on-off modulated, so it must be operated continuously in the projection device; as a result, a conventional projection device has the problem of consuming a large amount of power.
- In Japanese Patent Application Laid-Open No. 11-32278 (hereinafter referred to as Document 1) and Japanese Patent Application Laid-Open No. 11-231316 (hereinafter referred to as Document 2), a technology using a semiconductor light emitting element, specifically a light emitting diode (hereinafter referred to as "LED"), as the light source of a projection device has been disclosed.
- the life of the light source can be extended and the power consumption can be reduced.
- since the output light of an LED has strong directivity and the light-emitting area is small, the light collection efficiency is increased, and as a result the light use efficiency can be improved.
- the present invention has been made to solve the problems described above. That is, an object of the present invention is to provide a projection device and a three-dimensional shape detection device capable of suppressing unevenness in the illuminance of light emitted from semiconductor light emitting elements to a spatial modulation element, and of projecting a high-quality image on a projection surface.
- to achieve this object, a projection apparatus is provided which includes a plurality of semiconductor light emitting elements that emit light, a spatial modulation element that performs spatial modulation on the light emitted by the semiconductor light emitting elements and outputs image signal light, and a projection means for projecting the image signal light output from the spatial modulation element onto a projection surface.
- the plurality of semiconductor light emitting elements form a semiconductor light emitting element array in which at least two or more semiconductor light emitting elements are linearly arranged at a first pitch in a first direction on a substrate supporting the plurality of semiconductor light emitting elements.
- the first pitch is set to be equal to or less than the full width at half maximum of the illuminance distribution formed on the spatial modulation element by the light emitted from one of the semiconductor light emitting elements.
- since the first pitch is equal to or less than the full width at half maximum of the illuminance distribution formed on the spatial modulation element by the light emitted from one semiconductor light emitting element, a decrease in illuminance between adjacent semiconductor light emitting elements can be suppressed. Illuminance unevenness on the spatial modulation element is therefore suppressed, and substantially uniform light can be applied to the spatial modulation element. As a result, a high-quality image can be projected on the projection surface.
- according to another aspect, a projection device is provided which includes a plurality of semiconductor light emitting elements that emit light, a spatial modulation element that performs spatial modulation on the light emitted by the semiconductor light emitting elements and outputs image signal light, and a projection unit that projects the image signal light output from the spatial modulation element toward a projection surface.
- in this device, the plurality of semiconductor light emitting elements are arranged in a staggered manner on a substrate that supports them. With such a configuration, a decrease in illuminance between adjacent semiconductor light emitting elements in one row can be compensated by the semiconductor light emitting elements in the front or rear row. Illuminance unevenness on the spatial modulation element can therefore be suppressed, substantially uniform light can be applied to the spatial modulation element, and a high-quality projected image can be obtained.
- further, a three-dimensional shape detection device is provided which includes a projection device having the above-described configuration, an imaging device arranged in the projection direction in which the image signal light output from the projection means of the projection device is projected and capturing an image of an object onto which pattern light is projected as the image signal light, and a three-dimensional shape detecting means for detecting the three-dimensional shape of the object based on the image captured by the imaging device.
- FIG. 1 is an external perspective view of an image input / output device.
- FIG. 2 is a diagram showing an internal configuration of an imaging head.
- FIG. 3 (a) is an enlarged view of an image projection unit
- FIG. 3 (b) is a plan view of a light source lens
- FIG. 3 (c) is a front view of a projection LCD.
- FIGS. 4 (a) to 4 (c) are diagrams for explaining the arrangement of the LED array.
- FIG. 5 is an electrical block diagram of the image input / output device.
- FIG. 6 is a flowchart of a main process.
- FIG. 7 is a flowchart of a digital camera process.
- FIG. 8 is a flowchart of a webcam process.
- FIG. 9 is a flowchart of a projection process.
- FIG. 10 is a flowchart of stereoscopic image processing.
- FIG. 11 (a) is a diagram for explaining the principle of the spatial code method
- FIG. 11 (b) is a diagram showing a mask pattern (gray code) different from FIG. 11 (a).
- FIG. 12 (a) is a flowchart of a three-dimensional shape detection process.
- FIG. 12B is a flowchart of the imaging process.
- FIG. 12C is a flowchart of the three-dimensional measurement processing.
- FIG. 13 is a diagram for explaining an outline of a code boundary coordinate detection process.
- FIG. 14 is a flowchart of a code boundary coordinate detection process.
- FIG. 15 is a flowchart of a process for obtaining code boundary coordinates with sub-pixel accuracy.
- FIG. 16 is a flowchart of a process for calculating a CCDY value of a boundary for a luminance image having a mask pattern number of PatID [i].
- FIGS. 17 (a) to 17 (c) are diagrams for explaining a lens aberration correction process.
- FIG. 18 (a) and FIG. 18 (b) are diagrams for explaining a method of calculating three-dimensional coordinates in a three-dimensional space from coordinates in a CCD space.
- FIG. 19 is a flowchart of flattened image processing.
- FIG. 20 (a) to FIG. 20 (c) are diagrams for explaining a document attitude calculation process.
- FIG. 21 is a flowchart of a plane conversion process.
- FIG. 22 (a) is a diagram for explaining the outline of a curvature calculation process
- FIG. 22 (b) is a diagram showing a flattened image flattened by a plane conversion process.
- FIG. 23 (a) is a side view showing another example of the light source lens
- FIG. 23 (b) is a plan view showing the light source lens of FIG. 23 (a).
- FIG. 24 (a) is a perspective view showing a state in which a light source lens is fixed
- FIG. 24 (b) is a partial cross-sectional view thereof.
- FIG. 25 is a diagram showing another example of the pattern light projected on the subject.
- FIG. 1 is an external perspective view of the image input / output device 1. The projection device and the three-dimensional shape detection device of the present invention are included in the image input / output device 1.
- the image input / output device 1 provides various modes such as a digital camera mode functioning as a digital camera, a webcam mode functioning as a web camera, a stereoscopic image mode for detecting a three-dimensional shape to obtain a stereoscopic image, and a flattened image mode for acquiring a flattened image obtained by flattening a curved document.
- in particular, in the stereoscopic image mode or the flattened image mode, striped pattern light in which light and dark portions are alternately arranged is projected from an image projection unit 13 (described later) onto the document P as a subject, as shown in FIG. 1, in order to detect its three-dimensional shape.
- the image input / output device 1 includes an imaging head 2 formed in a substantially box shape, a pipe-shaped arm member 3 having one end connected to the imaging head 2, and a base 4 connected to the other end of the arm member 3 and formed in a substantially L-shape in plan view.
- the imaging head 2 is a case in which an image projection unit 13 and an image imaging unit 14 described later are included.
- a cylindrical lens barrel 5 is arranged at the center, a finder 6 is arranged diagonally above the lens barrel 5, and a flash 7 is arranged on the opposite side of the finder 6.
- a part of a lens of the imaging optical system 21, which is a part of the image capturing unit 14 described later, is exposed on the outer surface, and an image of the subject is input through this exposed portion of the imaging optical system 21.
- the lens barrel 5 is formed so as to protrude from the front of the imaging head 2 and holds therein the projection optical system 20, which is a part of the image projection unit 13.
- the lens barrel 5 holds the projection optical system 20 so that the whole can be moved for focus adjustment, and protects the projection optical system 20 from damage.
- a part of a lens of the projection optical system 20, which is a part of the image projection unit 13, is exposed on the outer surface at the end face of the lens barrel 5, and the image signal light is projected toward the projection surface through this exposed portion.
- the finder 6 is composed of an optical lens disposed through the rear and front faces of the imaging head 2. When the user looks into the back surface of the imaging head 2, a range that substantially matches the range where the imaging optical system 21 forms an image on the CCD 22 can be seen.
- the flash 7 is, for example, a light source for supplementing a required amount of light in the digital camera mode, and is configured by a discharge tube filled with xenon gas. Therefore, it can be used repeatedly by discharging from a capacitor (not shown) built in the imaging head 2.
- a release button 8 is arranged on the near side, a mode switching switch 9 is arranged behind the release button 8, and a monitor LCD 10 is arranged on the opposite side of the mode switching switch 9.
- the release button 8 is composed of a two-stage push button switch that can be set to two states, a "half-pressed state” and a "fully-pressed state.”
- the state of the release button 8 is managed by a processor 15 described later.
- the well-known auto focus (AF) and automatic exposure (AE) functions are activated when the release button 8 is "half-pressed", and the focus, aperture, and shutter speed are adjusted. When the release button 8 is "fully pressed", imaging is performed.
- the mode switching switch 9 is a switch for setting various modes such as a digital camera mode, a webcam mode, a stereoscopic image mode, a flattened image mode, and an off mode.
- the state of the mode switching switch 9 is managed by the processor 15, and the processing of each mode is executed when the state of the mode switching switch 9 is detected by the processor 15.
- the monitor LCD 10 is configured by a liquid crystal display (Liquid Crystal Display), and receives an image signal from the processor 15 to display an image to a user.
- the monitor LCD 10 displays, for example, captured images in the digital camera mode or webcam mode, a three-dimensional shape detection result image in the stereoscopic image mode, and a flattened image in the flattened image mode.
- An antenna 11 as an RF (wireless) interface and a connecting member 12 for connecting the imaging head 2 and the arm member 3 are arranged above the side surface of the imaging head 2.
- the antenna 11 is used to transmit captured image data acquired in a digital camera mode, stereoscopic image data acquired in a stereoscopic image mode, and the like via an RF driver 24 to be described later to an external interface by wireless communication.
- the connecting member 12 is formed in a ring shape, and a female screw is formed on an inner peripheral surface thereof.
- the connecting member 12 is rotatably fixed to a side surface of the imaging head 2.
- a male screw is formed at one end of the arm member 3.
- the arm member 3 is for holding the imaging head 2 at a predetermined imaging position so as to be changeable, and is constituted by a bellows-like pipe that can be bent into an arbitrary shape. Therefore, the imaging head 2 can be directed to an arbitrary position by the arm member 3.
- the base 4 is mounted on a mounting table such as a desk, and supports the imaging head 2 and the arm member 3. Since the base 4 is formed in a substantially L-shape in plan view, the imaging head 2 and the like can be stably supported.
- the base 4 and the arm member 3 are detachably connected to each other, which makes it easy to carry the image input / output device 1 and also makes it possible to store the image input / output device 1 in a small space.
- FIG. 2 is a diagram schematically showing an internal configuration of the imaging head 2.
- the imaging head 2 mainly includes an image projection unit 13, an image imaging unit 14, and a processor 15.
- the image projection unit 13 is a unit for projecting an arbitrary projection image on a projection surface.
- the image projection unit 13 includes a substrate 16, a plurality of LEDs 17 (hereinafter collectively referred to as “LED array 17 A”), a light source lens 18, a projection LCD 19, and a projection optical system 20 along the projection direction. It has.
- the image projection unit 13 will be described later in detail with reference to FIGS. 3 (a) to 3 (c).
- the image capturing section 14 is a unit for capturing an image of a document P as a subject.
- the image capturing section 14 includes an image capturing optical system 21 and a CCD 22 along the light input direction.
- the imaging optical system 21 includes a plurality of lenses.
- the imaging optical system 21 has a well-known autofocus function, and automatically adjusts the focal length and aperture to form an image of light from the outside on the CCD 22.
- the CCD 22 has photoelectric conversion elements such as CCD (Charge Coupled Device) elements arranged in a matrix.
- the CCD 22 generates a signal corresponding to the color and intensity of light of an image formed on the surface of the CCD 22 via the imaging optical system 21, converts the signal into digital data, and outputs the digital data to the processor 15.
- a flash 7, a release button 8, a mode switching switch 9, an external memory 27, and a cache memory 28 are electrically connected to the processor 15. Further, the processor 15 is connected to the monitor LCD 10 via the monitor LCD driver 23, to the antenna 11 via the RF driver 24, to the battery 26 via the power supply interface 25, to the LED array 17A via the light source driver 29, to the projection LCD 19 via the projection LCD driver 30, and to the CCD 22 via the CCD interface 31.
- the external memory 27 is a detachable flash ROM, and stores captured images and three-dimensional information acquired in the digital camera mode, the webcam mode, and the stereoscopic image mode.
- an SD card, a CompactFlash (registered trademark) card, or the like can be used as the external memory 27.
- the cache memory 28 is a high-speed storage device. For example, in the digital camera mode, the captured image is transferred to the cache memory 28 at high speed, processed by the processor 15, and then stored in the external memory 27. Specifically, SDRAM, DDR RAM, or the like can be used.
- the power supply interface 25, the light source driver 29, the projection LCD driver 30, and the CCD interface 31 are each configured by an IC (Integrated Circuit), and control the battery 26, the LED array 17A, the projection LCD 19, and the CCD 22, respectively.
- FIG. 3A is an enlarged view of the image projection unit 13
- FIG. 3B is a plan view of the light source lens 18, and
- FIG. 3C shows an arrangement relationship between the projection LCD 19 and the CCD 22.
- the image projection unit 13 includes the substrate 16, the LED array 17A, the light source lens 18, the projection LCD 19, and the projection optical system 20 along the projection direction.
- the substrate 16 is for mounting the LED array 17A and for performing electrical wiring with the LED array 17A.
- as the substrate 16, a substrate formed by applying an insulating resin to an aluminum base plate and forming a pattern thereon, or a substrate having a single-layer or multilayer structure with a glass epoxy core, can be used.
- the LED array 17 A is a light source that emits radial light toward the projection LCD 19.
- the LED array 17A is composed of a plurality of LEDs 17 (light emitting diodes) arranged in a staggered manner on the substrate 16.
- the plurality of LEDs 17 are bonded to the substrate 16 via a silver paste, and are electrically connected to the substrate 16 via bonding wires.
- the efficiency of converting electricity into light is higher than when an incandescent light bulb, a halogen lamp, or the like is used as the light source.
- generation of infrared rays and ultraviolet rays can be suppressed. Therefore, according to the present embodiment, the light source can be driven with low power consumption, and power saving and long life can be achieved. Further, the temperature rise of the device can be reduced.
- since the LED 17 generates far less heat rays than a halogen lamp or the like, a resin lens can be used for the light source lens 18 and the projection optical system 20 described later. Therefore, the light source lens 18 and the projection optical system 20 can be configured more inexpensively and lightly than when glass lenses are employed.
- Each LED 17 constituting the LED array 17A emits the same emission color.
- each of the LEDs 17 uses the four elements Al, In, Ga, and P as materials and emits amber light. There is therefore no need to consider the correction of chromatic aberration that would occur when a plurality of emission colors are used, and it is not necessary to use an achromatic lens as the projection optical system 20. This makes it possible to provide a projection means with a simple lens surface configuration and inexpensive materials.
- the LED array 17A is composed of 59 LEDs 17, and each LED 17 is driven at 50mW (20 mA, 2.5V). Therefore, all 59 LEDs 17 are driven with approximately 3W of power consumption.
- the luminous flux of the light emitted from each LED 17 and projected from the projection optical system 20 through the light source lens 18 and the projection LCD 19 is set to a brightness of about 25 ANSI lumens at full illumination.
- the light source lens 18 is a lens as a condensing optical system for condensing light emitted radially from the LED array 17A, and is made of an optical resin represented by acrylic.
- the light source lens 18 includes convex lens portions 18a, each protruding toward the projection LCD 19 at a position facing a corresponding LED 17 of the LED array 17A, a base portion 18b that supports the lens portions 18a, an epoxy sealing material 18c that fills the opening inside the base portion 18b so as to seal the LEDs 17, enclose the LED array 17A, and bond the substrate 16 to the light source lens 18, and positioning pins 18d that project from the base portion 18b toward the substrate 16 and connect the light source lens 18 and the substrate 16 to each other.
- the light source lens 18 is fixed on the substrate 16 by inserting the positioning pins 18d into elongated holes formed in the substrate 16 while enclosing the LED array 17A inside the opening.
- with this arrangement, the light source lens 18 can be arranged in a small space. Also, since the substrate 16 has the function of supporting the light source lens 18 in addition to the function of mounting the LED array 17A, it is not necessary to separately provide a component for supporting the light source lens 18, and the number of parts can be reduced.
- Each lens portion 18a is arranged at a position facing each LED 17 of the LED array 17A in a one-to-one relationship.
- the light emitted radially from each LED 17 is efficiently condensed by the lens portion 18a facing that LED 17 and is irradiated onto the projection LCD 19 as highly directional radiation light, as shown in FIG. 3 (a).
- the directivity is increased in this way because, by making the light incident on the projection LCD 19 substantially perpendicularly, in-plane transmittance unevenness can be suppressed.
- another reason for increasing the directivity is that the projection optical system 20 has telecentric characteristics and its incident NA is about 0.1, so that only light within about ±5° of the vertical can pass through the internal aperture.
- it is therefore important that the emission angle of the light from the LED 17 is set perpendicular to the projection LCD 19 and that almost all of the luminous flux falls within ±5°. This is because, when light deviating from the perpendicular to the projection LCD 19 is incident, the transmittance changes depending on the incident angle owing to the optical rotation of the liquid crystal, which causes transmittance unevenness.
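- the relation between the stated numerical aperture and the acceptance angle can be checked with a short calculation (a sketch only; the exact aperture geometry of the projection optical system 20 is not specified here):

```python
import math

# Half-angle of the acceptance cone implied by an incident-side NA of about 0.1 (in air, n = 1).
na = 0.1
half_angle_deg = math.degrees(math.asin(na))
print(f"acceptance half-angle ~ {half_angle_deg:.1f} deg")  # ~5.7 deg, consistent with the roughly +/-5 deg above
```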
- the projection LCD 19 is a spatial modulation element that performs spatial modulation on light condensed through the light source lens 18 and outputs image signal light to the projection optical system 20.
- the projection LCD 19 is composed of a plate-like liquid crystal display (Liquid Crystal Display) having different vertical and horizontal sizes.
- the pixels constituting the projection LCD 19 are arranged such that one pixel column extends linearly along the longitudinal direction of the projection LCD 19 and another pixel column, shifted by a predetermined distance in the longitudinal direction of the projection LCD 19, is arranged alternately in parallel with it.
- in FIG. 3 (c), the front side of the drawing corresponds to the front side of the imaging head 2, and light is emitted toward the projection LCD 19 from the back side of the drawing.
- the light from the subject travels from the front side of the drawing toward the back side, and the subject image is formed on the CCD 22.
- the projection pattern can be controlled with a fine pitch, and the three-dimensional shape can be detected with high accuracy by increasing the resolution.
- in the stereoscopic image mode or flattened image mode described later, when striped pattern light in which light and dark are alternately arranged is projected toward an object to detect its three-dimensional shape, making the stripe direction coincide with the short direction of the projection LCD 19 allows the boundary between light and dark to be controlled at a 1/2 pitch. Therefore, the three-dimensional shape can be detected with high accuracy.
- the projection LCD 19 and the CCD 22 are arranged in the relationship shown in FIG. 3 (c). More specifically, since the wide surface of the projection LCD 19 and the wide surface of the CCD 22 face in substantially the same direction, when an image projected from the projection LCD 19 onto the projection surface is formed on the CCD 22, the projected image can be formed as it is without being bent by a mirror or the like.
- the CCD 22 is disposed on the longitudinal direction side of the projection LCD 19 (the direction in which the pixel columns extend).
- the inclination between the CCD 22 and the subject can likewise be controlled at a 1/2 pitch, which makes it possible to detect three-dimensional shapes with high accuracy.
- the projection optical system 20 includes a plurality of lenses that project the image signal light that has passed through the projection LCD 19 toward a projection surface.
- the projection optical system 20 is composed of telecentric lenses made of a combination of glass and resin. Telecentric refers to a configuration in which the principal ray passing through the projection optical system 20 is parallel to the optical axis in the space on the incident side, and the position of the exit pupil is at infinity. By making the system telecentric in this way, only the light that passes through the projection LCD 19 within ±5° of the vertical is projected, as described above, so that the image quality can be improved.
- FIGS. 4 (a) to 4 (c) are views for explaining the arrangement of the LED array 17A.
- FIG. 4 (a) is a diagram showing the illuminance distribution of light passing through the light source lens 18, FIG. 4 (b) is a plan view showing the arrangement of the LED array 17A, and FIG. 4 (c) is a diagram showing the composite illuminance distribution obtained with that arrangement.
- the light that has passed through the light source lens 18 is designed to reach the surface of the projection LCD 19 with an illuminance distribution as shown in FIG. 4 (a).
- the plurality of LEDs 17 are arranged in a staggered pattern on the substrate 16. Specifically, the LEDs 17 are arranged in rows at a pitch d, the rows are arranged in parallel at a pitch of (√3/2)d, and every other row is shifted by (1/2)d in the row direction with respect to the adjacent rows.
- as a result, the distance between any one LED 17 and the LEDs 17 surrounding it is d (that is, the LEDs 17 are arranged in a triangular lattice).
- the length of d is determined so as to be equal to or less than the full width at half maximum (FWHM) of the illuminance distribution formed on the projection LCD 19 by the light emitted from one of the LEDs 17.
- as a result, the combined illuminance distribution of the light that reaches the surface of the projection LCD 19 through the light source lens 18 becomes a substantially flat profile containing only a small ripple, as shown in FIG. 4 (c), and substantially uniform light can be applied to the surface of the projection LCD 19. Therefore, illuminance unevenness on the projection LCD 19 can be suppressed, and as a result a high-quality image can be projected.
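- the effect of the pitch condition can be illustrated with a short numerical sketch (the Gaussian footprint, the FWHM value, and the number of rows below are illustrative assumptions, not values taken from this embodiment):

```python
import numpy as np

# Illustrative check: when the LED pitch d is at or below the FWHM of a single LED's
# illuminance footprint on the projection LCD, the summed illuminance stays nearly flat.
fwhm = 2.0                      # assumed FWHM of one LED's footprint (arbitrary units)
sigma = fwhm / 2.3548           # Gaussian sigma corresponding to that FWHM
d = fwhm                        # pitch equal to the FWHM (the limiting case of the condition)

# One row of LEDs along x plus the two neighbouring rows of a triangular lattice:
# rows are spaced (sqrt(3)/2)*d apart and every other row is shifted by d/2.
xs = np.arange(-10, 11) * d
rows = [(0.0, 0.0), ((np.sqrt(3) / 2) * d, d / 2), (-(np.sqrt(3) / 2) * d, d / 2)]

x = np.linspace(-5 * d, 5 * d, 1001)          # evaluation line across the LCD surface
illum = np.zeros_like(x)
for y_row, x_shift in rows:
    for xc in xs + x_shift:
        r2 = (x - xc) ** 2 + y_row ** 2
        illum += np.exp(-r2 / (2 * sigma ** 2))

ripple = (illum.max() - illum.min()) / illum.mean()
print(f"peak-to-valley ripple: {ripple * 100:.1f} % of the mean illuminance")
```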
- FIG. 5 is an electrical block diagram of the image input / output device 1. The description of the configuration already described above is omitted.
- the processor 15 includes a CPU 35, a ROM 36, and a RAM 37.
- the CPU 35 performs various kinds of processing by using the program stored in the ROM 36 and the RAM 37.
- the processing performed under the control of the CPU 35 includes detection of a pressing operation of the release button 8, capture of image data from the CCD 22, transfer and storage of the image data, detection of the state of the mode switch 9, and the like.
- the ROM 36 stores a camera control program 36a, a pattern light photographing program 36b, a luminance image generation program 36c, a code image generation program 36d, a code boundary extraction program 36e, a lens aberration correction program 36f, a triangulation calculation program 36g, a document attitude calculation program 36h, and a plane conversion program 36i.
- the camera control program 36a is a program relating to control of the entire image input / output device 1 including the main processing shown in FIG.
- the pattern light photographing program 36b is a program for capturing an image of the state in which pattern light is projected onto the document P and an image of the state in which no pattern light is projected, in order to detect the three-dimensional shape of the document P.
- the luminance image generation program 36c is a program that calculates the difference between the image with pattern light, captured while the pattern light is projected by the pattern light photographing program 36b, and the image without pattern light, captured while no pattern light is projected, and generates a luminance image of the projected pattern light.
- a plurality of types of pattern light are projected in time series and an image is captured for each pattern light; the difference between each of the plurality of captured images with pattern light and the image without pattern light is calculated, and a plurality of types of luminance images are generated accordingly.
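- a minimal sketch of this difference operation is shown below (the array representation and the use of floating-point subtraction are assumptions; the actual program 36c operates on the images stored in the RAM 37):

```python
import numpy as np

def make_luminance_images(images_with_pattern, image_without_pattern):
    """Generate one luminance image per projected pattern as the difference between
    the image captured with that pattern light and the image captured without pattern light."""
    base = image_without_pattern.astype(np.float32)
    luminance_images = []
    for img in images_with_pattern:
        diff = img.astype(np.float32) - base   # positive where the pattern's bright stripes fall
        luminance_images.append(diff)
    return luminance_images
```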
- the code image generation program 36d is a program that superimposes a plurality of luminance images generated by the luminance image generation program 36c and generates a code image in which a predetermined code is assigned to each pixel.
- the code boundary extraction program 36e is a program that determines the code boundary coordinates with sub-pixel accuracy using the code image generated by the code image generation program 36d and the luminance images generated by the luminance image generation program 36c.
- the lens aberration correction program 36f is a program for correcting the aberration of the imaging optical system 21 with respect to the code boundary coordinates obtained with sub-pixel accuracy by the code boundary extraction program 36e.
- the triangulation calculation program 36g is a program for calculating, from the boundary coordinates of the code corrected for aberration by the lens aberration correction program 36f, the three-dimensional coordinates in the real space related to the boundary coordinates.
- the document attitude calculation program 36h is a program for estimating and obtaining the three-dimensional shape of the document P from the three-dimensional coordinates calculated by the triangulation calculation program 36g.
- the plane conversion program 36i is a program that generates a flattened image, as if the document P had been captured from the front, based on the three-dimensional shape of the document P calculated by the document attitude calculation program 36h.
- the RAM 37 has, as storage areas, an image storage unit 37a with pattern light, an image storage unit 37b without pattern light, a luminance image storage unit 37c, a code image storage unit 37d, a code boundary coordinate storage unit 37e, an ID storage unit 37f, an aberration correction coordinate storage unit 37g, a three-dimensional coordinate storage unit 37h, a document attitude calculation result storage unit 37i, a plane conversion result storage unit 37j, a projection image storage unit 37k, and a working area 37l.
- the image storage unit 37a with pattern light stores the image with pattern light obtained by capturing the state in which the pattern light is projected onto the document P by the pattern light photographing program 36b.
- the image storage unit 37b without pattern light stores the image without pattern light obtained by capturing the document P in a state in which no pattern light is projected by the pattern light photographing program 36b.
- the luminance image storage unit 37c stores the luminance image generated by the luminance image generation program 36c.
- the code image storage unit 37d stores a code image generated by the code image generation program 36d.
- the code boundary coordinate storage unit 37e stores the boundary coordinates of each code obtained with the sub-pixel accuracy to be extracted by the code boundary extraction program 36e.
- the ID storage unit 37f stores an ID or the like assigned to a luminance image having a change in brightness at a pixel position having a boundary.
- the aberration correction coordinate storage unit 37g stores the boundary coordinates of the code whose aberration has been corrected by the lens aberration correction program 36f.
- the three-dimensional shape coordinate storage unit 37h stores the three-dimensional coordinates of the real space calculated by the triangulation calculation program 36g.
- the document orientation calculation result storage unit 37i stores parameters relating to the three-dimensional shape of the document P calculated by the document orientation calculation program 36h.
- the plane conversion result storage unit 37j stores the plane conversion result generated by the plane conversion program 36i.
- the projection image storage section 37k stores image information projected from the image projection section 13.
- the working area 37l stores data temporarily used for calculations by the CPU 35.
- FIG. 6 is a flowchart of the main process executed under the control of the CPU 35. Details of the digital camera processing (S605), webcam processing (S607), stereoscopic image processing (S609), and flattened image processing (S611) in the main processing will be described later.
- a key scan for determining the state of the mode switching switch 9 is performed (S603), and it is determined whether or not the setting of the mode switching switch 9 is the digital camera mode (S604). If it is the digital camera mode (S604: Yes), the process proceeds to the digital camera process described below (S605). On the other hand, if the mode is not the digital camera mode (S604: No), it is determined whether or not the setting of the mode switching switch 9 is the webcam mode (S606). If it is the webcam mode (S606: Yes), the process shifts to the webcam process described later (S607).
- if the mode is not the webcam mode (S606: No), it is determined whether or not the mode switching switch 9 is set to the stereoscopic image mode (S608); if so (S608: Yes), the process proceeds to the stereoscopic image processing described later (S609). Otherwise, it is determined whether or not the mode switching switch 9 is set to the flattened image mode (S610). If it is the flattened image mode (S610: Yes), the process proceeds to the flattened image processing described below (S611).
- if it is determined in step S612 that the mode is the off mode (S612: Yes), the process ends.
- FIG. 7 is a flowchart of the digital camera process (S605 in FIG. 6).
- the digital camera process is a process of acquiring an image captured by the image capturing unit 14.
- a high resolution setting signal is transmitted to the CCD 22 (S701).
- a high quality captured image can be provided to the user.
- next, a finder image (an image in the range visible through the finder 6) is displayed on the monitor LCD 10 (S702). The user can therefore confirm the captured image (imaging range) before actual imaging by looking at the image displayed on the monitor LCD 10, without looking into the finder 6.
- the release button 8 is scanned (S703a), and it is determined whether or not the release button 8 is half-pressed (S703b). If it is half-pressed (S703b: Yes), the auto focus (AF) and automatic exposure (AE) functions are activated, and the focus, aperture, and shutter speed are adjusted (S703c). If it is not half-pressed (S703b: No), the processing from S703a is repeated.
- the release button 8 is scanned again (S703d), and it is determined whether or not the release button 8 is fully pressed (S703e). If it is fully pressed (S703e: Yes), it is determined whether or not the flash mode is set (S704). If the flash mode is set (S704: Yes), the flash 7 is fired (S705) and shooting is performed (S706). If the mode is not the flash mode (S704: No), shooting is performed without firing the flash 7 (S706). If it is determined in S703e that the button has not been fully pressed (S703e: No), the processing from S703a is repeated.
- the captured image is transferred from the CCD 22 to the cache memory 28 (S707), and the captured image stored in the cache memory 28 is displayed on the monitor LCD 10 (S708).
- the captured image can be displayed on the monitor LCD 10 at a higher speed than when the captured image is transferred to the main memory.
- the captured image is stored in the external memory 27 (S709).
- FIG. 8 is a flowchart of the webcam process (S607 in FIG. 6).
- the webcam process is a process of transmitting a captured image (including a still image and a moving image) captured by the image capturing unit 14 to an external network. In this embodiment, it is assumed that a moving image is transmitted to the external network as the captured image.
- a low-resolution setting signal is transmitted to the CCD 22 (S801), the well-known auto focus and automatic exposure functions are activated, and after adjusting the focus, aperture, and shutter speed (S802). Then, shooting is started (S803).
- next, the captured image is displayed on the monitor LCD 10 (S804), the finder image is stored in the projection image storage unit 37k (S805), and a projection process described later is performed (S806), whereby the image stored in the projection image storage unit 37k is projected on the projection plane.
- the captured image is transferred from the CCD 22 to the cache memory 28 (S807), and the captured image transferred to the cache memory 28 is transmitted to the external network via the RF interface (S808).
- FIG. 9 is a flowchart of the projection process (S806 in FIG. 8). This process is a process of projecting an image stored in the projection image storage unit 37k from the projection image projection unit 13 onto a projection plane. In this processing, first, it is checked whether or not the image is stored in the projection image storage unit 37k (S901).
- next, the image stored in the projection image storage unit 37k is transferred to the projection LCD driver 30 (S902), an image signal corresponding to the image is transmitted from the projection LCD driver 30 to the projection LCD 19, and the image is displayed on the projection LCD 19 (S903).
- the light source driver 29 is driven (S904), the LED array 17A is turned on by the electric signal from the light source driver 29 (S905), and the process is terminated.
- when the LED array 17A is turned on, the light emitted from the LED array 17A reaches the projection LCD 19 via the light source lens 18, where it is spatially modulated in accordance with the image signal transmitted from the projection LCD driver 30 and output as image signal light. The image signal light output from the projection LCD 19 is then projected as a projection image onto the projection plane via the projection optical system 20.
- FIG. 10 is a flowchart of the stereoscopic image processing (S609 in FIG. 6).
- the stereoscopic image processing is a process of detecting a three-dimensional shape of a subject, acquiring, displaying, and projecting a three-dimensional shape detection result image as the stereoscopic image.
- a high resolution setting signal is transmitted to the CCD 22 (S1001), and a finder image is displayed on the monitor LCD10 (S1002).
- the release button 8 is scanned (S1003a), and it is determined whether or not the release button 8 is half-pressed (S1003b). If it is half-pressed (S1003b: Yes), the auto focus (AF) and auto exposure (AE) functions are activated, and the focus, aperture, and shutter speed are adjusted (S1003c). If not half-pressed (S1003b: No), the process from S1003a is repeated.
- the release button 8 is scanned again (S1003d), and it is determined whether or not the release button 8 is fully pressed (S1003e). If it is fully pressed (S1003e: Yes), it is determined whether or not the flash mode is set (S1003f).
- the three-dimensional shape detection result in the three-dimensional shape detection processing (S1006) is stored in the external memory 27 (S1007), and the three-dimensional shape detection result is displayed on the monitor LCD 10 (S1008).
- the three-dimensional shape detection result is displayed as a set of three-dimensional coordinates (XYZ) in the real space of each measurement vertex.
- a 3D shape detection result image, that is, a 3D image (3D CG image) in which the surface is displayed by connecting the measurement vertices of the 3D shape detection result with polygons, is stored in the projection image storage unit 37k.
- a projection process similar to the projection process of S806 in FIG. 8 is performed (S1010).
- the coordinates on the projection LCD 19 are calculated from the obtained three-dimensional coordinates by using the inverse of the relation, described with reference to FIGS. 18 (a) and 18 (b), for converting coordinates on the projection LCD 19 into three-dimensional space coordinates, which makes it possible to project the three-dimensional shape result coordinates on the projection plane.
- FIG. 11 (a) is a diagram for explaining the principle of the spatial code method used to detect a three-dimensional shape in the above-described three-dimensional shape detection processing (S1006 in FIG. 10), and FIG. 11 (b) is a diagram showing pattern light different from that of FIG. 11 (a). Either of the patterns shown in FIGS. 11 (a) and 11 (b) may be used as the pattern light, and a gray level code, which is a multi-tone code, may also be used.
- the spatial code method is one type of method for detecting the three-dimensional shape of a subject based on triangulation between the projected light and the observed image. As shown in FIG. 11 (a), a projector L and an observation device O are set apart by a distance D, and the space is divided into narrow fan-shaped areas, each of which is coded.
- each fan-shaped region is coded by the mask into a bright “1” and a dark “0”.
- a code corresponding to the projection direction is assigned to each fan-shaped area, and the boundary of each code can be regarded as one slit light beam. The scene is therefore photographed by a camera as the observation device for each mask, and each bit plane of the memory is formed by converting the light and dark pattern into a binary image.
- the contents of the memory at the address corresponding to a pixel give the code of the projected light, that is, its projection direction.
- the coordinates of the point of interest are then determined by triangulation from this projection direction and the observation direction of the camera.
- as the mask patterns used in this method, the mask patterns shown in FIG. 11 (a) can be used.
- however, point Q in FIG. 11 (a) indicates the boundary between region 3 (011) and region 4 (100); if the bit of mask A is displaced there, a code of region 7 (111) may be generated erroneously. In other words, a large error may occur where the Hamming distance between adjacent regions is 2 or more. For this reason, the gray code shown in FIG. 11 (b), in which the Hamming distance between adjacent regions is 1, may be used instead.
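- the difference between a pure binary code and a gray code can be seen in a short sketch (the bit count and region numbering below are illustrative assumptions):

```python
def binary_code(region, bits):
    """Pure binary code of a fan-shaped region index, as in FIG. 11(a)."""
    return [(region >> (bits - 1 - b)) & 1 for b in range(bits)]

def gray_code(region, bits):
    """Gray code of a region index, as in FIG. 11(b); adjacent regions differ in exactly one bit."""
    g = region ^ (region >> 1)
    return [(g >> (bits - 1 - b)) & 1 for b in range(bits)]

bits = 3
for r in range(2 ** bits):
    print(r, binary_code(r, bits), gray_code(r, bits))
# At the boundary between regions 3 (011) and 4 (100) the pure binary code changes in all
# three bits, so a one-bit error there can produce a distant code such as 7 (111).
# With the gray code, adjacent regions always differ in a single bit (Hamming distance 1).
```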
- FIG. 12A is a flowchart of the three-dimensional shape detection processing (S1006 in FIG. 10).
- this imaging process is a process of projecting, from the image projection unit 13, striped pattern light (see FIG. 1) in which light and dark are alternately arranged, using the plurality of mask patterns of the pure binary code shown in FIG. 11 (a), acquiring in time series the images with pattern light that capture the state in which each pattern light is projected onto the subject, and acquiring the image without pattern light that captures the state in which no pattern light is projected.
- the three-dimensional measurement process is a process of actually measuring the three-dimensional shape of the subject using the image with pattern light and the image without pattern light acquired by the imaging process.
- the processing ends.
- FIG. 12B is a flowchart of the imaging process (S1210 in FIG. 12A). This processing is executed based on the pattern light photographing program 36b.
- the acquired image without pattern light is stored in the image storage unit 37b without pattern light.
- the counter i is initialized (S1212), and it is determined whether or not the value of the counter i is the maximum value imax (S1213).
- if the maximum value has not been reached, the i-th mask pattern among the mask patterns to be used is displayed on the projection LCD 19, the i-th pattern light formed by this mask pattern is projected onto the projection surface (S1214), and the state in which the pattern light is projected is photographed by the image capturing section 14 (S1215).
- FIG. 12 (c) is a flowchart of the three-dimensional measurement process (S1220 in FIG. 12 (a)). This processing is executed based on the luminance image generation program 36c.
- a luminance image is generated (S1221).
- a luminance image for each pattern light presence / absence image is generated.
- the generated luminance image is stored in the luminance image storage unit 37c. Also, a number corresponding to the number of the pattern light is assigned to each luminance image.
- the code image generation program 36d generates a code image coded for each pixel by combining the generated luminance images using the above-described spatial coding method (S1222).
- this code image can be generated by comparing each pixel of the luminance images related to the images with pattern light stored in the luminance image storage unit 37c with a previously set luminance threshold and combining the results.
- the generated code image is stored in the code image storage unit 37d.
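- a minimal sketch of this thresholding and combining step is shown below (the use of a single global threshold and the bit ordering are assumptions; the actual processing follows the code image generation program 36d):

```python
import numpy as np

def make_code_image(luminance_images, threshold):
    """Binarize each luminance image with the luminance threshold and stack the resulting
    bit planes into a per-pixel code, most significant bit first (coarsest pattern first)."""
    code = np.zeros(luminance_images[0].shape, dtype=np.int32)
    for bit_plane in luminance_images:
        code = (code << 1) | (bit_plane > threshold).astype(np.int32)
    return code
```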
- lens aberration correction processing is performed by the lens aberration correction program 36f (S1224).
- the real space conversion processing based on the triangulation principle is performed by the triangulation calculation program 36g (S1225).
- the code boundary coordinates in the CCD space after the aberration correction is performed by this process are converted into three-dimensional coordinates in the real space, and the three-dimensional coordinates as a three-dimensional shape detection result are obtained.
- FIG. 13 is a diagram for explaining the outline of the code boundary coordinate detection process (S1223 in FIG. 12). In FIG. 13, the boundary between the actual light and dark of the pattern light in the CCD space is indicated by the boundary line K, and the boundary between code 1 and the other codes, obtained by coding the pattern light by the above-described spatial code method, is indicated by the thick line.
- this code boundary coordinate detection processing is to detect code boundary coordinates with subpixel accuracy.
- up to the boundary (thick line), the pixels have the code curCode; the code changes at the pixel next to the boundary, and this pixel is therefore detected as the first pixel G.
- next, in order to specify the pixel area to be used for approximation, the detection position is moved to the left by "2", and at the detection position curCCDX-2 the code image is referenced to search for the pixel at which curCode changes to another code (the boundary pixel; pixel H at the detection position curCCDX-2). A predetermined range centered on that pixel (in this embodiment, a range of -3 pixels to +2 pixels in the Y-axis direction) is then specified (part of the pixel region specifying means).
- the luminance threshold value bTh may be a fixed value given in advance, or may be calculated from the luminance within a predetermined range (for example, as half the average of the luminance of the pixels). In this way, the boundary between light and dark can be detected with sub-pixel accuracy. Next, the detection position is moved to the right by "1" from curCCDX-2, and the same processing as described above is performed for curCCDX-1 to obtain a representative value at curCCDX-1 (part of the boundary coordinate detection process).
- in this way, the code boundary coordinates can be detected with sub-pixel accuracy, and by performing the real space conversion process (S1225 in FIG. 12) based on the above-described triangulation principle using these boundary coordinates, the three-dimensional shape of the subject can be detected with high accuracy.
- since the boundary coordinates can be detected with sub-pixel accuracy using the approximation calculated based on the luminance images, high accuracy can be obtained without increasing the number of captured images as in the related art.
- in addition, this method can be used not only with a gray code, which is a special pattern light, but also with pattern light based on a pure binary code.
- although the region constituted by the above ranges has been described as the pixel region for obtaining the approximation, the ranges of this pixel region in the Y-axis and X-axis directions are not limited to these. For example, only a predetermined range in the Y-axis direction around the boundary pixel at the curCCDX detection position may be set as the pixel region.
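- one way to realize the sub-pixel detection described above is to fit a straight line to the luminance values around the boundary pixel and take its crossing with the luminance threshold bTh; the sketch below illustrates this idea (the linear fit and the window size are assumptions, and the actual process uses the pixel region and approximation described in this embodiment):

```python
import numpy as np

def subpixel_boundary_y(luminance_column, y_boundary, b_th, half_window=2):
    """Estimate, with sub-pixel accuracy, the Y coordinate at which the luminance in one
    CCD column crosses the threshold b_th near the integer boundary candidate y_boundary."""
    lo = max(0, y_boundary - half_window)
    hi = min(len(luminance_column), y_boundary + half_window + 1)
    ys = np.arange(lo, hi)
    vals = luminance_column[lo:hi].astype(np.float64)
    slope, intercept = np.polyfit(ys, vals, 1)     # first-order approximation of the luminance
    if slope == 0:
        return float(y_boundary)
    return (b_th - intercept) / slope              # Y where the fitted line equals the threshold
```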
- FIG. 14 is a flowchart of the code boundary coordinate detection process (S1223 in FIG. 12C). This processing is executed based on the code boundary extraction program 36e. In this process, first, each element of the code boundary coordinate sequence in the CCD space is initialized (S1401), and curCCDX is set as a start coordinate (S1402).
- then, curCode is set to "0" (S1404); that is, curCode is initially set to the minimum value.
- it is determined whether curCode is smaller than the maximum code (S1405). If curCode is smaller than the maximum code (S1405: Yes), the code image is searched at curCCDX for a pixel having curCode (S1406), and it is determined whether or not a pixel of curCode exists (S1407).
- if such a pixel exists, the boundary is provisionally set at the pixel position of the pixel having a code larger than curCode, and the processing proceeds on that assumption.
- if no pixel of curCode exists (S1407: No), or if no pixel having a code larger than curCode exists (S1409: No), "1" is added to curCode to obtain the next curCode for which the boundary coordinates are to be obtained (S1411), and the processing from S1405 is repeated.
- the processing from S1405 to S1411 is repeated for curCode from 0 to the maximum code, and when curCode becomes larger than the maximum code (S1405: No), dCCDX is added to curCCDX to change the detection position (S1412), and the processing from S1403 is repeated at the new detection position in the same manner as described above.
- curCCDX is changed in this way, and when curCCDX finally becomes larger than the end coordinate (S1403), that is, when the detection from the start coordinate to the end coordinate is completed, the process is terminated.
- FIG. 15 is a flowchart of a process (S1410 in FIG. 14) for obtaining code boundary coordinates with subpixel accuracy.
- in this process, first, all luminance images having a change in brightness at the pixel position of the pixel having a code larger than curCode detected in S1409 of FIG. 14 are extracted (S1501) from among the luminance images stored in the luminance image storage unit 37c in S1221 of FIG. 12C.
- the mask pattern numbers of the extracted luminance images are stored in the array PatID [], and the number of extracted luminance images is stored in noPatID (S1502).
- the arrays PatID [] and noPatID are stored in the ID storage unit 37f.
- next, the counter i is initialized (S1503), and it is determined whether or not the value of the counter i is smaller than noPatID (S1504). If it is smaller (S1504: Yes), the CCDY value of the boundary is calculated for the luminance image having the mask pattern number PatID [i], and the value is stored in fCCDY [i] (S1505).
- the median value of fCCDY [i] obtained in the process of S1505 is calculated, and the result is used as a boundary value, or the boundary value is calculated by statistical calculation.
- the boundary coordinates are represented by the coordinates of curCCDX and the weighted average value obtained in S1507, the boundary coordinates are stored in the code boundary coordinate storage unit 37e, and the process ends.
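- a minimal sketch of combining the per-pattern boundary values fCCDY[i] obtained in S1505 into one representative value is shown below (the median is used here, as mentioned above; whether additional weighting is applied is left open):

```python
import numpy as np

def combine_boundary_values(fccdy):
    """Combine the boundary CCDY values obtained from the individual luminance images
    into one representative boundary value by taking their median."""
    return float(np.median(np.asarray(fccdy, dtype=np.float64)))

# Example: estimates from three luminance images that show a brightness change at the boundary.
print(combine_boundary_values([120.4, 120.7, 121.9]))  # -> 120.7
```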
- FIG. 16 is a flowchart of the process for obtaining the CCDY value of the boundary for the luminance image having the mask pattern number PatID [i] (S1505 in FIG. 15).
- pixel I is detected as a pixel candidate having a boundary, and an eCCDY value is obtained at the position of pixel I.
- FIGS. 17 (a) to 17 (c) are diagrams for explaining the lens aberration correction processing (S1224 in FIG. 12 (c)).
- As shown in FIG. 17A, the lens aberration correction processing corrects for the fact that, owing to aberration of the imaging optical system 21, the incident light flux forms an image at a position displaced from the position at which an ideal lens would form it; the position of each captured pixel is corrected to the position where the image should originally be formed.
- This aberration correction is performed, for example, based on aberration data calculated for the optical system over the imaging range of the imaging optical system 21 using the half angle of view Ma, which is the angle of the incident light, as shown in FIG. 17(b).
- This aberration correction processing is executed based on the lens aberration correction program 36f and is performed on the code boundary coordinates stored in the code boundary coordinate storage unit 37e; the data subjected to the aberration correction processing are stored in the aberration correction coordinate storage unit 37g.
- For this correction, camera calibration (approximate expressions (1) to (3) below) is used to convert arbitrary point coordinates (ccdx, ccdy) in the real image into coordinates (ccdcx, ccdcy) in the ideal camera image.
- In these expressions, the focal length of the imaging optical system 21 is denoted focal length (mm), and the CCD pixel length is denoted pixel length (mm).
- FIGS. 18(a) and 18(b) are diagrams for explaining the method of calculating three-dimensional coordinates in three-dimensional space from coordinates in CCD space in the real-space conversion process based on the principle of triangulation (S1225 in FIG. 12(c)).
- In this real-space conversion process, the three-dimensional coordinates, in three-dimensional space, of the aberration-corrected code boundary coordinates stored in the aberration-corrected coordinate storage unit 37g are calculated by the triangulation operation program 36g.
- the three-dimensional coordinates calculated in this way are stored in the three-dimensional coordinate storage unit 37h.
- In this embodiment, the optical axis direction of the imaging optical system 21 is taken as the Z axis; the point on the Z axis at a distance VPZ from the principal point position of the imaging optical system is taken as the origin; the horizontal direction with respect to the image input/output device 1 is taken as the X axis; and the vertical direction is taken as the Y axis.
- The projection angle from the image projection unit 13 into the three-dimensional space (X, Y, Z) is denoted θp, and the distance between the optical axis of the imaging optical system 21 and the optical axis of the image projection unit 13 is denoted D.
- The field of view of the imaging optical system 21 extends from Yftop to Yfbottom in the Y direction and from Xfstart to Xfend in the X direction; the length (height) of the CCD 22 in the Y-axis direction is Hc, and its length (width) in the X-axis direction is Wc.
- the projection angle ⁇ p is given based on a code assigned to each pixel.
- The three-dimensional space position (X, Y, Z) corresponding to arbitrary coordinates (ccdx, ccdy) on the CCD 22 can be obtained by solving the following five equations for the triangle formed by the point on the imaging plane of the CCD 22, the projection point of the pattern light, and the point of intersection with the Y plane.
- (1) Y = -(tan θp)·Z + PPZ·tan θp - D + cmp(Xtarget)
- (2) Y = -(Ytarget / VPZ)·Z + Ytarget
- Here, with the principal point position of the image projection unit 13 at (0, 0, PPZ), the field of view of the image projection unit 13 extends from Ypftop to Ypfbottom in the Y direction and from Xpfstart to Xpfend in the X direction; the length (height) of the projection LCD 19 in the Y-axis direction is Hp, and its length (width) in the X-axis direction is Wp.
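- Equations (1) and (2) are both linear in Y and Z, so their intersection can be written in closed form. The sketch below solves them as reconstructed above and recovers X from a viewing-ray equation assumed to be analogous to (2); the remaining equations of the set of five are not reproduced in this text.

```python
import math

def triangulate_point(theta_p, ppz, d, vpz, x_target, y_target, cmp_x=0.0):
    """Sketch: intersect the pattern-light plane (1) with the camera viewing ray (2).

    theta_p : projection angle for the code assigned to this pixel [rad]
    ppz, d, vpz : geometry constants defined above (PPZ, D, VPZ)
    x_target, y_target : viewing-ray coordinates in the Z = 0 plane derived from (ccdcx, ccdcy)
    cmp_x : the cmp(Xtarget) compensation term of equation (1)
    The X equation used below is assumed to be analogous to (2); the remaining
    equations of the set of five are not reproduced in this text.
    """
    t = math.tan(theta_p)
    c1 = ppz * t - d + cmp_x              # constant term of equation (1)
    s = y_target / vpz                    # slope of equation (2)
    z = (y_target - c1) / (s - t)         # equate (1) and (2), solve for Z
    y = -s * z + y_target                 # back-substitute into (2)
    x = -(x_target / vpz) * z + x_target  # assumed analogous viewing-ray equation for X
    return x, y, z
```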
- FIG. 19 is a flowchart of the flattened image processing (S611 in FIG. 6).
- The flattened image processing is performed, for example, when an image of a document P in a curved state as shown in FIG. 1 is captured, or when a rectangular document is captured obliquely (so that the captured image is trapezoidal); it is a process of acquiring and displaying a flattened image in which the document appears as if it were not curved and had been captured from directly in front.
- In this process, first, a high-resolution setting signal is transmitted to the CCD 22 (S1901), and a finder image is displayed on the monitor LCD 10 (S1902).
- Next, the release button 8 is scanned (S1903a), and it is determined whether or not the release button 8 is half-pressed (S1903b). If it is half-pressed (S1903b: Yes), the auto focus (AF) and auto exposure (AE) functions are activated and the focus, aperture, and shutter speed are adjusted (S1903c). If it is not half-pressed (S1903b: No), the process returns to S1903a.
- Next, the release button 8 is scanned again (S1903d), and it is determined whether or not the release button 8 is fully pressed (S1903e). If it is fully pressed (S1903e: Yes), it is determined whether or not the flash mode is set (S1903f).
- a three-dimensional shape detection process which is the same process as the above-described three-dimensional shape detection process (S1006 in FIG. 10), is performed to detect the three-dimensional shape of the subject (S1906).
- a document posture calculation process of calculating the posture of the document P is performed (S1907).
- In this process, the position L, the angle θ, and the curvature φ(x) of the document P with respect to the image input/output device 1 are calculated as the posture parameters of the document P.
- Next, a plane conversion process described below is performed (S1908).
- This generates a flattened image in which the document P, even if it was curved, is flattened out.
- The flattened image obtained by the plane conversion process (S1908) is stored in the external memory 27 (S1909), and the flattened image is displayed on the monitor LCD 10 (S1910).
- FIGS. 20(a) to 20(c) are diagrams for explaining the document posture calculation process (S1907 in FIG. 19). As a precondition for a document such as a book, it is assumed that the curvature of the document P is uniform in the y direction.
- In the document posture calculation process, first, as shown in FIG. 20(a), two curves are found by regression-curve approximation from the points, arranged in two columns at three-dimensional space positions, given by the code boundary coordinate data stored in the three-dimensional coordinate storage unit 37h.
- FIG. 21 is a flowchart of the plane conversion process (S1908 in FIG. 19).
- In this process, first, the four corner points of the pattern-light-less image stored in the pattern-light-less image storage unit 37b are displaced by L in the Z direction, rotated by θ about the X axis, and subjected to the inverse transformation of the curvature φ(x) (the inverse of the "curvature processing" described later), thereby defining a rectangular area (that is, a rectangular area in which the surface of the document P on which characters and the like are written is viewed from a substantially orthogonal direction), and the number of pixels a included in this rectangular area is obtained (S2102).
- Next, it is determined whether or not the counter b has reached the number of pixels a (S2103). If the counter b has not reached the number of pixels a (S2103: No), one pixel constituting the rectangular area is subjected to curvature processing in which it is rotated by the curvature φ(x) about the Y axis (S2104), rotated by the tilt θ about the X axis (S2105), and shifted by the distance L in the Z-axis direction (S2106).
- Next, the coordinates (ccdcx, ccdcy) on the CCD image captured by an ideal camera are obtained from the resulting three-dimensional space position by the inverse function of the triangulation described above (S2107). Then, the coordinates (ccdx, ccdy) on the CCD image captured by the actual camera are obtained by the inverse function of the camera calibration according to the aberration characteristics of the imaging optical system 21 (S2108), and the state of the pixel of the pattern-light-less image corresponding to this position is obtained and stored in the working area 371 of the RAM 37 (S2109).
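- The per-pixel inverse mapping of S2103 to S2109 can be pictured with the following sketch. The simple rotation used for the curvature step and the two mapping callbacks standing in for the inverse triangulation (S2107) and inverse camera calibration (S2108) are simplifications assumed for this example.

```python
import math

def flatten_document(src_image, rect_points_3d, phi, theta, dist_l,
                     project_to_ideal_ccd, ideal_to_real_ccd):
    """Sketch of the plane conversion loop (S2103-S2109).

    rect_points_3d : (x, y, z) points of the flattened rectangular area
    phi(x), theta, dist_l : posture parameters (curvature, tilt, distance L)
    project_to_ideal_ccd : stand-in for the inverse triangulation (S2107)
    ideal_to_real_ccd    : stand-in for the inverse camera calibration (S2108)
    """
    flattened = []
    for (x, y, z) in rect_points_3d:                        # counter b over the a pixels (S2103)
        # S2104 (simplified): curvature processing as a rotation about the Y axis by phi(x)
        a = phi(x)
        x1 = x * math.cos(a) + z * math.sin(a)
        z1 = -x * math.sin(a) + z * math.cos(a)
        # S2105: rotate about the X axis by the tilt theta
        y1 = y * math.cos(theta) - z1 * math.sin(theta)
        z2 = y * math.sin(theta) + z1 * math.cos(theta)
        z3 = z2 + dist_l                                    # S2106: shift by L in the Z direction
        ccdcx, ccdcy = project_to_ideal_ccd(x1, y1, z3)     # S2107: ideal-camera CCD coordinates
        ccdx, ccdy = ideal_to_real_ccd(ccdcx, ccdcy)        # S2108: real-camera CCD coordinates
        # S2109: sample the pattern-light-less image at the real CCD position
        flattened.append(src_image[int(round(ccdy))][int(round(ccdx))])
    return flattened
```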
- FIG. 22(a) is a diagram for explaining the outline of the curvature processing (S2104 in FIG. 21), and FIG. 22(b) shows the document P flattened by the plane conversion process (S1908 in FIG. 19). The details of this curvature processing are disclosed in IEICE Transactions D-II, Vol. J86-D2, No. 3, p. 409 ("Burning Document Shooting with Eye Scanner").
- The curvature φ(x) is obtained from the three-dimensional shape formed by the calculated code boundary coordinate sequence (in real space): the cross-sectional shape parallel to the XZ plane at an arbitrary Y value is represented by an expression approximated by a polynomial using the least squares method.
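- As a small illustration of this least-squares approximation, the sketch below selects the cross section of the boundary point cloud near a chosen Y value and fits a polynomial to it; the polynomial degree, the selection tolerance, and the use of numpy.polyfit are assumptions made for the example.

```python
import numpy as np

def fit_curvature_polynomial(boundary_points_3d, y_value, y_tolerance=1.0, degree=4):
    """Approximate the XZ cross section of the code-boundary point cloud at a given
    Y value by a least-squares polynomial (illustrative degree and tolerance).

    boundary_points_3d: array of (x, y, z) real-space code boundary coordinates.
    Returns a polynomial giving the approximated z as a function of x.
    """
    pts = np.asarray(boundary_points_3d, dtype=float)
    # select the points forming the cross section near the chosen Y value
    sel = pts[np.abs(pts[:, 1] - y_value) < y_tolerance]
    coeffs = np.polyfit(sel[:, 0], sel[:, 2], degree)   # least-squares polynomial fit
    return np.poly1d(coeffs)
```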
- Figs. 23 (a) and 23 (b) are diagrams for explaining a group of light source lenses 50 as another example of the light source lens 18 in the embodiment described above.
- FIG. 23(a) is a side view showing the light source lenses 50, and FIG. 23(b) is a plan view showing the light source lenses 50.
- The light source lens 18 in the above-described embodiment is configured such that lens portions 18a of convex aspherical shape, one for each LED 17, are integrally arranged on the base 18b, whereas in the example shown in FIGS. 23(a) and 23(b), resin lenses formed in a shell shape, each enclosing one of the LEDs 17, are formed separately.
- With this configuration, the position of each LED 17 and its corresponding light source lens 50 can be determined on a one-to-one basis, the relative positional accuracy can be improved, and the light emitting directions are aligned. Accordingly, the surface of the projection LCD 19 is irradiated with light from the LEDs 17 incident perpendicularly to its surface, and this light can pass uniformly through the stop of the projection optical system 20. Therefore, illuminance unevenness of the projected image can be suppressed, and as a result, a high-quality image can be projected.
- the LED 17 included in the light source lens 50 is mounted on the substrate 16 via electrodes 51 serving as leads and reflectors.
- a frame-shaped elastic fixing member 52 that bundles the light source lenses 50 and regulates them in a predetermined direction is arranged on the outer peripheral surface of the group of light source lenses 50.
- the fixing member 52 is made of a resin material such as rubber or plastic.
- Since each light source lens 50 is formed separately for each LED 17, it is difficult to mount the lenses so that the optical axis formed by the convex tip of each light source lens 50 is correctly aligned at the proper angle toward the projection LCD 19.
- Therefore, in this example, the group of light source lenses 50 is surrounded by the fixing member 52 so that the outer peripheral surfaces of the light source lenses 50 are brought into contact with one another, and the position of each light source lens 50 is thereby regulated so that its optical axis faces the projection LCD 19 at the correct angle.
- The fixing member 52 may be made of a rigid material formed in advance to a predetermined size, or of a material having elasticity, in which case the position of each light source lens 50 is restricted to a predetermined position by the elastic force.
- FIGS. 24(a) and 24(b) are diagrams for explaining a fixing member 60, which is another example of the fixing member 52 described with reference to FIGS. 23(a) and 23(b) for fixing the light source lenses 50 at predetermined positions. FIG. 24(a) is a perspective view showing a state where the light source lenses 50 are fixed, and FIG. 24(b) is a partial sectional view thereof.
- the same members as described above are denoted by the same reference numerals, and description thereof will be omitted.
- The fixing member 60 is formed in a plate shape having conical through holes 60a whose cross sections follow the outer peripheral surface of each light source lens 50; each light source lens 50 is inserted into and fixed in one of the through holes 60a.
- A biasing plate 61 having elasticity is interposed between the fixing member 60 and the substrate 16, and furthermore, an annular elastic O-ring 62 is arranged between the biasing plate 61 and the lower surface of each light source lens 50 so as to surround the electrodes 51.
- the LED 17 included in the light source lens 50 is mounted on the substrate 16 via the urging plate 61 and the electrode 51 penetrating through holes formed in the substrate 16.
- In this way, each light source lens 50 is fixed by being inserted through a through hole 60a whose cross section follows the outer peripheral surface of the lens, so the optical axis of each light source lens 50 can be fixed more reliably so as to face the projection LCD 19 at the correct angle.
- the LED 17 can be urged to the correct position by the urging force of the O-ring 62 and fixed.
- Moreover, any impact that may occur when the device 1 is carried is absorbed by the O-ring 62, so that it is possible to prevent the problem of the light source lens 50 being displaced by the impact and consequently being unable to irradiate the projection LCD 19 perpendicularly.
- In this embodiment, the processing of S1211 and S1215 in FIG. 12(b) corresponds to the imaging means, and the processing of S1006 in FIG. 10 corresponds to the three-dimensional shape detection unit.
- the process of acquiring and displaying a flattened image has been described as the flattened image mode.
- Alternatively, a well-known OCR function may be installed, and the flattened image may be read by the OCR function. In that case, the text written on the document can be read with higher accuracy than when a curved document is read by the OCR function.
- In the embodiment described above, striped pattern light in which a plurality of types of light and dark stripes are alternately arranged is projected; however, the light used to detect the three-dimensional shape is not limited to such pattern light.
- For example, two band-shaped slit lights 70 and 71 may be projected from the image projection unit 13. In this case, the three-dimensional shape can be detected at higher speed, using only two captured images, than when eight pattern lights are projected.
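- For orientation, striped pattern lights of the kind referred to above can be generated and decoded along the lines of the following sketch, in which each successive mask halves the stripe pitch so that the set of captured images assigns a code to each stripe band; the plain binary coding shown here is only an illustrative assumption, since the embodiment's actual mask patterns are defined elsewhere in the description.

```python
import numpy as np

def make_stripe_masks(width, height, num_patterns=8):
    """Generate binary stripe mask patterns for pattern-light projection.

    Pattern k divides the width into 2**(k + 1) alternating light/dark bands, so the
    set of all patterns assigns a distinct code to each band of the finest pattern
    (plain binary coding, used here only for illustration).
    """
    masks = []
    for k in range(num_patterns):
        columns = np.arange(width)
        band = (columns * (2 ** (k + 1)) // width) % 2          # 0 = dark, 1 = light
        masks.append(np.tile(band.astype(np.uint8) * 255, (height, 1)))
    return masks

def decode_codes(captured_images, thresholds):
    """Recover the per-pixel code from the captured luminance images (one per mask)."""
    code = np.zeros_like(captured_images[0], dtype=np.uint16)
    for img, thr in zip(captured_images, thresholds):
        code = (code << 1) | (img > thr).astype(np.uint16)      # append one bit per pattern
    return code
```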
- Further, the plurality of semiconductor light emitting elements may be arranged in parallel in at least two rows at a second pitch in a second direction intersecting the semiconductor light emitting element row, and the second pitch may be set substantially the same as the first pitch. According to such a configuration, since at least two semiconductor light emitting element rows are arranged side by side at a second pitch substantially equal to the first pitch (the pitch of the light emitting elements arranged linearly in the first direction within a row), a decrease in illuminance between the semiconductor light emitting elements in the direction in which the rows are arranged in parallel can be suppressed. Therefore, the decrease in illuminance between adjacent semiconductor light emitting elements can be suppressed overall in the two mutually intersecting directions, that is, in the plane containing these two directions.
- the semiconductor light emitting element rows are composed of a plurality of rows, and each semiconductor light emitting element of two adjacent semiconductor light emitting element rows may be arranged in a staggered manner.
- Further, the other semiconductor light emitting element rows may be shifted, relative to one semiconductor light emitting element row, by about one half of the first pitch in the direction in which that row extends, and may be alternately arranged in parallel at a pitch of approximately 3/2 times the first pitch.
- Further, the distance between one semiconductor light emitting element and the semiconductor light emitting elements around it may be set equal to or less than the full width at half maximum of the illuminance distribution that the light emitted from one semiconductor light emitting element forms on the spatial modulation element. According to such a configuration, it is possible to suppress a decrease in illuminance between one semiconductor light emitting element and the semiconductor light emitting elements around it.
- Further, the projection device may be provided with a condensing optical system that is disposed between the semiconductor light emitting elements and the spatial modulation element, condenses the light emitted from the semiconductor light emitting elements into highly directional radiation light, and emits the condensed light toward the spatial modulation element.
- the condensing optical system may be formed by integrally forming a plurality of lens surfaces corresponding to each of the plurality of semiconductor light emitting elements.
- the number of components can be reduced, and as a result, the manufacturing cost of the device can be reduced.
- Further, the condensing optical system may be configured such that the lenses, each formed in a shell shape enclosing one of the plurality of semiconductor light emitting elements, are formed separately.
- Further, the projection device may include a frame-shaped fixing member surrounding the plurality of lenses in order to bring the lenses constituting the condensing optical system into close contact with each other.
- the posture of each lens can be regulated such that the optical axes of the lenses are aligned in parallel and face the spatial light modulator. Therefore, light can be guided to the spatial modulation element and the projection unit with high efficiency.
- Further, in order to arrange each of the lenses constituting the condensing optical system at a predetermined position, the projection device may be provided with a plate-shaped fixing member having fixing holes drilled at positions corresponding to the predetermined positions of the lenses.
- the cross-sectional shape of the fixing hole may be formed along at least a part of the outer peripheral surface of each lens constituting the condensing optical system.
- the inner surface of the fixing hole is brought into close contact with the outer peripheral surface of each lens, so that the direction of each lens can be regulated. That is, the posture of each lens can be regulated so that the optical axes of the lenses are aligned in parallel and face the spatial modulation element side. Therefore, light can be guided to the spatial light modulator and the imaging means with high efficiency.
- the projection device may include an elastic body that comes into contact with the outer peripheral surface of the light collecting optical system.
- each of the plurality of semiconductor light emitting elements may be configured to emit the same color.
- the plurality of semiconductor light-emitting elements are light-emitting diodes, and the light emission color of each of them may be amber.
- According to such a configuration, the efficiency of converting electricity into light (electro-optical conversion efficiency) can be increased compared with the case of emitting other colors. Therefore, the device can be driven with low power, power can be saved, and the service life can be extended.
- the projection means may be configured to include a resin lens.
- According to such a configuration, a lighter and less expensive optical system can be configured than when the projection means that projects the light from the semiconductor light emitting elements is made entirely of glass.
- the projection means may be constituted by a telecentric lens.
- According to such a configuration, the pupil is positioned at infinity, and only light that has passed through the spatial modulation element approximately perpendicularly can be taken in and projected. Therefore, the image of the spatial modulation element can be projected without uneven illuminance.
- the projection device captures an image of a state where pattern light as image signal light output from the projection unit is projected on a subject arranged in the projection direction.
- the spatial modulation element is configured by a plate-shaped display, and the plurality of pixels configuring the display may be arranged in a staggered manner.
- According to such a configuration, since the plurality of pixels constituting the display serving as the spatial modulation element are arranged in a staggered pattern, when, for example, pattern light in which light and dark stripes alternate is projected toward an object to detect its three-dimensional shape, the boundary between light and dark can be controlled at a 1/2-pixel pitch by aligning the direction of the stripes with the short direction of the projection LCD 19. Therefore, the three-dimensional shape can be detected with high accuracy.
- Further, the wide surface of the imaging means and the wide surface of the display may be oriented in substantially the same direction.
- Further, the plurality of pixels constituting the display may be composed of pixel columns arranged in straight lines along the longitudinal direction of the display, with one pixel column and other pixel columns shifted from it by a predetermined distance in the longitudinal direction being alternately arranged in parallel.
- the imaging means may be arranged on the longitudinal direction side of the display at a predetermined interval from the display.
- According to such a configuration, the inclination of the trajectory of the pattern light projected from the projection means onto the subject can be controlled at a 1/2 pitch of the display pixels. Therefore, a three-dimensional shape can be detected with high accuracy.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/535,457 US7845807B2 (en) | 2004-03-31 | 2006-09-26 | Projection apparatus and three-dimensional-shape detection apparatus |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2004-105427 | 2004-03-31 | ||
JP2004105427A JP4734843B2 (ja) | 2004-03-31 | Three-dimensional shape detection device |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/535,457 Continuation-In-Part US7845807B2 (en) | 2004-03-31 | 2006-09-26 | Projection apparatus and three-dimensional-shape detection apparatus |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2005095887A1 true WO2005095887A1 (ja) | 2005-10-13 |
Family
ID=35063877
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2005/005862 WO2005095887A1 (ja) | 2005-03-29 | Projection apparatus and three-dimensional-shape detection apparatus |
Country Status (3)
Country | Link |
---|---|
US (1) | US7845807B2 (ja) |
JP (1) | JP4734843B2 (ja) |
WO (1) | WO2005095887A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7845807B2 (en) | 2004-03-31 | 2010-12-07 | Brother Kogyo Kabushiki Kaisha | Projection apparatus and three-dimensional-shape detection apparatus |
Families Citing this family (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2007087405A2 (en) * | 2006-01-24 | 2007-08-02 | The Trustees Of Columbia University In The City Of New York | Capturing scene images and depth geometry and generating a compensation image |
US7940341B2 (en) * | 2007-08-23 | 2011-05-10 | Philips Lumileds Lighting Company | Light source for a projector |
US8330771B2 (en) * | 2008-09-10 | 2012-12-11 | Kabushiki Kaisha Toshiba | Projection display device and control method thereof |
US8113668B2 (en) * | 2009-03-18 | 2012-02-14 | Dexin Corporation | Extendable real object projector |
JP5311674B2 (ja) * | 2010-01-14 | 2013-10-09 | Panasonic Corporation | Light emitting device |
US8733951B2 (en) * | 2010-04-26 | 2014-05-27 | Microsoft Corporation | Projected image enhancement |
JP5479232B2 (ja) * | 2010-06-03 | 2014-04-23 | Sharp Corporation | Display device and method for manufacturing display device |
US8936406B2 (en) * | 2012-03-14 | 2015-01-20 | Intel-Ge Care Innovations Llc | Camera reading apparatus with document alignment guide |
JP5795431B2 (ja) * | 2012-04-13 | 2015-10-14 | Pioneer Corporation | Three-dimensional measurement device, three-dimensional measurement system, control method, program, and storage medium |
US9341927B1 (en) * | 2013-01-09 | 2016-05-17 | Orili Ventures Ltd. | Slide projector housing with mount for detachable lens and strobe |
US9524059B2 (en) * | 2013-03-15 | 2016-12-20 | Texas Instruments Incorporated | Interaction detection using structured light images |
US10032279B2 (en) * | 2015-02-23 | 2018-07-24 | Canon Kabushiki Kaisha | Information processing apparatus, information processing method, and storage medium |
JP6834543B2 (ja) * | 2017-02-01 | 2021-02-24 | Seiko Epson Corporation | Image display device and adjustment method thereof |
JP6990694B2 (ja) * | 2017-03-16 | 2022-01-12 | Sharp NEC Display Solutions, Ltd. | Projector, mapping data creation method, program, and projection mapping system |
KR102464368B1 (ko) * | 2017-11-07 | 2022-11-07 | Samsung Electronics Co., Ltd. | Meta projector and electronic device including the same |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS60152903A (ja) * | 1984-01-21 | 1985-08-12 | Kosuke Sato | Position measuring method |
JPS63155782A (ja) * | 1986-12-19 | 1988-06-28 | Omron Tateisi Electronics Co | Light source device |
JPH01296764A (ja) * | 1988-05-24 | 1989-11-30 | Matsushita Electric Ind Co Ltd | LED array light source |
JP2002303988A (ja) * | 2001-04-03 | 2002-10-18 | Nippon Telegr & Teleph Corp <Ntt> | Exposure apparatus |
JP2004095655A (ja) * | 2002-08-29 | 2004-03-25 | Toshiba Lighting & Technology Corp | LED device and LED lighting device |
Family Cites Families (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US3905686A (en) * | 1974-10-18 | 1975-09-16 | Eastman Kodak Co | Three element projection lens |
JPS62229703A (ja) * | 1986-03-31 | 1987-10-08 | Koito Manufacturing Co., Ltd. | Lighting device |
JPH1132278A (ja) | 1997-07-10 | 1999-02-02 | Fuji Xerox Co Ltd | Projector device |
JP3717654B2 (ja) | 1998-02-17 | 2005-11-16 | Ricoh Co Ltd | Color image display device |
JP3585097B2 (ja) * | 1998-06-04 | 2004-11-04 | Seiko Epson Corporation | Light source device, optical device, and liquid crystal display device |
JP2001091877A (ja) * | 1999-07-21 | 2001-04-06 | Fuji Photo Film Co Ltd | Exposure head |
US6462891B1 (en) * | 2000-04-20 | 2002-10-08 | Raytheon Company | Shaping optic for diode light sheets |
US7019376B2 (en) * | 2000-08-11 | 2006-03-28 | Reflectivity, Inc | Micromirror array device with a small pitch size |
JP4012710B2 (ja) * | 2001-02-14 | 2007-11-21 | Ricoh Co Ltd | Image input device |
US7088321B1 (en) * | 2001-03-30 | 2006-08-08 | Infocus Corporation | Method and apparatus for driving LED light sources for a projection display |
US7125121B2 (en) * | 2002-02-25 | 2006-10-24 | Ricoh Company, Ltd. | Image display apparatus |
JP4055610B2 (ja) * | 2002-03-22 | 2008-03-05 | Seiko Epson Corporation | Image display device and projector |
JP2004177654A (ja) * | 2002-11-27 | 2004-06-24 | Fuji Photo Optical Co Ltd | Projection-type image display device |
JP4016876B2 (ja) * | 2003-04-23 | 2007-12-05 | Seiko Epson Corporation | Projector |
JP4734843B2 (ja) | 2004-03-31 | 2011-07-27 | Brother Kogyo Kabushiki Kaisha | Three-dimensional shape detection device |
KR100644632B1 (ko) * | 2004-10-01 | 2006-11-10 | Samsung Electronics Co., Ltd. | Illumination unit employing LEDs and image projection apparatus employing the same |
US7636190B2 (en) * | 2005-05-05 | 2009-12-22 | Texas Instruments Incorporated | Method of projecting an image from a reflective light valve |
- 2004-03-31: JP JP2004105427A patent/JP4734843B2/ja (not_active Expired - Fee Related)
- 2005-03-29: WO PCT/JP2005/005862 patent/WO2005095887A1/ja (active Application Filing)
- 2006-09-26: US US11/535,457 patent/US7845807B2/en (not_active Expired - Fee Related)
Also Published As
Publication number | Publication date |
---|---|
JP2005291839A (ja) | 2005-10-20 |
JP4734843B2 (ja) | 2011-07-27 |
US20070019166A1 (en) | 2007-01-25 |
US7845807B2 (en) | 2010-12-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2005095886A1 (ja) | Three-dimensional shape detection device, three-dimensional shape detection method, and three-dimensional shape detection program | |
WO2005095887A1 (ja) | Projection apparatus and three-dimensional-shape detection apparatus | |
JP2005293075A5 (ja) | ||
JP2007271395A (ja) | Three-dimensional color shape measuring apparatus | |
TWI668997B (zh) | Image device for generating panorama depth images and related image device | |
JP2005291839A5 (ja) | ||
US20070177160A1 (en) | Three-dimensional object information acquisition using patterned light projection with optimized image-thresholding | |
JP2007271530A (ja) | Three-dimensional shape detection device and three-dimensional shape detection method | |
CN110716381B (zh) | Light-redirecting three-dimensional imaging device, projection device, and applications thereof | |
CN108718406B (zh) | Variable-focus 3D depth camera and imaging method thereof | |
JP2004132829A (ja) | Three-dimensional imaging device, three-dimensional imaging method, and stereo adapter | |
WO2006112297A1 (ja) | Three-dimensional shape measuring device | |
JP4552485B2 (ja) | Image input/output device | |
CN208434044U (zh) | Optical filter assembly, camera module, image capture device, and electronic device | |
JP2005352835A (ja) | Image input/output device | |
WO2005122553A1 (ja) | Image input/output device | |
US11326874B2 (en) | Structured light projection optical system for obtaining 3D data of object surface | |
CN112074772A (zh) | Lens for use with a flash device | |
JP2006031506A (ja) | Image input/output device | |
JP4552484B2 (ja) | Image input/output device | |
US20130222569A1 (en) | Imaging apparatus | |
JP4896183B2 (ja) | Document illumination device and image reading device | |
JP2005293290A5 (ja) | ||
JP2006277023A (ja) | Three-dimensional information acquisition device, pattern light generation method, three-dimensional information acquisition method, program, and recording medium | |
JP2006004010A (ja) | Image input/output device | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AK | Designated states | Kind code of ref document: A1. Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
| AL | Designated countries for regional patents | Kind code of ref document: A1. Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | |
| WWE | Wipo information: entry into national phase | Ref document number: 11535457. Country of ref document: US |
| NENP | Non-entry into the national phase | Ref country code: DE |
| WWW | Wipo information: withdrawn in national office | Country of ref document: DE |
| WWP | Wipo information: published in national office | Ref document number: 11535457. Country of ref document: US |
| 122 | Ep: pct application non-entry in european phase | |