WO2007100057A1 - Imaging device and integrated circuit - Google Patents

Imaging device and integrated circuit

Info

Publication number
WO2007100057A1
WO2007100057A1 (PCT/JP2007/053945)
Authority
WO
WIPO (PCT)
Prior art keywords
imaging
subject
image
region
wavelength
Application number
PCT/JP2007/053945
Other languages
French (fr)
Japanese (ja)
Inventor
Ichiro Oyama
Original Assignee
Matsushita Electric Industrial Co., Ltd.
Application filed by Matsushita Electric Industrial Co., Ltd. filed Critical Matsushita Electric Industrial Co., Ltd.
Publication of WO2007100057A1 publication Critical patent/WO2007100057A1/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 9/00: Details of colour television systems
    • H04N 9/12: Picture reproducers
    • H04N 9/31: Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N 9/3141: Constructional details thereof
    • H04N 9/315: Modulator illumination systems
    • H04N 9/3164: Modulator illumination systems using multiple light sources
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20: Image signal generators
    • H04N 13/204: Image signal generators using stereoscopic image cameras
    • H04N 13/243: Image signal generators using stereoscopic image cameras using three or more 2D image sensors
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20: Image signal generators
    • H04N 13/257: Colour aspects

Definitions

  • The present invention relates to a three-dimensional imaging apparatus that captures the three-dimensional shape and color of a subject and, based on them, creates color arbitrary-viewpoint images or stereoscopic images, and to an integrated circuit used in such an apparatus.
  • Such an imaging device can be used, for example, for driver visibility support in automobiles. Observing the situation around the vehicle from various viewpoints (free viewpoints) allows the situation to be recognized more accurately and makes safer driving possible. Displaying the distance between the vehicle and surrounding objects also helps avoid collisions. Likewise, devices that input and output video, such as mobile phones and televisions, call for three-dimensional video input and output to reproduce a scene more faithfully. Realizing these functions requires the three-dimensional coordinates and color information of objects from the imaging device, together with a small device size.
  • As an effective technique for making an imaging device small, and in particular thin, a compound-eye imaging device using single lenses with short focal lengths has been proposed (see Patent Document 1). In general, the refractive index of a lens varies with the wavelength of light, so the focal length varies as well, and a scene containing all wavelengths cannot be focused onto the imaging surface by a single lens. The optical system of an ordinary imaging device therefore stacks multiple lenses so that light of the red, green, and blue wavelengths is focused on the same imaging surface, which inevitably lengthens the optical path and thickens the device.
  • In a compound-eye color imaging device, by contrast, the imaging optical system arranges in a plane a lens that handles light of the red wavelength, a lens that handles light of the green wavelength, and a lens that handles light of the blue wavelength, and an imaging surface of the image sensor is provided for each lens. Because each lens handles only a limited wavelength band, the focal lengths of the lenses can be made equal; a single lens can then form the subject image on its imaging surface, and the thickness of the imaging device can be greatly reduced.
  • FIG. 20 is a perspective view of an example of a conventional compound-eye imaging apparatus.
  • 500 is a lens array in which the four lenses 501a, 501b, 501c, and 501d are molded as a single piece.
  • 501a and 501b are lenses that handle light of the green wavelength; the subject images they form are converted into image information in the imaging regions 502a and 502b, on whose light-receiving parts a green wavelength separation filter (color filter) is attached.
  • Similarly, 501c is a lens that handles light of the red wavelength, whose subject image is converted into red image information in the imaging region 502c, and 501d is a lens that handles light of the blue wavelength, whose subject image is converted into blue image information in the imaging region 502d. A color image is obtained by superimposing and synthesizing these images.
  • The number of lenses need not be limited to four.
  • However, the conventional compound-eye color imaging device described above has the following problem. FIG. 21 shows the basic configuration of a conventional compound-eye imaging device: the subject images formed by two lenses with different optical axes, and the positional relationship between the optical axes and the image sensor.
  • FIG. 21 (a) is a view from the cross-sectional side of the lenses, and
  • FIG. 21 (b) is a plan view of the imaging region.
  • 600a and 600b are the optical axes of the lenses 601a and 601b, and 602a and 602b are the positions where the respective optical axes intersect the imaging region 603.
  • 605a and 605b are the images of the subject 604, located on the optical axis 600a, formed through the lenses 601a and 601b, respectively.
  • In a compound-eye imaging device, the optical axes of the lenses differ from one another. The position of the subject image on the image sensor therefore shifts, according to the subject distance, along the direction connecting the optical axes of the lenses (direction B in the figure). This phenomenon is called parallax.
  • The pixel shift amount S of the subject image 605b from the position 602b, where the lens optical axis 600b intersects the imaging region, is given by Equation (1), where Zp is the subject distance, t is the distance between the optical axes of the lenses, and f is the imaging distance:
  • Equation (1): S = f · t / Zp
  • As Equation (1) shows, the positional relationship between the subject image and the pixels in the direction connecting the optical axes of the lenses varies with the subject distance Zp. Therefore, as in Patent Document 1, an area-based matching method is used to detect the shift of the subject image along the direction connecting the optical axes, which changes with the subject distance, and the image shift amount is adjusted accordingly.
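  • As a concrete illustration of Equation (1) and the area-based matching it motivates, the following sketch measures the shift S of an image block between the two green images and inverts the equation for depth. It is a minimal sketch, not the patent's implementation; the SAD cost, the block size, and all function names are assumptions.

```python
import numpy as np

def block_match_disparity(img_a, img_b, y, x, block=8, max_shift=32):
    """Find the horizontal shift S (in pixels) of the block at (y, x) of
    img_a within img_b by minimizing the sum of absolute differences."""
    ref = img_a[y:y + block, x:x + block].astype(np.float64)
    best_s, best_cost = 0, np.inf
    for s in range(max_shift):
        if x + s + block > img_b.shape[1]:
            break
        cand = img_b[y:y + block, x + s:x + s + block].astype(np.float64)
        cost = np.abs(ref - cand).sum()
        if cost < best_cost:
            best_s, best_cost = s, cost
    return best_s

def depth_from_disparity(s_pixels, f, t, pixel_pitch):
    """Invert Equation (1), S = f * t / Zp, to recover the subject distance."""
    s = s_pixels * pixel_pitch  # convert the pixel shift to physical units
    return f * t / s if s > 0 else float("inf")
```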
  • In the example of FIG. 21, 601a in FIG. 21 (a) corresponds to the lens 501a designed for green in FIG. 20, and
  • 601b corresponds to 501b in FIG. 20.
  • The red and blue image shift amounts can be calculated from the shift amount of the green imaging regions obtained by Equation (1), based on the known inter-lens distances and focal length.
  • In the configuration of FIG. 20, the parallax amount deriving means 503 calculates the red, green, and blue parallax amounts as described above, and the image synthesizing means 504 creates a high-sharpness color image by synthesizing the subject images of the green, red, and blue imaging regions. Because this configuration uses a compound-eye system, it can be made thin, and the subject distance Zp can be obtained from the pixel shift amount S. However, since it presupposes the creation of a two-dimensional image, it cannot estimate the three-dimensional shape and color of the subject.
  • Patent Document 1: Japanese Patent Laid-Open No. 2002-204462
  • The present invention solves the conventional problems described above; its object is to provide a thin imaging device capable of obtaining the three-dimensional shape and color of a subject, and an integrated circuit used in such a device.
  • To achieve this object, an imaging apparatus of the present invention comprises a plurality of optical systems, a plurality of wavelength selection regions that selectively transmit light in specific wavelength bands out of the light from a subject, and a plurality of imaging regions that output image information according to the received light, the wavelength selection regions and the imaging regions being arranged in one-to-one correspondence on the optical axes of the plurality of optical systems.
  • The plurality of wavelength selection regions include first wavelength selection regions, which selectively transmit light in the same wavelength band, and second wavelength selection regions, which selectively transmit light in wavelength bands different from that of the first wavelength selection regions. The first wavelength selection regions include two or more wavelength selection regions, and the second wavelength selection regions include two or more wavelength selection regions that each selectively transmit light in a different wavelength band.
  • The apparatus comprises three-dimensional coordinate calculation means for calculating the three-dimensional coordinates of the subject based on the image information output by at least two of the imaging regions corresponding to the first wavelength selection regions and on the positional relationship between the optical systems and the imaging regions corresponding to the first wavelength selection regions, and
  • three-dimensional coordinate color information calculation means for calculating color information at each of the three-dimensional coordinates of the subject based on the image information output by the portions corresponding to those coordinates in at least three imaging regions that each receive light in a different wavelength band.
  • The integrated circuit of the present invention is an integrated circuit that performs calculations based on image information output by imaging regions that receive light from a subject. The imaging regions include first imaging regions that receive light of a first wavelength and second imaging regions that receive light in wavelength bands different from the first wavelength; the first imaging regions include two or more imaging regions, and the second imaging regions include two or more imaging regions that each selectively receive light in a different wavelength band.
  • The integrated circuit comprises three-dimensional coordinate calculation means for calculating the three-dimensional coordinates of the subject based on the image information output by at least two of the first imaging regions, and three-dimensional coordinate color information calculation means for calculating color information at each of the three-dimensional coordinates of the subject based on the image information output by the portions corresponding to those coordinates in at least three imaging regions that each receive light in a different wavelength band.
  • FIG. 1 is a configuration diagram of a three-dimensional imaging device and a display according to an embodiment of the present invention.
  • FIG. 2 is a diagram illustrating the principle of the three-dimensional coordinate calculation method of the three-dimensional imaging device according to the embodiment of the present invention.
  • FIG. 3 is a diagram showing the imaging regions of the three-dimensional imaging device according to the embodiment of the present invention.
  • FIG. 4 is a view of the subject and the imaging unit 100 of FIG. 2 as seen from the positive Y-axis side of the three-dimensional coordinates.
  • FIG. 5 is a view of the subject and the imaging unit 100 of FIG. 2 as seen from the positive X-axis side of the three-dimensional coordinates.
  • FIG. 6 is a view of the subject and the imaging unit 100 of FIG. 2 as seen from the positive Y-axis side of the three-dimensional coordinates.
  • FIG. 7 is a view of the subject and the imaging unit 100 of FIG. 2 as seen from the positive X-axis side of the three-dimensional coordinates.
  • FIG. 8 is a view of the subject and the imaging unit 100 of FIG. 2 as seen from the positive Y-axis side of the three-dimensional coordinates.
  • FIG. 9 is a view of the subject and the imaging unit 100 of FIG. 2 as seen from the positive Y-axis side of the three-dimensional coordinates.
  • FIG. 10 is a diagram illustrating the principle of the viewpoint image calculation method of the three-dimensional imaging device according to the embodiment of the invention.
  • FIG. 11 is a view of the subject and the virtual lens of FIG. 10 as seen from the positive Y-axis direction of the three-dimensional coordinates.
  • FIG. 12 is a view of the subject and the virtual lens of FIG. 10 as seen from the positive X-axis direction of the three-dimensional coordinates.
  • FIG. 13 shows a display according to an embodiment of the present invention: (a) is a perspective view, and (b) is a view of the display of (a) seen from above.
  • FIG. 14 is a diagram explaining the principle by which the stereoscopic display of FIG. 13 displays a stereoscopic image.
  • FIG. 15 shows viewpoint images according to an embodiment of the present invention: (a) shows one viewpoint image, and (b) shows another viewpoint image.
  • FIG. 16 is an explanatory diagram of coordinate conversion according to an embodiment of the present invention.
  • FIG. 17 is an explanatory diagram of shape conversion according to an embodiment of the present invention.
  • FIG. 18 is a diagram showing an example of a lens arrangement according to an embodiment of the present invention.
  • FIG. 19 is a diagram showing an example of a lens arrangement according to an embodiment of the present invention.
  • FIG. 20 is a schematic configuration diagram of an example of a conventional compound-eye imaging apparatus.
  • FIG. 21 is a diagram illustrating the principle of an example of the parallax calculation method of the conventional compound-eye imaging apparatus.
  • With the above configuration, the three-dimensional shape of a subject can be obtained with a thin device.
  • Preferably, the portion corresponding to each of the three-dimensional coordinates of the subject in each imaging region corresponding to the second wavelength selection regions is determined based on the calculated three-dimensional coordinates of the subject and on the positional relationship between each optical system corresponding to the second wavelength selection regions and each imaging region.
  • It is also preferable to further include coordinate conversion means for applying a coordinate transformation to part or all of the three-dimensional coordinates of the subject. With this configuration, free-viewpoint or stereoscopic video image data of the subject can be output with a greater sense of presence.
  • It is also preferable to further include shape conversion means for enlarging part or all of the three-dimensional shape based on the three-dimensional coordinates of the subject. With this configuration, more natural free-viewpoint and stereoscopic video image data of the subject can be output.
  • It is also preferable to further include arbitrary viewpoint image output means for outputting an image of the subject from an arbitrary viewpoint based on the three-dimensional coordinates of the subject and the color information at each coordinate. With this configuration, image data for free-viewpoint or stereoscopic video of the subject can be output.
  • Preferably, the apparatus further includes arbitrary viewpoint creation means for creating, by interpolation, a subject image seen from an arbitrary viewpoint based on the three-dimensional coordinates of the subject, and the arbitrary viewpoint image output means outputs the image at the arbitrary viewpoint based on the interpolated subject image. With this configuration, more natural free-viewpoint and stereoscopic image data of the subject can be output.
  • Preferably, the arbitrary viewpoint image output means outputs images at arbitrary viewpoints for stereoscopic video output.
  • Preferably, the arbitrary viewpoint image output means outputs images at a larger number of viewpoints than the number of optical systems. With this configuration, image data for a more natural stereoscopic video of the subject can be output.
  • FIG. 1 shows a configuration diagram of a three-dimensional imaging apparatus and a display according to the present embodiment.
  • This embodiment includes a plurality of optical systems that form subject images.
  • 101a and 101b are single lenses designed mainly for photographing light of the green wavelength.
  • 101c is a single lens designed mainly for photographing light of the red wavelength.
  • 101d is a single lens designed mainly for photographing light of the blue wavelength.
  • The lens array 101 is formed by integrally molding the single lenses 101a, 101b, 101c, and 101d, which allows the lens array to be produced inexpensively and at a small size.
  • a wavelength selection region is provided on the optical axis of each single lens.
  • In this embodiment, each wavelength selection region is a region that selectively transmits light in a specific wavelength band, and is constituted by a color filter.
  • 102a and 102b are color filters that mainly transmit light of the green wavelength.
  • 102c is a color filter that mainly transmits light of the red wavelength.
  • 102d is a color filter that mainly transmits light of the blue wavelength. Note that the color filter 102b is disposed between the single lens 101b and the image sensor 103; it is shown by a broken line for convenience of illustration.
  • Reference numeral 103 denotes an image sensor realized by a CMOS, a CCD, or the like.
  • the image sensor 103 is composed of a single image sensor, but may be composed of a plurality of image sensors.
  • the image capturing unit 100 outputs image data of a subject image captured by the single lenses 101a, 101b, 101c, 101d, the color filters 102a, 102b, 102c, 102d, and the image sensor 103 to the three-dimensional coordinate calculation unit 110.
  • The three-dimensional coordinate calculation unit 110 includes three-dimensional coordinate calculation means and three-dimensional coordinate color information calculation means. As described in detail later, the three-dimensional coordinate calculation means calculates the three-dimensional coordinates of the subject based on the image data from the imaging unit 100, and the three-dimensional coordinate color information calculation means calculates the color information at each of the three-dimensional coordinates of the subject obtained by the three-dimensional coordinate calculation means.
  • a viewpoint image output unit 111 outputs a viewpoint image when the subject is viewed from an arbitrary predetermined viewpoint based on the three-dimensional coordinates of the subject calculated by the three-dimensional coordinate calculation unit 110.
  • The viewpoint image output unit 111 includes arbitrary viewpoint image output means and can simultaneously output viewpoint images seen from a plurality of viewpoints, that is, a plurality of viewpoint images with different viewpoints.
  • 112 is a display, which displays a stereoscopic image from the plurality of viewpoint images with different viewpoints created by the viewpoint image output unit 111.
  • FIG. 2 is a diagram explaining the principle of calculating the three-dimensional coordinates of one point P of the subject when photographing the subject (the star).
  • The principal points A, B, C, and D of the single lenses 101a, 101b, 101c, and 101d lie in the same plane, and the imaging surface of the image sensor 103 is arranged parallel to that plane.
  • On the image sensor 103, the imaging region 103a is configured with reference to the optical axis of the single lens 101a, the imaging region 103b with reference to the optical axis of the single lens 101b, the imaging region 103c with reference to the optical axis of the single lens 101c, and the imaging region 103d with reference to the optical axis of the single lens 101d.
  • FIG. 3 is a configuration diagram of the image sensor 103 viewed from the single lens side.
  • In each of the imaging regions 103a, 103b, 103c, and 103d, a two-dimensional coordinate system is set with the origin (0, 0) at the lower left of the region.
  • In the imaging region 103a, the horizontal line positive to the right is the xa axis and the vertical line positive upward is the ya axis; in 103b they are the xb and yb axes; in 103c the xc and yc axes; and in 103d the xd and yd axes.
  • 104a, 104b, 104c, and 104d are the points where the optical axes of the single lenses 101a, 101b, 101c, and 101d intersect the imaging regions 103a, 103b, 103c, and 103d, respectively.
  • The coordinates of the optical-axis points 104a, 104b, 104c, and 104d are configured to be the same in their respective imaging regions 103a, 103b, 103c, and 103d.
  • A three-dimensional coordinate system with the principal point A of the single lens 101a in FIG. 2 as the origin (0, 0, 0) is set as shown in FIG. 2: the X axis is parallel to the xa axis of the imaging region 103a but opposite in sign, the Y axis is parallel to the ya axis of the imaging region 103a with the same sign, and the Z axis is parallel to the optical axis of the single lens 101a, positive toward the subject.
  • The three-dimensional coordinate calculation unit 110 in FIG. 1 calculates the coordinates of the subject in these three-dimensional coordinates.
  • Because Zp is finite, the image of one point P (Xp, Yp, Zp) of the subject is affected by parallax and is formed at different coordinates pa, pb, pc, and pd in the respective imaging regions.
  • The three-dimensional coordinates of the subject are calculated by comparing the subject images in the imaging regions where light of the same color forms an image, that is, here, the imaging regions 103a and 103b for green light.
  • The positions of the single lenses 101a and 101b differ only in the X coordinate of the three-dimensional coordinates. Therefore, given the image coordinates pa (xpa, ypa) corresponding to the point P of the subject in the imaging region 103a, the corresponding image coordinates pb (xpb, ypb) in the imaging region 103b satisfy ypb = ypa, and finding pb reduces to finding xpb, which is obtained once the shift amount S is known.
  • The shift amount S is obtained by comparing the subject images of the imaging region 103a and the imaging region 103b (for example, by area-based matching).
  • ⁇ at one point P (Xp, Yp, ⁇ ) of the subject is obtained by the following equation (2) using the relationship between the subject distance and the parallax in equation (1). be able to.
  • S is parallax, that is, the amount of deviation of the subject image
  • tab is the distance between the optical axes of the single lenses 101a and 101b
  • F is the imaging distance, that is, the principal point A of the single lens 101a in FIG.
  • FIG. 4 is a view of the subject and the imaging region 103a of the imaging unit 100 as seen from the positive Y-axis side of the three-dimensional coordinates in FIG. 2. In the geometry of FIG. 4, similar triangles give Equation (3), which relates xpa − x0a to Xp through the ratio f / Zp, so the value of Xp can be obtained.
  • FIG. 5 is a view of the subject and the imaging region 103a of the imaging unit 100 as seen from the positive X-axis side of the three-dimensional coordinates in FIG. 2. In the geometry of FIG. 5, similar triangles likewise give Equation (5), which relates ypa − y0a to Yp through the ratio f / Zp, so the value of Yp can be obtained.
  • Here, the distance tab between the optical axes is known by design and the shift amount S is measured, so Zp is known from Equation (2); x0a and y0a, the coordinates of the optical-axis point 104a, are known, and xpa and ypa are the coordinates of the selected image position. Hence xpa − x0a and ypa − y0a can be calculated, and the values of Xp and Yp can be obtained.
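  • The following sketch puts Equations (2), (3), and (5) together as a single triangulation step. Since those equations are not reproduced in this text, the pinhole relations below are reconstructed from the similar-triangle arguments above, and the signs are assumptions that depend on the axis conventions of FIG. 2.

```python
def triangulate_point(xpa, ypa, x0a, y0a, s, f, tab):
    """Recover P = (Xp, Yp, Zp) from its image pa(xpa, ypa) in imaging
    region 103a and the parallax s measured against region 103b.

    f   -- imaging distance (principal point A to the imaging surface)
    tab -- distance between the optical axes of lenses 101a and 101b
    All pixel coordinates are assumed converted to physical units.
    """
    zp = f * tab / s           # Equation (2): invert S = f * tab / Zp
    xp = zp * (xpa - x0a) / f  # Equation (3): similar triangles in Fig. 4
    yp = zp * (ypa - y0a) / f  # Equation (5): similar triangles in Fig. 5
    return xp, yp, zp
```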
  • the three-dimensional coordinate color information calculation means calculates color information at each three-dimensional coordinate of the subject obtained by the three-dimensional coordinate calculation means.
  • As described above, the three-dimensional coordinates of the subject, that is, its three-dimensional shape, can be obtained using the subject images of green light. Based on these coordinates, the coordinates of the subject images of red and blue light formed in the imaging regions 103c and 103d can also be calculated, and from the image information at the coordinates of these colors the colors of the subject are synthesized.
  • FIG. 6 is a view of the subject and the imaging region 103c of the imaging unit 100 as seen from the positive Y-axis side of the three-dimensional coordinates in FIG. 2. In the geometry of FIG. 6, Equation (7) can be derived, and the value of xpc can be obtained.
  • FIG. 7 is a view of the subject and the imaging region 103c of the imaging unit 100 as seen from the positive X-axis side of the three-dimensional coordinates in FIG. 2. In the geometry of FIG. 7, Equation (8) can be derived, and the value of ypc can be obtained.
  • Here, tac is the Y-axis component of the distance between the optical axes of the single lenses 101a and 101c. Thus, in the imaging region 103c, the red component of one point P of the subject is imaged at pc (xpc, ypc) obtained by Equations (7) and (8).
  • FIG. 8 is a view of the subject and the imaging region 103d of the imaging unit 100 as seen from the positive Y-axis side of the three-dimensional coordinates in FIG. 2. In the geometry of FIG. 8, Equation (9) can be derived; from Equation (9), the xpd of the coordinates pd (xpd, ypd) at which the blue component of P is imaged in the imaging region 103d can be obtained. Here, tadx is the X-axis component of the distance between the optical axes of the single lenses 101a and 101d.
  • FIG. 9 is a view of the subject and the imaging region 103d of the imaging unit 100 as seen from the positive Y-axis side of the three-dimensional coordinates in FIG. 2. In the geometry of FIG. 9, Equation (10) can be derived; from Equation (10), the ypd of the coordinates pd (xpd, ypd) can be obtained. Here, tady is the Y-axis component of the distance between the optical axes of the single lenses 101a and 101d.
  • Thus, the blue component of the point P of the subject is imaged at pd (xpd, ypd) obtained by Equations (9) and (10).
  • Since the coordinates of the images in the green, red, and blue imaging regions corresponding to each point of the subject can be obtained in this way, the color of each point of the subject is obtained by combining the image information at these coordinates.
  • By performing the same calculation for every point, the overall color of the subject can be obtained. That is, in addition to the three-dimensional shape of the entire subject obtained by calculation in the three-dimensional coordinate calculation means, the color of the entire subject can be obtained by calculation in the three-dimensional coordinate color information calculation means.
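  • The color-composition step can be sketched as follows: the recovered point P is projected into the red and blue imaging regions using the known optical-axis offsets, and an RGB value is assembled from the three regions. The offset terms stand in for Equations (7)-(10), which are not reproduced in this text, so the signs and helper names are illustrative assumptions.

```python
def project_to_region(p, f, x0, y0, tx=0.0, ty=0.0):
    """Image coordinates of P = (Xp, Yp, Zp) in a region whose lens is
    offset by (tx, ty) from lens 101a (tac, tadx, tady in the text)."""
    xp, yp, zp = p
    return (x0 + f * (xp - tx) / zp, y0 + f * (yp - ty) / zp)

def sample_color(p, regions, f):
    """regions maps a color name to (image, x0, y0, tx, ty).
    Returns the (r, g, b) value of the subject at point P."""
    rgb = []
    for name in ("red", "green", "blue"):
        img, x0, y0, tx, ty = regions[name]
        x, y = project_to_region(p, f, x0, y0, tx, ty)
        rgb.append(img[int(round(y)), int(round(x))])  # nearest pixel
    return tuple(rgb)
```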
  • In this way, the three-dimensional shape and color of a subject can be obtained with a thin device.
  • If, instead, the three-dimensional shape and color were acquired with multiple ordinary imaging systems, each of which stacks several lenses, the color would be acquired from a single imaging system, and such a configuration makes the apparatus large.
  • In the present embodiment, the colors corresponding to the three-dimensional shape are extracted from separate red, blue, and green optical systems whose lenses are not stacked on one another, which is advantageous for thinning.
  • The information obtained by the three-dimensional coordinate calculation unit 110 may be output to the viewpoint image output unit 111 of FIG. 1 to obtain a stereoscopic image, as described in Embodiment 2 below, or it may be output directly to the display 112.
  • When an image coordinate does not fall exactly on a pixel, the subject image may be calculated by interpolation or extrapolation from adjacent pixels.
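  • As an illustration of such interpolation, the sketch below samples an image at a fractional coordinate using bilinear interpolation; the text does not specify the interpolation method, so this choice is an assumption.

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Sample img, shaped (H, W) or (H, W, C), at a fractional (x, y)."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, img.shape[1] - 1)
    y1 = min(y0 + 1, img.shape[0] - 1)
    wx, wy = x - x0, y - y0
    top = (1 - wx) * img[y0, x0] + wx * img[y0, x1]
    bottom = (1 - wx) * img[y1, x0] + wx * img[y1, x1]
    return (1 - wy) * top + wy * bottom
```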
  • In this embodiment, optical systems corresponding to the three colors red, blue, and green are used for color acquisition.
  • However, the complementary colors cyan, magenta, and yellow may be added, or color may be reproduced by other methods.
  • In such cases, an optical system (lens), a corresponding wavelength selection region (color filter), and an imaging region are added for each added color.
  • the second embodiment of the present invention will be described below.
  • the present embodiment is premised on the first embodiment.
  • The three-dimensional coordinate calculation unit 110 in FIG. 1 calculates the three-dimensional shape of the subject, that is, its three-dimensional coordinates, and the color of the subject through the processing described in Embodiment 1, and outputs them to the viewpoint image output unit 111.
  • The arbitrary viewpoint image output means creates and outputs a subject image at an arbitrary viewpoint based on the three-dimensional coordinates of the subject obtained by the three-dimensional coordinate calculation means and on the color information at each three-dimensional coordinate obtained by the three-dimensional coordinate color information calculation means.
  • FIG. 10 is a diagram explaining the principle of the viewpoint image calculation method according to the present embodiment.
  • The principal point of a virtual lens is placed at R1 (vx1, vy1, vz1), and the optical axis of the virtual lens is set parallel to the Z axis of the three-dimensional coordinates. The virtual imaging region 103r1 is arranged at the imaging distance fr on a plane perpendicular to this optical axis.
  • On the virtual imaging region 103r1, a two-dimensional coordinate system is set with the origin (0, 0) at the lower right of the region, the xr1 axis along the horizontal line positive to the left, and the yr1 axis along the vertical line positive upward.
  • The coordinates 104r1 (x0r1, y0r1) are the two-dimensional coordinates, on the virtual imaging region 103r1, of the intersection with the optical axis of the virtual lens.
  • FIG. 11 is a view of the subject, the virtual lens, and the virtual imaging region 103r1 as seen from the positive Y-axis direction of the three-dimensional coordinates in FIG. 10. In this geometry, Equation (11) can be derived, and xpr1 can be obtained.
  • FIG. 12 is a view of the subject, the virtual lens, and the virtual imaging region 103r1 as seen from the positive X-axis direction of the three-dimensional coordinates in FIG. 10. In this geometry, Equation (12) can be derived, and ypr1 can be obtained.
  • Thus, the imaging point pr1 (xpr1, ypr1), on the virtual imaging region 103r1, of one point P (Xp, Yp, Zp) of the subject can be calculated, and by the same calculation the imaging point on the virtual imaging region 103r1 of every point of the subject can be obtained.
  • The image formed on the virtual imaging region 103r1 is referred to as the viewpoint image r1. Similarly, the image formed on the virtual imaging region 103r2 described later is referred to as the viewpoint image r2, the image formed on the virtual imaging region 103r3 as the viewpoint image r3, and the image formed on the virtual imaging region 103r4 as the viewpoint image r4.
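  • The rendering of a viewpoint image can be sketched as follows: each recovered subject point is projected through the virtual lens at R1 onto the virtual imaging region at imaging distance fr, playing the role of Equations (11) and (12), which are not reproduced in this text. Nearest-pixel point splatting and the sign conventions are illustrative assumptions.

```python
import numpy as np

def render_viewpoint(points, colors, r1, fr, x0r1, y0r1, width, height):
    """points: list of (Xp, Yp, Zp); colors: matching (r, g, b) values."""
    vx1, vy1, vz1 = r1
    image = np.zeros((height, width, 3), dtype=np.uint8)
    for (xp, yp, zp), rgb in zip(points, colors):
        dz = zp - vz1                      # depth along the virtual optical axis
        if dz <= 0:
            continue                       # point is behind the virtual lens
        xr = x0r1 + fr * (xp - vx1) / dz   # role of Equation (11)
        yr = y0r1 + fr * (yp - vy1) / dz   # role of Equation (12)
        col, row = int(round(xr)), int(round(yr))
        if 0 <= row < height and 0 <= col < width:
            image[row, col] = rgb
    return image
```

  • Viewpoint images r2 to r4 then follow by moving the principal point, for example render_viewpoint(points, colors, (vx2, vy1, vz1), ...), mirroring the substitutions that turn Equations (11) and (12) into Equations (13) to (18) below.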
  • Next, consider the case where the principal point of the virtual lens is at R2 (vx2, vy1, vz1), that is, the viewpoint image r2 obtained when R1 is translated in the X-axis direction of the three-dimensional coordinates.
  • The virtual imaging region 103r2 is arranged at the imaging distance fr on a plane perpendicular to the optical axis.
  • On the virtual imaging region 103r2, a two-dimensional coordinate system is set with the origin (0, 0) at the lower right of the region, the xr2 axis along the horizontal line positive to the left, and the yr2 axis along the vertical line positive upward.
  • The coordinates 104r2 (x0r2, y0r2) are the two-dimensional coordinates, on the virtual imaging region 103r2, of the intersection with the optical axis of the virtual lens.
  • The imaging point pr2 (xpr2, ypr2), on the virtual imaging region 103r2, of one point P (Xp, Yp, Zp) of the subject is obtained by Equations (13) and (14).
  • Equation (13) is obtained from Equation (11) by replacing x0r1 with x0r2 and vx1 with vx2, and Equation (14) is obtained from Equation (12) by replacing y0r1 with y0r2.
  • Thus, the imaging point pr2 (xpr2, ypr2) on the virtual imaging region 103r2 of the point P (Xp, Yp, Zp) of the subject can be calculated, and the imaging point on the virtual imaging region 103r2 of every point of the subject can also be obtained.
  • Next, consider the case where the principal point of the virtual lens is at R3 (vx3, vy1, vz1), that is, the viewpoint image r3 obtained when R2 is translated in the X-axis direction of the three-dimensional coordinates.
  • The virtual imaging region 103r3 is arranged at the imaging distance fr on a plane perpendicular to the optical axis.
  • On the virtual imaging region 103r3, a two-dimensional coordinate system is set with the origin (0, 0) at the lower right of the region as seen from the side opposite the virtual lens, the xr3 axis along the horizontal line positive to the left, and the yr3 axis along the vertical line positive upward.
  • The coordinates 104r3 (x0r3, y0r3) are the two-dimensional coordinates, on the virtual imaging region 103r3, of the intersection with the optical axis of the virtual lens. By the same calculation as for the viewpoint image r2, the imaging point pr3 (xpr3, ypr3), on the virtual imaging region 103r3, of the point P (Xp, Yp, Zp) of the subject is obtained by Equations (15) and (16): Equation (15) is obtained from Equation (11) by replacing x0r1 with x0r3 and vx1 with vx3, and Equation (16) is obtained from Equation (12) by replacing y0r1 with y0r3.
  • Thus, the imaging point pr3 (xpr3, ypr3) on the virtual imaging region 103r3 of the point P (Xp, Yp, Zp) of the subject can be calculated, and the imaging point on the virtual imaging region 103r3 of the entire subject can be obtained by the same calculation.
  • Finally, consider the case where the principal point of the virtual lens is at R4 (vx4, vy1, vz1), that is, the viewpoint image r4 obtained when R3 is translated in the X-axis direction of the three-dimensional coordinates.
  • The virtual imaging region 103r4 is arranged at the imaging distance fr on a plane perpendicular to the optical axis.
  • On the virtual imaging region 103r4, a two-dimensional coordinate system is set with the origin (0, 0) at the lower right of the region, the xr4 axis along the horizontal line positive to the left, and the yr4 axis along the vertical line positive upward.
  • The coordinates 104r4 (x0r4, y0r4) are the two-dimensional coordinates, on the virtual imaging region 103r4, of the intersection with the optical axis of the virtual lens.
  • By the same calculation as for the viewpoint image r3, the imaging point pr4 (xpr4, ypr4), on the virtual imaging region 103r4, of one point P (Xp, Yp, Zp) of the subject is obtained by Equations (17) and (18): Equation (17) is obtained from Equation (11) by replacing x0r1 with x0r4 and vx1 with vx4, and Equation (18) is obtained from Equation (12) by replacing y0r1 with y0r4.
  • Thus, the imaging point pr4 (xpr4, ypr4) on the virtual imaging region 103r4 of the point P (Xp, Yp, Zp) of the subject can be calculated, and the imaging point on the virtual imaging region 103r4 of the entire subject can also be obtained by the same calculation.
  • In this way, the imaging point of the subject in each of the virtual imaging regions 103r1 to 103r4 can be obtained.
  • Color information corresponding to each point of the subject has already been obtained, so the color information of each point of each viewpoint image can use the red, green, and blue color information of the subject obtained by the three-dimensional coordinate color information calculation means of the three-dimensional coordinate calculation unit 110 in FIG. 1. The viewpoint images r1, r2, r3, and r4, which are plural images at arbitrary viewpoints, are thus obtained.
  • In this way, the viewpoint image output unit 111 can obtain an image of the subject from an arbitrary viewpoint. It converts the calculated plural viewpoint images into an image format for stereoscopic display and outputs the image data to the display 112 in FIG. 1.
  • The display 112 in FIG. 1 outputs a stereoscopic video based on the image data output from the viewpoint image output unit 111.
  • FIG. 13 shows a stereoscopic display using a lenticular lens.
  • FIG. 13 (a) is a perspective view, and
  • FIG. 13 (b) is a view of the display of FIG. 13 (a) seen from above.
  • In the configuration shown in FIG. 13, a lenticular lens 121 is arranged on the front surface of a two-dimensional display 120 of a common type, such as an LCD, plasma, SED, FED, or projection display.
  • The width Wr of one kamaboko-shaped (semi-cylindrical) lens row of the lenticular lens 121 is almost the same as the width of four pixels of the two-dimensional display 120 (see FIG. 14).
  • The lens width Wr of one row of the lenticular lens 121 is set according to the number of viewpoints of the stereoscopic image.
  • FIG. 14 shows the principal-ray directions of the light emitted by the four sets of pixels of the two-dimensional display that correspond to one lens row of the lenticular lens in FIG. 13 (b).
  • Light travels from pixel 125 in the direction of chief ray 129.
  • Light travels from the pixel 126 in the direction of the principal ray 130.
  • Light travels from the pixel 127 in the direction of the principal ray 131.
  • Light travels from the pixel 128 in the direction of the principal ray 132. Because the light from the four sets of pixels travels in different directions in this way, the right eye and the left eye of a viewer, whose eyes are separated by roughly 6.5 cm, receive images from different viewpoints, and a stereoscopic image can therefore be displayed.
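  • As a concrete illustration of this pixel arrangement, the sketch below interleaves four viewpoint images column by column so that, under each lens row, four adjacent display columns carry one column from each viewpoint. The left-to-right viewpoint ordering is an assumption for illustration.

```python
import numpy as np

def interleave_viewpoints(views):
    """views: list of 4 images, each shaped (H, W, 3).
    Returns a display image shaped (H, 4 * W, 3) in which column i under
    every lenticular lens row comes from viewpoint image i."""
    h, w, _ = views[0].shape
    out = np.zeros((h, w * len(views), 3), dtype=views[0].dtype)
    for i, view in enumerate(views):
        out[:, i::len(views), :] = view
    return out
```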
  • The more viewpoint images a stereoscopic image is based on, the wider the viewing angle over which the video can be seen, and the more natural the stereoscopic video becomes. Therefore, the more viewpoint images are created based on the three-dimensional coordinates of the subject obtained by the three-dimensional coordinate calculation unit 110, the wider the viewing angle of the natural stereoscopic video that can be displayed. For example, while the example above created four viewpoint images, the same number as the optical systems, it is also possible to create five or more viewpoint images, more than the number of optical systems.
  • FIG. 15 shows two different viewpoint images.
  • FIG. 15 (a) shows the viewpoint image r1 of FIG. 10, and
  • FIG. 15 (b) shows the viewpoint image r4 of FIG. 10. Since the subject image seen from the lens side is upside down in the imaging region, the yr1 and yr4 axes in FIGS. 15 (a) and 15 (b) are inverted relative to FIG. 10.
  • As shown in FIG. 15 (a), the subject appears at the lower right of the image in the viewpoint image r1, and as shown in FIG. 15 (b), it appears at the lower left of the image in the viewpoint image r4.
  • In the viewpoint images r2 and r3, the subject appears at positions between those of FIGS. 15 (a) and 15 (b).
  • The display 112 in FIG. 1 receives the image data arranged as described above from the viewpoint image output unit 111 and displays a natural stereoscopic video based on it.
  • As described above, according to the present invention, a three-dimensional imaging apparatus can be realized that is thin, can obtain the three-dimensional shape of a subject, and outputs image data for natural stereoscopic video.
  • This embodiment may also be configured to include coordinate conversion means. Specifically, the three-dimensional coordinates of the subject calculated by the three-dimensional coordinate calculation unit 110 in FIG. 1 are transformed by an affine transformation before the viewpoint images are obtained, so that a more realistic stereoscopic video can be realized. For example, as shown in FIG. 16, part or all of the subject is translated by a predetermined distance mz in the negative Z direction of the three-dimensional coordinates to P', which makes Zp smaller and therefore increases the parallax of the subject (see Equation (1)).
  • As a result, the subject image in the viewpoint image r1 of FIG. 15 (a) moves toward the right side of the figure (the positive xr1 direction), and the subject image in the viewpoint image r4 of FIG. 15 (b) moves toward the left side of the figure (the negative xr4 direction).
  • This embodiment may also be configured to include shape conversion means. Specifically, part of the three-dimensional shape of the subject calculated by the three-dimensional coordinate calculation unit 110 in FIG. 1 is enlarged as shown in FIG. 17, which reduces the occlusion that arises in each viewpoint image, so that a more natural stereoscopic video can be obtained.
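  • Both conversions are simple affine operations on the recovered point set; the sketch below illustrates them under the assumption that the points are stored as an (N, 3) array and that the part to transform is given by a boolean mask. The mask and scale factor are illustrative assumptions.

```python
import numpy as np

def translate_z(points, mz, mask=None):
    """Coordinate conversion of Fig. 16: shift the masked (or all) points
    by -mz along Z, so Zp shrinks and parallax grows per Equation (1)."""
    p = points.copy()
    sel = slice(None) if mask is None else mask
    p[sel, 2] -= mz
    return p

def scale_part(points, mask, factor):
    """Shape conversion of Fig. 17: enlarge the masked part of the shape
    about its own centroid to reduce occlusion in the viewpoint images."""
    p = points.copy()
    centroid = p[mask].mean(axis=0)
    p[mask] = centroid + factor * (p[mask] - centroid)
    return p
```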
  • The coordinate conversion means and the shape conversion means may be included in the viewpoint image output unit 111 of FIG. 1.
  • When the display 112 in FIG. 1 is a parallax-barrier stereoscopic display, slits realized by a liquid crystal panel or the like are disposed in place of the lenticular lens 121 in FIG. 13 (a).
  • Needless to say, even with other stereoscopic display methods, a stereoscopic image can be output based on the three-dimensional coordinates of the subject created by the three-dimensional coordinate calculation unit 110 in FIG. 1.
  • Needless to say, since the viewpoint image output unit 111 in FIG. 1 can output a subject image from an arbitrary viewpoint, the display 112 may also be a general two-dimensional display, outputting free-viewpoint images used, for example, for in-vehicle surroundings monitoring.
  • The lenses 101a, 101b, 101c, and 101d in FIG. 1 may be molded individually instead of integrally.
  • The color arrangement of the lenses 101a, 101b, 101c, and 101d is not limited to that of FIG. 1. As shown in FIG. 18 (a), the green lenses may be placed on one diagonal and the red and blue lenses on the other, or, as shown in FIG. 18 (b), the four lenses may be arranged on the same line; the arrangement is arbitrary.
  • In that case, the principal-point positions used in the geometric calculations of FIG. 2 of the present embodiment may be set as appropriate.
  • The color filters are likewise arranged appropriately according to the colors.
  • As shown in FIG. 19 (a) and FIG. 19 (b), there may be three or more green lenses; the three-dimensional coordinates can be obtained using at least two of them.
  • The selection of two lenses from among the multiple lenses may be changed according to the subject distance and subject position, and
  • the lenses may be arranged arbitrarily.
  • In the present embodiment, a plurality of green lenses are arranged and the three-dimensional shape of the subject is calculated from their images; needless to say, however, a plurality of lenses of another color may be arranged instead and used to calculate the three-dimensional shape of the subject.
  • The third embodiment relates to an integrated circuit.
  • The three-dimensional coordinate calculation unit 110 and the viewpoint image output unit 111 in FIG. 1 are typically realized as an LSI, which is a form of integrated circuit. They may be made into individual chips, or into a single chip that includes some or all of them. Depending on the degree of integration, such a circuit may be called an IC, system LSI, super LSI, or ultra LSI.
  • The method of circuit integration is not limited to LSI; it may be realized with a dedicated circuit or a general-purpose processor.
  • An FPGA (field programmable gate array) that can be programmed after LSI manufacture may also be used.
  • As described above, the imaging apparatus of the present invention is thin, can obtain the three-dimensional shape of a subject, and can output image data for free-viewpoint video and stereoscopic video of the subject; it is therefore useful as a three-dimensional imaging device.
  • the integrated circuit of the present invention is useful as an integrated circuit used in a three-dimensional imaging device.

Abstract

An imaging device is provided with: three-dimensional coordinate calculating means (110) for calculating the three-dimensional coordinates of an object based on the image information output from the imaging regions (103a, 103b) corresponding to the first wavelength selecting regions (102a, 102b) and on the positional relationship between the optical systems (101a, 101b) corresponding to the first wavelength selecting regions (102a, 102b) and the imaging regions (103a, 103b); and three-dimensional coordinate color information calculating means (110) for calculating color information at each coordinate of the three-dimensional coordinates of the object, based on the image information output from the sections corresponding to each coordinate of the three-dimensional coordinates of the object in at least three imaging regions (103a, 103c, 103d) that receive light of different wavelength ranges, respectively.

Description

明 細 書  Specification
撮像装置及び集積回路  Imaging device and integrated circuit
技術分野  Technical field
[0001] 本発明は、被写体の三次元形状及び色を撮影し、それに基づきカラーの任意視点 映像や立体映像を作成する三次元撮像装置及びこれに用いる集積回路に関する。 背景技術  TECHNICAL FIELD [0001] The present invention relates to a three-dimensional imaging apparatus that captures a three-dimensional shape and color of a subject and creates a color arbitrary viewpoint image or a three-dimensional image based thereon, and an integrated circuit used therefor. Background art
[0002] 近年、物体の三次元座標の測定及びその物体のカラー映像表示の要求が高まつ ており、かつその撮像装置の小型化の要求も高まっている。このような撮像装置は、 例えば自動車の視界支援に利用できる。自動車の周囲の状況を様々な視点(自由 視点)から観察することにより、より正確な状況認識が可能となり、安全な運転の実現 が可能となる。  In recent years, there has been an increasing demand for measuring the three-dimensional coordinates of an object and displaying a color image of the object, and there has been an increasing demand for downsizing the imaging device. Such an imaging device can be used, for example, for supporting the visibility of automobiles. By observing the situation around the car from various viewpoints (free viewpoints), it becomes possible to recognize the situation more accurately and to realize safe driving.
[0003] また、自動車の視界支援において、周囲の物体と自動車との距離情報を表示する ことにより、周囲の物体との衝突を回避し、安全な運転の実現が可能となる。また、携 帯電話やテレビなどの映像を入出力する装置では、映像の臨場感をより忠実に再現 するために三次元映像の入出力が望まれている。これらの機能を実現するには、撮 像装置力 入力される物体の三次元座標およびカラー情報が必要となり、かつ撮像 装置の小型化が望まれて ヽる。  [0003] Further, in assisting the visibility of a car, by displaying distance information between a surrounding object and the car, it is possible to avoid collision with the surrounding object and to realize safe driving. Also, in devices that input and output video, such as mobile phones and televisions, 3D video input / output is desired in order to more accurately reproduce the realism of the video. In order to realize these functions, it is necessary to obtain the three-dimensional coordinates and color information of the input object of the imaging device, and to reduce the size of the imaging device.
[0004] 撮像装置の小型化、特に薄型化に有効な技術として、焦点距離が短!、単レンズを 用いる複眼方式の撮像装置が提案されている (例えば特許文献 1参照)。一般的に、 光の波長により屈折率が異なるため焦点距離が異なり、全波長の情報が含まれる景 色を単レンズで撮影面に結像することはできない。そのため、通常の撮像装置の光 学系は、赤、緑、青の波長の光を同一の撮像面に結像するため、複数のレンズを重 ねた構成となっている。すなわち、必然的に撮像装置の光学長が長くなり、厚くなる。  [0004] As an effective technique for downsizing, in particular, thinning of an imaging apparatus, a compound eye type imaging apparatus using a single lens with a short focal length has been proposed (for example, see Patent Document 1). In general, since the refractive index differs depending on the wavelength of light, the focal length is different, and it is impossible to image a scene color including information on all wavelengths on a photographing surface with a single lens. For this reason, the optical system of a normal image pickup apparatus has a configuration in which a plurality of lenses are overlapped in order to image light of red, green, and blue wavelengths on the same image pickup surface. In other words, the optical length of the imaging apparatus is inevitably increased and thickened.
[0005] 一方、複眼方式のカラー画像撮像装置は、撮像光学系を赤色の波長の光を受け 持つレンズと緑色の波長の光を受け持つレンズと青色の波長の光を受け持つレンズ を平面内に並べた構成にし、それぞれのレンズに対して、撮像素子の撮像面を設け るものである。各レンズが受け持つ光の波長が限定されるため、各レンズの焦点距離 を同一にすることにより、単レンズにより被写体像を撮像面に結像することが可能とな り、撮像装置の厚さを大幅に小さく出来る。 [0005] On the other hand, in a compound eye type color image capturing apparatus, an imaging optical system includes a lens that receives light of red wavelength, a lens that receives light of green wavelength, and a lens that receives light of blue wavelength arranged in a plane. The imaging surface of the imaging device is provided for each lens. Each lens has a limited wavelength of light, so the focal length of each lens By making the same, it becomes possible to form a subject image on the imaging surface with a single lens, and the thickness of the imaging device can be greatly reduced.
[0006] 図 20に従来の複眼方式の撮像装置の一例に係る斜視図を示す。 500はレンズァ レイであり、 4つのレンズ 501a、 501b, 501c, 501d力 ^一体に成型されて!ヽる。 501a 及び 50 lbは、緑色の波長の光を受け持つレンズであり、結像した被写体像を緑色の 波長分離フィルタ (カラーフィルタ)を受光部に貼り付けた撮像領域 502a及び 502b で画像情報に変換する。  FIG. 20 shows a perspective view of an example of a conventional compound eye type imaging apparatus. 500 is a lens array, and the four lenses 501a, 501b, 501c, and 501d forces are molded together! 501a and 50 lb are lenses that handle light of green wavelength, and convert the formed subject image into image information in the imaging areas 502a and 502b in which a green wavelength separation filter (color filter) is pasted on the light receiving unit. .
[0007] 同様に 501cは、赤色の波長の光を受け持つレンズであり、撮像領域 502cで赤色 の画像情報に変換され、 501dは青色の波長の光に対応するレンズであり、撮像領 域 502dで青色の画像情報に変換する。これらの画像を重ね合わせて合成すること により、カラー画像を取得することができる。なお、レンズは 4個に限定する必要はな い。  [0007] Similarly, 501c is a lens that handles light of red wavelength, and is converted into red image information in the imaging region 502c. 501d is a lens that corresponds to light of blue wavelength, and in the imaging region 502d. Convert to blue image information. A color image can be obtained by superimposing and synthesizing these images. The number of lenses need not be limited to four.
[0008] し力しながら、前記のような従来の複眼方式のカラー画像撮像装置には、以下のよ うな問題があった。図 21は、従来の複眼方式の撮像装置の基本構成を示す図であ る。本図は、光軸が異なる 2つのレンズで結像した被写体像と、光軸及び撮像素子と の位置関係を示している。図 21 (a)は、レンズの断面側から見た図であり、図 21 (b) は、撮像領域の平面図である。  [0008] However, the conventional compound-eye color image capturing apparatus as described above has the following problems. FIG. 21 is a diagram showing a basic configuration of a conventional compound-eye imaging device. This figure shows the subject image formed by two lenses with different optical axes, and the positional relationship between the optical axis and the image sensor. FIG. 21 (a) is a diagram viewed from the cross-sectional side of the lens, and FIG. 21 (b) is a plan view of the imaging region.
[0009] 600a, 600bはレンズ 601a、 601bの光軸であり、 602a、 602bは各光軸力撮像領 域 603と交わる位置である。 605a、 605bは、光軸 600a上にある被写体 604力 Sレン ズ 601a、 601bを通して結像したものである。  [0009] Reference numerals 600a and 600b denote optical axes of the lenses 601a and 601b, and reference numerals 602a and 602b denote positions intersecting with the respective optical axial force imaging areas 603. 605a and 605b are images formed through the subject 604 force S lens 601a and 601b on the optical axis 600a.
[0010] 複眼方式の撮像装置においては、レンズ同士の光軸が異なるため、被写体距離に 応じて、被写体像の位置が撮像素子上で、レンズの光軸同士を結ぶ方向(図中の B 方向)に移動する。この現象は、視差と呼ばれる。レンズ光軸 600bと撮像領域の交 わる位置 602bからの被写体像 605bの画素ずれ量 Sは、被写体距離を Zp、レンズの 光軸間の距離を t、結像距離を fとすると、下記の式(1)で表される。  [0010] In a compound-eye imaging device, the optical axes of the lenses are different from each other. Therefore, the position of the subject image is connected to the optical axes of the lenses on the imaging device according to the subject distance (direction B in the figure). ) This phenomenon is called parallax. The pixel displacement S of the subject image 605b from the position 602b where the lens optical axis 600b intersects the imaging area is expressed by the following equation, where Zp is the subject distance, t is the distance between the optical axes of the lenses, and f is the imaging distance. It is represented by (1).
[0011] 式(1) S =f -t/Zp  [0011] Equation (1) S = f -t / Zp
式(1)から分るように、レンズの光軸を結ぶ方向の被写体像と画素の位置関係は、 被写体の距離 Zpにより変化する。したがって、特許文献 1のようにエリアベースマッチ ング法などを用いて、被写体距離によって変化する光軸を結ぶ方向の被写体像のず れ量を検出して画像ずらし量を調整する。図 21の例では、図 21 (a)の 601aは図 20 の緑色向けに設計されたレンズ 501aに対応し、 601bは図 20の 501bに対応する。 赤、青の画像ずらし量は、式(1)で求めた緑の撮像領域のずらし量から、既知である レンズ間距離、焦点距離に基づき算出することができる。 As can be seen from Equation (1), the positional relationship between the subject image and the pixel in the direction connecting the optical axes of the lenses varies depending on the subject distance Zp. Therefore, area-based match as in Patent Document 1 The image shift amount is adjusted by detecting the shift amount of the subject image in the direction connecting the optical axes that changes depending on the subject distance. In the example of FIG. 21, 601a in FIG. 21 (a) corresponds to the lens 501a designed for green in FIG. 20, and 601b corresponds to 501b in FIG. The red and blue image shift amounts can be calculated based on the known inter-lens distance and focal length from the shift amount of the green imaging region obtained by the equation (1).
[0012] 図 20の構成では、視差量導出手段 503が、前記のように赤、緑、青の視差量を算 出し、画像合成手段 504が緑色の撮像領域と赤色の撮像領域と青色の撮像領域の 被写体像を合成し精鋭度の高 ヽカラー画像を作成する。  In the configuration of FIG. 20, the parallax amount deriving unit 503 calculates red, green, and blue parallax amounts as described above, and the image synthesizing unit 504 has a green imaging region, a red imaging region, and a blue imaging unit. Creates a high-definition color image by synthesizing the subject image of the area.
[0013] この構成は複眼方式を用いて 、るので薄型化は可能になる。また、画素ずれ量 S 力 被写体距離 Zpを求めることもできる。し力しながら、二次元画像の作成を前提と して 、るため、被写体のカラーの三次元形状及び色を推定することができな 、という 問題があった。  [0013] Since this configuration uses a compound eye system, the thickness can be reduced. Also, the pixel shift amount S force and the subject distance Zp can be obtained. However, since it is premised on the creation of a two-dimensional image, there is a problem that the three-dimensional shape and color of the color of the subject cannot be estimated.
特許文献 1:特開 2002— 204462号公報  Patent Document 1: Japanese Patent Laid-Open No. 2002-204462
発明の開示  Disclosure of the invention
[0014] 本発明は前記のような従来の問題を解決するものであり、薄型で、被写体の三次元 形状及び色を求めることができる撮像装置及びこれに用いる集積回路を提供するこ とを目的とする。  The present invention solves the above-described conventional problems, and an object thereof is to provide an imaging device that is thin and capable of obtaining the three-dimensional shape and color of a subject and an integrated circuit used therefor. And
[0015] 前記目的を達成するために本発明の撮像装置は、複数の光学系と、被写体からの 光のうち特定の波長領域の光を選択的に透過させる複数の波長選択領域と、入力し た光に応じた画像情報を出力する複数の撮像領域とを備え、前記複数の光学系の 各光軸上に、前記波長選択領域と前記撮像領域とが一対一に対応して配置された 撮像装置であって、前記複数の波長選択領域は、同一波長領域の光を選択的に透 過させる第 1波長選択領域と、前記第 1波長選択領域とは異なる波長領域の光を選 択的に透過させる第 2波長選択領域とを含み、前記第 1波長選択領域は、 2以上の 波長選択領域を含み、前記第 2波長選択領域は、それぞれ異なる波長領域の光を 選択的に透過させる 2以上の波長選択領域を含んでおり、前記第 1波長選択領域に 対応する少なくとも 2つの前記各撮像領域が出力する画像情報と、前記第 1波長選 択領域に対応する前記各光学系と前記各撮像領域との位置関係とに基づいて前記 被写体の三次元座標を算出する三次元座標算出手段と、それぞれ異なる波長領域 の光を受光する少なくとも 3つの前記各撮像領域における前記被写体の三次元座標 の各座標に対応した部分が出力する画像情報に基づいて、前記被写体の三次元座 標の各座標における色情報を算出する三次元座標色情報算出手段とを備えたことを 特徴とする。 In order to achieve the above object, an imaging apparatus of the present invention inputs a plurality of optical systems and a plurality of wavelength selection regions that selectively transmit light in a specific wavelength region among light from a subject. A plurality of imaging regions that output image information according to the light, and the wavelength selection region and the imaging region are arranged in a one-to-one correspondence on each optical axis of the plurality of optical systems. In the apparatus, the plurality of wavelength selection regions selectively select a first wavelength selection region that selectively transmits light in the same wavelength region, and light in a wavelength region different from the first wavelength selection region. A second wavelength selection region to be transmitted, wherein the first wavelength selection region includes two or more wavelength selection regions, and the second wavelength selection region selectively transmits light in different wavelength regions. The wavelength selection region is included and corresponds to the first wavelength selection region. And image information at least two of the respective imaging regions is outputted, based on the positional relationship between the each optical system and the respective imaging areas corresponding to the first wavelength selection 択領 region said that Image information output by the three-dimensional coordinate calculating means for calculating the three-dimensional coordinates of the subject and the portions corresponding to the coordinates of the three-dimensional coordinates of the subject in at least three of the imaging regions that receive light in different wavelength regions, respectively. 3D coordinate color information calculating means for calculating color information at each coordinate of the three-dimensional coordinate of the subject.
[0016] 本発明の集積回路は、被写体からの光を受光する撮像領域が出力する画像情報 に基づいて演算をする集積回路であって、前記撮像領域は、第 1の波長の光を受光 する第 1の撮像領域と、前記第 1の波長の光とは異なる波長領域の光を受光する第 2 の撮像領域とを含み、前記第 1の撮像領域は、 2以上の撮像領域を含み、前記第 2 の撮像領域は、それぞれ異なる波長領域の光を選択的に受光する 2以上の撮像領 域を含んでおり、前記集積回路は、前記第 1の撮像領域のうち、少なくとも 2つの前記 各撮像領域が出力する画像情報に基づいて、前記被写体の三次元座標を算出する 三次元座標算出手段と、それぞれ異なる波長領域の光を受光する少なくとも 3つの 前記各撮像領域における前記被写体の三次元座標の各座標に対応した部分が出 力する画像情報とに基づいて、前記被写体の三次元座標の各座標における色情報 を算出する三次元座標色情報算出手段とを備えたことを特徴とする。  The integrated circuit of the present invention is an integrated circuit that performs an operation based on image information output from an imaging region that receives light from a subject, and the imaging region receives light of a first wavelength. Including a first imaging region and a second imaging region that receives light in a wavelength region different from the light of the first wavelength, wherein the first imaging region includes two or more imaging regions, The second imaging region includes two or more imaging regions that selectively receive light in different wavelength regions, and the integrated circuit includes at least two of the first imaging regions. 3D coordinate calculation means for calculating 3D coordinates of the subject based on image information output from the area, and at least three of the 3D coordinates of the subject in each of the imaging areas that receive light in different wavelength ranges The part corresponding to each coordinate is output. That based on the image information, characterized by comprising a three-dimensional coordinate color information calculating means for calculating color information at each coordinate of the three-dimensional coordinates of the object.
Brief Description of Drawings
[0017] [FIG. 1] Configuration diagram of a three-dimensional imaging device and a display according to an embodiment of the present invention.
[FIG. 2] Diagram illustrating the principle of the three-dimensional coordinate calculation method of the three-dimensional imaging device according to the embodiment of the present invention.
[FIG. 3] Diagram showing the imaging regions of the three-dimensional imaging device according to the embodiment of the present invention.
[FIG. 4] View of the subject and the imaging unit 100 in FIG. 2 from the positive Y-axis direction of the three-dimensional coordinates.
[FIG. 5] View of the subject and the imaging unit 100 in FIG. 2 from the positive X-axis direction of the three-dimensional coordinates.
[FIG. 6] View of the subject and the imaging unit 100 in FIG. 2 from the positive Y-axis direction of the three-dimensional coordinates.
[FIG. 7] View of the subject and the imaging unit 100 in FIG. 2 from the positive X-axis direction of the three-dimensional coordinates.
[FIG. 8] View of the subject and the imaging unit 100 in FIG. 2 from the positive Y-axis direction of the three-dimensional coordinates.
[FIG. 9] View of the subject and the imaging unit 100 in FIG. 2 from the positive X-axis direction of the three-dimensional coordinates.
[FIG. 10] Diagram illustrating the principle of the viewpoint image calculation method of the three-dimensional imaging device according to the embodiment of the invention.
[FIG. 11] View of the subject and the virtual lens in FIG. 10 from the positive Y-axis direction of the three-dimensional coordinates.
[FIG. 12] View of the subject and the virtual lens in FIG. 10 from the positive X-axis direction of the three-dimensional coordinates.
[FIG. 13] Diagrams showing a display according to an embodiment of the present invention; (a) is a perspective view, and (b) is a view of the display in (a) from above.
[FIG. 14] Diagram explaining the stereoscopic image display principle of the stereoscopic display shown in FIG. 13.
[FIG. 15] Viewpoint images according to an embodiment of the present invention; (a) shows one viewpoint image, and (b) shows another viewpoint image.
[FIG. 16] Explanatory diagram of coordinate conversion according to an embodiment of the present invention.
[FIG. 17] Explanatory diagram of shape conversion according to the embodiment of the present invention.
[FIG. 18] Diagram showing an example of a lens arrangement according to an embodiment of the present invention.
[FIG. 19] Diagram showing an example of a lens arrangement according to an embodiment of the present invention.
[FIG. 20] Schematic configuration diagram of an example of a conventional compound-eye imaging device.
[FIG. 21] Diagram illustrating the principle of an example of a parallax calculation method of a conventional compound-eye imaging device.
BEST MODE FOR CARRYING OUT THE INVENTION
[0018] According to the present invention, a thin imaging device can obtain the three-dimensional shape of a subject.
[0019] In the imaging device of the present invention, the portion corresponding to each of the three-dimensional coordinates of the subject in each imaging region corresponding to the second wavelength selection region is preferably calculated based on the calculated three-dimensional coordinates of the subject and on the positional relationship between each optical system and each imaging region corresponding to the second wavelength selection region.
[0020] It is also preferable to further include coordinate conversion means for applying a coordinate transformation to part or all of the three-dimensional coordinates of the subject. With this configuration, image data for free-viewpoint video and stereoscopic video of the subject with a greater sense of presence can be output.
[0021] It is also preferable to further include shape conversion means for enlarging part or all of the three-dimensional shape based on the three-dimensional coordinates of the subject. With this configuration, more natural image data for free-viewpoint video and stereoscopic video of the subject can be output.
[0022] It is also preferable to further include arbitrary viewpoint image output means for outputting an image of the subject from an arbitrary viewpoint based on the three-dimensional coordinates of the subject and the color information at each of those coordinates. With this configuration, image data for free-viewpoint video and stereoscopic video of the subject can be output.
[0023] It is also preferable to further include arbitrary viewpoint creation means for creating, by interpolation, an image of the subject from an arbitrary viewpoint based on the three-dimensional coordinates of the subject, the arbitrary viewpoint image output means outputting an image at an arbitrary viewpoint based on the subject image created by the interpolation. With this configuration, more natural image data for free-viewpoint video and stereoscopic video of the subject can be output.
[0024] It is also preferable that the arbitrary viewpoint image output means output images at arbitrary viewpoints for stereoscopic video output.
[0025] It is also preferable that the arbitrary viewpoint image output means output images at more viewpoints than the number of optical systems. With this configuration, image data for more natural stereoscopic video of the subject can be output.
[0026] An embodiment of the present invention will now be described with reference to the drawings.
[0027] (Embodiment 1)
First, an overview of the three-dimensional imaging device according to the present embodiment is described with reference to FIG. 1, which shows a configuration diagram of the three-dimensional imaging device and a display according to the present embodiment. The present embodiment includes a plurality of optical systems that form subject images. Specifically, 101a and 101b are single lenses designed mainly for capturing light of green wavelengths, 101c is a single lens designed mainly for capturing light of red wavelengths, and 101d is a single lens designed mainly for capturing light of blue wavelengths.
[0028] The lens array 101 forms the single lenses 101a, 101b, 101c, and 101d by integral molding, which allows the lens array to be produced inexpensively and in a small size.
[0029] A wavelength selection region is provided on the optical axis of each single lens. A wavelength selection region is a region that selectively transmits light in a given wavelength region. In the present embodiment, the wavelength selection regions are constituted by color filters. Specifically, 102a and 102b are color filters that mainly transmit light of green wavelengths, 102c is a color filter that mainly transmits light of red wavelengths, and 102d is a color filter that mainly transmits light of blue wavelengths. The color filter 102b, drawn with a broken line for convenience of illustration, is disposed between the single lens 101b and the image sensor 103.
[0030] Reference numeral 103 denotes an image sensor realized by a CMOS, a CCD, or the like. Here the image sensor 103 is a single image sensor, but it may be composed of a plurality of image sensors. The imaging unit 100 outputs the image data of the subject images captured through the single lenses 101a, 101b, 101c, and 101d, the color filters 102a, 102b, 102c, and 102d, and the image sensor 103 to the three-dimensional coordinate calculation unit 110.
[0031] The three-dimensional coordinate calculation unit 110 includes three-dimensional coordinate calculation means and three-dimensional coordinate color information calculation means. As described in detail later, the three-dimensional coordinate calculation means calculates the three-dimensional coordinates of the subject based on the image data from the imaging unit 100, and the three-dimensional coordinate color information calculation means calculates the color information at each three-dimensional coordinate of the subject obtained by the three-dimensional coordinate calculation means.
[0032] Reference numeral 111 denotes a viewpoint image output unit, which outputs viewpoint images of the subject seen from arbitrary predetermined viewpoints based on the three-dimensional coordinates of the subject calculated by the three-dimensional coordinate calculation unit 110. The viewpoint image output unit 111 includes arbitrary viewpoint image output means and can simultaneously output viewpoint images seen from a plurality of viewpoints, that is, a plurality of viewpoint images with mutually different viewpoints. Reference numeral 112 denotes a display, which displays a stereoscopic video from the plurality of viewpoint images with different viewpoints created by the viewpoint image output unit 111.
[0033] Next, the configuration of the imaging unit 100 of FIG. 1 and the setting of the coordinate systems are described with reference to FIGS. 1 to 3. FIG. 2 illustrates the principle of calculating the three-dimensional coordinates of a point P of the subject (a star) when the subject is photographed. The principal points A, B, C, and D of the single lenses 101a, 101b, 101c, and 101d lie on a common plane, and the imaging surface of the image sensor 103 is arranged parallel to that plane.
[0034] On the imaging surface of the image sensor 103 are formed an imaging region 103a referenced to the optical axis of the single lens 101a, an imaging region 103b referenced to the optical axis of the single lens 101b, an imaging region 103c referenced to the optical axis of the single lens 101c, and an imaging region 103d referenced to the optical axis of the single lens 101d.
[0035] Next, the coordinate systems of FIG. 2 are described. FIG. 3 shows the configuration of the image sensor 103 viewed from the single-lens side. For each of the imaging regions 103a, 103b, 103c, and 103d, a two-dimensional coordinate system is set with the origin (0, 0) at the lower left of the imaging region. As shown in FIG. 3, in the imaging region 103a the horizontal line positive to the right is set as the xa axis and the vertical line positive upward as the ya axis; in the same manner, the xb and yb axes are set in the imaging region 103b, the xc and yc axes in the imaging region 103c, and the xd and yd axes in the imaging region 103d.
[0036] In FIG. 3, 104a on the imaging region 103a is the optical axis of the single lens 101a, 104b on the imaging region 103b is the optical axis of the single lens 101b, 104c on the imaging region 103c is the optical axis of the single lens 101c, and 104d on the imaging region 103d is the optical axis of the single lens 101d. The coordinates of the optical axes 104a, 104b, 104c, and 104d in their respective imaging regions 103a, 103b, 103c, and 103d are configured to be identical.
[0037] A three-dimensional coordinate system with its origin (0, 0, 0) at the principal point A of the single lens 101a in FIG. 2 is set as follows. As shown in FIG. 2, the X axis is parallel to the xa axis of the imaging region 103a with opposite sign, the Y axis is parallel to the ya axis of the imaging region 103a with the same sign, and the Z axis is parallel to the optical axis of the single lens 101a, positive in the direction away from the image sensor 103. The three-dimensional coordinate calculation unit 110 of FIG. 1 calculates the coordinates of the subject in this three-dimensional coordinate system.
[0038] Next, the method by which the three-dimensional coordinate calculation means included in the three-dimensional coordinate calculation unit 110 of FIG. 1 calculates the three-dimensional coordinates of the subject is described in detail with reference to FIGS. 2 to 5. As shown in FIG. 3, when Zp is finite, the image of a point P (Xp, Yp, Zp) of the subject is formed at different coordinates pa, pb, pc, and pd in the respective imaging regions owing to parallax. The three-dimensional coordinates of the subject are calculated by comparing the subject images in the imaging regions on which light of the same color forms images, that is, here, the green-light imaging regions 103a and 103b.
[0039] The positions of the single lenses 101a and 101b differ only in their X coordinate in the three-dimensional coordinate system. Therefore, comparing the image coordinates pa (xpa, ypa) corresponding to the point P of the subject in the imaging region 103a with the image coordinates pb (xpb, ypb) corresponding to the same point P in the imaging region 103b, the values xpa and xpb differ by a shift amount S due to parallax, while the values ypa and ypb are equal.
[0040] That is, once an image at the coordinates pa (xpa, ypa) in the imaging region 103a is selected, the corresponding image coordinates pb (xpb, ypb) in the imaging region 103b are determined by finding xpb, and xpb is determined by finding the shift amount S. The shift amount S can be obtained from the image information output by the imaging regions 103a and 103b by a conventional technique such as area-based matching.
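As an illustrative aid that is not part of the original disclosure, the following minimal Python sketch shows one way the shift amount S could be found by area-based matching as described above. The array shapes, the window size, the search range, and the search direction are all assumptions for illustration.

```python
import numpy as np

def find_shift_S(img_a, img_b, xpa, ypa, half_win=4, max_shift=32):
    """Estimate the shift amount S of the block around (xpa, ypa) in the
    green image of region 103a relative to the green image of region 103b,
    by minimizing the sum of absolute differences (SAD). Because the single
    lenses 101a and 101b differ only in X, the search is one-dimensional.
    Border handling is omitted; the search direction depends on the actual
    lens layout and may need to be reversed."""
    block_a = img_a[ypa - half_win:ypa + half_win + 1,
                    xpa - half_win:xpa + half_win + 1].astype(np.float64)
    best_s, best_sad = 0, np.inf
    for s in range(max_shift + 1):
        xb = xpa - s  # candidate position of the same block in region 103b
        if xb - half_win < 0:
            break
        block_b = img_b[ypa - half_win:ypa + half_win + 1,
                        xb - half_win:xb + half_win + 1].astype(np.float64)
        sad = np.abs(block_a - block_b).sum()
        if sad < best_sad:
            best_sad, best_s = sad, s
    return best_s
```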
[0041] Further, once the shift amount S is obtained, Zp of the point P (Xp, Yp, Zp) of the subject can be obtained from Equation (2) below, using the relationship between subject distance and parallax of Equation (1).
[0042] Equation (2): Zp = f · tab / S
Here S is the parallax, that is, the shift amount of the subject image; tab is the distance between the optical axes of the single lenses 101a and 101b; and f is the imaging distance, that is, the distance between the principal point A of the single lens 101a and the optical axis position 104a in the imaging region 103a in FIG. 2.
[0043] FIG. 4 shows the subject and the imaging region 103a of the imaging unit 100 viewed from the positive Y-axis direction of the three-dimensional coordinates in FIG. 2. From the geometry shown in FIG. 4, the relationship of Equation (3) below can be derived.
[0044] Equation (3): Xp = (xpa - x0a) · |Zp| / f
Substituting the expression for Zp from Equation (2) into Equation (3) gives Equation (4) below.
[0045] Equation (4): Xp = (xpa - x0a) · tab / S
The distance tab between the optical axes and the shift amount S are known; x0a is known, and xpa is the x coordinate of the selected position, so xpa - x0a can also be calculated. Therefore, the value of Xp can be obtained.
[0046] FIG. 5 shows the subject and the imaging region 103a of the imaging unit 100 viewed from the positive X-axis direction of the three-dimensional coordinates in FIG. 2. From the geometry shown in FIG. 5, the relationship of Equation (5) below can be derived.
[0047] Equation (5): Yp = -(ypa - y0a) · |Zp| / f
Substituting the expression for Zp from Equation (2) into Equation (5) gives Equation (6) below.
[0048] Equation (6): Yp = -(ypa - y0a) · tab / S
As noted above, the distance tab between the optical axes and the shift amount S are known; y0a is known, and ypa is the y coordinate of the selected position, so ypa - y0a can also be calculated. Therefore, the value of Yp can be obtained.
[0049] In this way, the three-dimensional coordinates of the point P (Xp, Yp, Zp) of the subject can be calculated. By the same calculation, the three-dimensional coordinates of the entire subject can be obtained, which gives the three-dimensional shape of the subject.
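As an illustrative aid that is not part of the original disclosure, the following short Python sketch transcribes Equations (2), (4), and (6): given the image coordinates of a point in the imaging region 103a, the optical axis position (x0a, y0a), the shift amount S, the optical-axis spacing tab, and the imaging distance f, all assumed to be expressed in consistent units, it returns the three-dimensional coordinates of the point.

```python
def triangulate_point(xpa, ypa, x0a, y0a, S, tab, f):
    """Three-dimensional coordinates of a subject point from its image
    coordinates (xpa, ypa) in region 103a and the parallax shift S."""
    Zp = f * tab / S              # Equation (2)
    Xp = (xpa - x0a) * tab / S    # Equation (4)
    Yp = -(ypa - y0a) * tab / S   # Equation (6)
    return Xp, Yp, Zp
```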
[0050] Next, the three-dimensional coordinate color information calculation means is described. As noted above, this means calculates the color information at each of the three-dimensional coordinates of the subject obtained by the three-dimensional coordinate calculation means. Since the three-dimensional coordinates, that is, the three-dimensional shape, of the subject are obtained from the green-light subject images as described above, the coordinates of the red and blue subject images formed in the imaging regions 103c and 103d can also be calculated from them, and the color of the subject is synthesized from the image information at these coordinates.
[0051] FIG. 6 shows the subject and the imaging region 103c of the imaging unit 100 viewed from the positive Y-axis direction of the three-dimensional coordinates in FIG. 2. From the geometry shown in FIG. 6, the relationship of Equation (7) below can be derived, and the value of xpc can be obtained.
[0052] Equation (7): xpc = x0c + Xp · f / |Zp|
FIG. 7 shows the subject and the imaging region 103c of the imaging unit 100 viewed from the positive X-axis direction of the three-dimensional coordinates in FIG. 2. From the geometry shown in FIG. 7, the relationship of Equation (8) below can be derived, and the value of ypc can be obtained.
[0053] Equation (8): ypc = y0c - (Yp + tac) · f / |Zp|
Here tac is the Y-axis component of the distance between the optical axes of the single lenses 101a and 101c.
[0054] From the above, in the imaging region 103c, the red component of the point P of the subject is imaged at pc (xpc, ypc) obtained from Equations (7) and (8).
[0055] FIG. 8 shows the subject and the imaging region 103d of the imaging unit 100 viewed from the positive Y-axis direction of the three-dimensional coordinates in FIG. 2. From the geometry shown in FIG. 8, the relationship of Equation (9) below can be derived, giving xpd of the coordinates pd (xpd, ypd) at which the blue light emitted from P arrives in the imaging region 103d.
[0056] Equation (9): xpd = x0d + (Xp - tadx) · f / |Zp|
Here tadx is the X-axis component of the distance between the optical axes of the single lenses 101a and 101d.
[0057] FIG. 9 shows the subject and the imaging region 103d of the imaging unit 100 viewed from the positive X-axis direction of the three-dimensional coordinates in FIG. 2. From the geometry shown in FIG. 9, the relationship of Equation (10) below can be derived, giving ypd of the coordinates pd (xpd, ypd) at which the blue light emitted from P arrives in the imaging region 103d.
[0058] Equation (10): ypd = y0d - (Yp + tady) · f / |Zp|
Here tady is the Y-axis component of the distance between the optical axes of the single lenses 101a and 101d.
[0059] From the above, in the imaging region 103d, the blue component of the point P of the subject is imaged at pd (xpd, ypd) obtained from Equations (9) and (10).
[0060] In this way, the positions at which the point P of the subject, represented by the three-dimensional coordinates (Xp, Yp, Zp), forms images in the imaging regions 103c and 103d can be calculated. By the same calculation, the imaging positions in the imaging regions 103c and 103d can be calculated for the entire subject.
[0061] Accordingly, since the image coordinates in the green, red, and blue imaging regions corresponding to each point of the subject can be obtained, the color of each point of the subject can be obtained by combining the subject image information at these coordinates. Performing the same calculation over the entire subject yields the color of the entire subject. That is, in addition to the three-dimensional shape of the entire subject obtained by the calculation of the three-dimensional coordinate calculation means, the color of the entire subject can be obtained by the calculation of the three-dimensional coordinate color information calculation means.
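As an illustrative aid that is not part of the original disclosure, the following Python sketch combines Equations (7) to (10) into the color lookup just described: the point P is projected into the red imaging region 103c and the blue imaging region 103d, and the sampled values are combined with the green value at pa. The image arrays and the rounding to the nearest pixel are assumptions for illustration; fractional coordinates would instead be interpolated, as noted in paragraph [0064] below.

```python
def color_at_point(P, pa, img_g, img_r, img_b,
                   x0c, y0c, x0d, y0d, tac, tadx, tady, f):
    """Sample the (R, G, B) color of the subject point P from the three
    single-color imaging regions. pa = (xpa, ypa) is P's image in 103a."""
    Xp, Yp, Zp = P
    xpc = x0c + Xp * f / abs(Zp)           # Equation (7)
    ypc = y0c - (Yp + tac) * f / abs(Zp)   # Equation (8)
    xpd = x0d + (Xp - tadx) * f / abs(Zp)  # Equation (9)
    ypd = y0d - (Yp + tady) * f / abs(Zp)  # Equation (10)
    g = img_g[pa[1], pa[0]]                      # green from region 103a
    r = img_r[int(round(ypc)), int(round(xpc))]  # red from region 103c
    b = img_b[int(round(ypd)), int(round(xpd))]  # blue from region 103d
    return r, g, b
```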
[0062] As described above, according to the present embodiment, the three-dimensional shape and color of a subject can be obtained with a thin device. If the three-dimensional shape and color were acquired with a configuration using a plurality of ordinary imaging systems, each of which stacks a plurality of lenses, the color would be acquired from a single such imaging system, and the apparatus would become large. In the present embodiment, the colors corresponding to the three-dimensional shape are extracted from separate, non-overlapping red, blue, and green optical systems, which is advantageous for thinness.
[0063] In FIG. 1, the information obtained by the three-dimensional coordinate calculation unit 110 may be output to the viewpoint image output unit 111 of FIG. 1 to obtain a stereoscopic video as described in Embodiment 2 below, or it may be output directly to the display 112.
[0064] When no pixel corresponding to the values calculated by Equations (7) to (10) exists in the imaging region, the subject image can be calculated by interpolation or extrapolation from neighboring pixels.
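As an illustrative aid that is not part of the original disclosure, a minimal bilinear interpolation sketch in Python for sampling at such fractional coordinates (bounds checking omitted for brevity):

```python
import math

def bilinear_sample(img, x, y):
    """Interpolate img at fractional coordinates (x, y) from the four
    neighbouring pixels."""
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[y0, x0] +
            dx * (1 - dy) * img[y0, x0 + 1] +
            (1 - dx) * dy * img[y0 + 1, x0] +
            dx * dy * img[y0 + 1, x0 + 1])
```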
[0065] In the present embodiment, optical systems corresponding to the three colors red, blue, and green are used for color acquisition; however, complementary-color systems such as cyan, magenta, and yellow may be added, or the colors may be reproduced by other methods. When complementary-color systems are added for color acquisition, optical systems (lenses) and the corresponding wavelength selection regions (color filters) and imaging regions are added according to the number of added colors.
[0066] (Embodiment 2)
Embodiment 2 of the present invention will now be described. This embodiment is based on Embodiment 1 above. The three-dimensional coordinate calculation unit 110 of FIG. 1 calculates the three-dimensional shape, that is, the three-dimensional coordinates, of the subject and the color of the subject through the processing described in Embodiment 1, and outputs them to the viewpoint image output unit 111.
[0067] Next, the arbitrary viewpoint image output means included in the viewpoint image output unit 111 of FIG. 1 is described. The arbitrary viewpoint image output means creates and outputs an image of the subject from an arbitrary viewpoint based on the three-dimensional coordinates of the subject obtained by the three-dimensional coordinate calculation means and the color information at each of those coordinates obtained by the three-dimensional coordinate color information calculation means.
[0068] FIG. 10 illustrates the principle of the viewpoint image calculation method according to the present embodiment. As shown in FIG. 10, a virtual lens is set with its principal point at an arbitrary position R1 (vx1, vy1, vz1) in the three-dimensional coordinate system, together with a virtual imaging region 103r1 placed on a plane perpendicular to its optical axis at the imaging distance fr. Here the optical axis of the virtual lens is set parallel to the Z axis of the three-dimensional coordinates. For the virtual imaging region 103r1, viewed from the side opposite the virtual lens, a two-dimensional coordinate system is set with the origin (0, 0) at the lower right of the imaging region, the horizontal line positive to the left as the xr1 axis, and the vertical line positive upward as the yr1 axis.
[0069] The coordinates 104r1 (x0r1, y0r1) are a point in the two-dimensional coordinates on the virtual imaging region 103r1, at the intersection of the virtual imaging region 103r1 with the optical axis of the virtual lens.
[0070] We now find the coordinates pr1 (xpr1, ypr1) on the virtual imaging region 103r1 at which the point P (Xp, Yp, Zp) of the subject is imaged through the virtual lens. FIG. 11 shows the subject, the virtual lens, and the virtual imaging region 103r1 viewed from the positive Y-axis direction of the three-dimensional coordinates in FIG. 10. From the geometry shown in FIG. 11, the relationship of Equation (11) below can be derived, and xpr1 can be obtained.
[0071] Equation (11): xpr1 = x0r1 + (Xp - vx1) · fr / |Zp - vz1|
FIG. 12 shows the subject, the virtual lens, and the virtual imaging region 103r1 viewed from the positive X-axis direction of the three-dimensional coordinates in FIG. 10. From the geometry shown in FIG. 12, the relationship of Equation (12) below can be derived, and ypr1 can be obtained.
[0072] Equation (12): ypr1 = y0r1 - (Yp - vy1) · fr / |Zp - vz1|
In this way, the imaging point pr1 (xpr1, ypr1) on the virtual imaging region 103r1 of the point P (Xp, Yp, Zp) of the subject can be calculated. By the same calculation, the imaging points on the virtual imaging region 103r1 of the entire subject can also be obtained. Hereinafter, the image formed on the virtual imaging region 103r1 is called the viewpoint image r1. Similarly, the image formed on the virtual imaging region 103r2 described later is called the viewpoint image r2, the image formed on the virtual imaging region 103r3 is called the viewpoint image r3, and the image formed on the virtual imaging region 103r4 is called the viewpoint image r4.
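As an illustrative aid that is not part of the original disclosure, the following Python sketch transcribes Equations (11) and (12). Because Equations (13) to (18) below only substitute a different principal point and axis origin, the same function also covers the viewpoint images r2, r3, and r4.

```python
def project_to_virtual_view(P, vx, vy, vz, x0r, y0r, fr):
    """Image coordinates of the subject point P on the virtual imaging
    region of a virtual lens with principal point (vx, vy, vz), optical
    axis intersection (x0r, y0r), and imaging distance fr."""
    Xp, Yp, Zp = P
    xpr = x0r + (Xp - vx) * fr / abs(Zp - vz)  # Equations (11)/(13)/(15)/(17)
    ypr = y0r - (Yp - vy) * fr / abs(Zp - vz)  # Equations (12)/(14)/(16)/(18)
    return xpr, ypr
```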
[0073] Next, the viewpoint image r2 is obtained for a virtual lens with its principal point at R2 (vx2, vy1, vz1), that is, R1 translated along the X axis of the three-dimensional coordinates. As with the viewpoint image r1, its virtual imaging region 103r2 is placed on a plane perpendicular to its optical axis at the imaging distance fr. For the virtual imaging region 103r2, viewed from the side opposite the virtual lens, a two-dimensional coordinate system is set with the origin (0, 0) at the lower right of the imaging region, the horizontal line positive to the left as the xr2 axis, and the vertical line positive upward as the yr2 axis. The coordinates 104r2 (x0r2, y0r2) are a point in the two-dimensional coordinates on the virtual imaging region 103r2, at its intersection with the optical axis of the virtual lens.
[0074] By the same calculation as for the viewpoint image r1, the imaging point pr2 (xpr2, ypr2) on the virtual imaging region 103r2 of the point P (Xp, Yp, Zp) of the subject is obtained from Equations (13) and (14). That is, Equation (13) is Equation (11) with x0r1 replaced by x0r2 and vx1 replaced by vx2, and Equation (14) is Equation (12) with y0r1 replaced by y0r2.
[0075] Equation (13): xpr2 = x0r2 + (Xp - vx2) · fr / |Zp - vz1|
Equation (14): ypr2 = y0r2 - (Yp - vy1) · fr / |Zp - vz1|
In this way, the imaging point pr2 (xpr2, ypr2) on the virtual imaging region 103r2 of the point P (Xp, Yp, Zp) of the subject can be calculated. By the same calculation, the imaging points on the virtual imaging region 103r2 of the entire subject can also be obtained.
[0076] Next, the viewpoint image r3 is obtained for a virtual lens with its principal point at R3 (vx3, vy1, vz1), that is, R2 translated along the X axis of the three-dimensional coordinates. As with the viewpoint image r2, its virtual imaging region 103r3 is placed on a plane perpendicular to its optical axis at the imaging distance fr. For the virtual imaging region 103r3, viewed from the side opposite the virtual lens, a two-dimensional coordinate system is set with the origin (0, 0) at the lower right of the imaging region, the horizontal line positive to the left as the xr3 axis, and the vertical line positive upward as the yr3 axis. The coordinates 104r3 (x0r3, y0r3) are a point in the two-dimensional coordinates on the virtual imaging region 103r3, at its intersection with the optical axis of the virtual lens. By the same calculation as for the viewpoint image r2, the imaging point pr3 (xpr3, ypr3) on the virtual imaging region 103r3 of the point P (Xp, Yp, Zp) of the subject is obtained from Equations (15) and (16). That is, Equation (15) is Equation (11) with x0r1 replaced by x0r3 and vx1 replaced by vx3, and Equation (16) is Equation (12) with y0r1 replaced by y0r3.
[0077] Equation (15): xpr3 = x0r3 + (Xp - vx3) · fr / |Zp - vz1|
Equation (16): ypr3 = y0r3 - (Yp - vy1) · fr / |Zp - vz1|
In this way, the imaging point pr3 (xpr3, ypr3) on the virtual imaging region 103r3 of the point P (Xp, Yp, Zp) of the subject can be calculated. The imaging points on the virtual imaging region 103r3 of the entire subject can also be obtained by the same calculation.
[0078] Next, the viewpoint image r4 is obtained for a virtual lens with its principal point at R4 (vx4, vy1, vz1), that is, R3 translated along the X axis of the three-dimensional coordinates. As with the viewpoint image r3, its virtual imaging region 103r4 is placed on a plane perpendicular to its optical axis at the imaging distance fr. For the virtual imaging region 103r4, viewed from the side opposite the virtual lens, a two-dimensional coordinate system is set with the origin (0, 0) at the lower right of the imaging region, the horizontal line positive to the left as the xr4 axis, and the vertical line positive upward as the yr4 axis. The coordinates 104r4 (x0r4, y0r4) are a point in the two-dimensional coordinates on the virtual imaging region 103r4, at its intersection with the optical axis of the virtual lens.
[0079] By the same calculation as for the viewpoint image r3, the imaging point pr4 (xpr4, ypr4) on the virtual imaging region 103r4 of the point P (Xp, Yp, Zp) of the subject is obtained from Equations (17) and (18). That is, Equation (17) is Equation (11) with x0r1 replaced by x0r4 and vx1 replaced by vx4, and Equation (18) is Equation (12) with y0r1 replaced by y0r4.
[0080] Equation (17): xpr4 = x0r4 + (Xp - vx4) · fr / |Zp - vz1|
Equation (18): ypr4 = y0r4 - (Yp - vy1) · fr / |Zp - vz1|
In this way, the imaging point pr4 (xpr4, ypr4) on the virtual imaging region 103r4 of the point P (Xp, Yp, Zp) of the subject can be calculated.
[0081] The imaging points on the virtual imaging region 103r4 of the entire subject can also be obtained by the same calculation.
[0082] As described above, the imaging point in each of the virtual imaging regions 103r1 to 103r4 can be obtained for every point of the subject. As noted above, the color information corresponding to each point of the subject has already been obtained, so the color information for each point of each viewpoint image can reuse the red, green, and blue color information of the subject obtained by the three-dimensional coordinate color information calculation means of the three-dimensional coordinate calculation unit 110 of FIG. 1. In this way, the viewpoint images r1, r2, r3, and r4 at arbitrary viewpoints are obtained. When a point corresponding to the values calculated by Equations (11) to (18) does not exist at a coordinate of the viewpoint image, for example when the calculated values are fractional, the point can be apportioned to neighboring coordinates by interpolation or extrapolation.
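As an illustrative aid that is not part of the original disclosure, the following Python sketch uses project_to_virtual_view from the sketch above to render the four viewpoint images r1 to r4 by sliding the virtual principal point along the X axis. The list of colored points, the image size, the imaging distance, and the viewpoint spacing are all illustrative assumptions, and fractional image coordinates are simply rounded here rather than apportioned by interpolation.

```python
import numpy as np

def render_viewpoints(points, vxs=(-30.0, -10.0, 10.0, 30.0),
                      vy=0.0, vz=0.0, x0r=320.0, y0r=240.0, fr=500.0,
                      width=640, height=480):
    """points is a list of (P, (r, g, b)) pairs, where P = (Xp, Yp, Zp) and
    the color comes from the calculation of Embodiment 1. Returns the four
    viewpoint images r1 to r4 as uint8 arrays, one per principal point vx."""
    views = [np.zeros((height, width, 3), dtype=np.uint8) for _ in vxs]
    for P, rgb in points:
        for view, vx in zip(views, vxs):
            xpr, ypr = project_to_virtual_view(P, vx, vy, vz, x0r, y0r, fr)
            xi, yi = int(round(xpr)), int(round(ypr))
            if 0 <= xi < width and 0 <= yi < height:
                view[yi, xi] = rgb
    return views
```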
[0083] That is, according to the present embodiment, based on the three-dimensional coordinates and color of the subject obtained by the three-dimensional coordinate calculation unit 110, the viewpoint image output unit 111 can obtain viewpoint images of the subject from arbitrary viewpoints. The calculated viewpoint images are converted into an image format for a stereoscopic display, and the image data are output to the display 112 of FIG. 1.
[0084] Needless to say, even when the optical axis of the virtual lens is not parallel to the Z axis, the imaging points on the set virtual imaging region can be obtained by geometric calculation.
[0085] The display 112 of FIG. 1 outputs stereoscopic video based on the image data output from the viewpoint image output unit 111. Here, a typical image display method of a stereoscopic display based on four viewpoint images is described. FIG. 13 shows a stereoscopic display using a lenticular lens; FIG. 13(a) is a perspective view, and FIG. 13(b) shows the display of FIG. 13(a) viewed from above.
[0086] The configuration shown in FIG. 13 places a lenticular lens 121 on the front surface of a two-dimensional display 120 of a common type such as LCD, plasma, SED, FED, or projection. In FIG. 13(b), to explain the image display method of a stereoscopic display based on four viewpoint images, the width Wr of one semicylindrical lens element of the lenticular lens 121 is shown as approximately equal to the width of four pixels of the two-dimensional display 120 (see FIG. 14). The width Wr of one semicylindrical lens element of the lenticular lens 121 is set according to the number of viewpoints of the stereoscopic image.
[0087] FIG. 14 shows the directions of the principal rays of the light emitted by the four sets of pixels of the two-dimensional display corresponding to one lens element of the lenticular lens in FIG. 13(b). Light from the pixel 125 travels in the direction of the principal ray 129, light from the pixel 126 in the direction of the principal ray 130, light from the pixel 127 in the direction of the principal ray 131, and light from the pixel 128 in the direction of the principal ray 132. Since the light from the four sets of pixels thus travels in different directions, the right and left eyes of a viewer, which are typically about 6.5 cm apart, each receive a different image from a shifted viewpoint, so that a stereoscopic image can be displayed.
[0088] Moreover, since four viewpoint images r1, r2, r3, and r4 are used here, even when the viewer's eyes (both right and left) move left or right in FIG. 14, the right and left eyes still receive different images, from viewpoints different from those before the movement. A stereoscopic image can therefore be displayed over a wider viewing angle than when the stereoscopic video is based on two or three viewpoint images.
[0089] In general, the more viewpoint images a stereoscopic display is based on, the wider the viewing angle over which the stereoscopic image can be seen and the more natural the stereoscopic image. Therefore, the more viewpoint images are created from the three-dimensional coordinates of the subject obtained by the three-dimensional coordinate calculation unit 110, the wider the viewing angle and the more natural the displayed stereoscopic image. For example, although four viewpoint images, equal in number to the optical systems, were created in the example above, five or more viewpoint images, more than the number of optical systems, may be created.
[0090] FIG. 15 shows two different viewpoint images. FIG. 15(a) shows the viewpoint image r1 of FIG. 10, and FIG. 15(b) shows the viewpoint image r4 of FIG. 10. Since the subject image seen from the lens side is inverted vertically in the imaging region, the yr1 and yr4 axes in FIGS. 15(a) and 15(b) are flipped vertically relative to FIG. 10. As shown in FIG. 15(a), the subject appears at the lower right of the image in the viewpoint image r1, and as shown in FIG. 15(b), at the lower left of the image in the viewpoint image r4. Needless to say, in the viewpoint images r2 and r3 the subject appears between the positions of the subject in FIGS. 15(a) and 15(b).
[0091] A stereoscopic display based on four viewpoint images can be realized by assigning, at each coordinate, the image of the viewpoint image r1 to the pixel 128, the viewpoint image r2 to the pixel 127, the viewpoint image r3 to the pixel 126, and the viewpoint image r4 to the pixel 125 in FIG. 14. That is, the display 112 of FIG. 1 receives the image data arranged in this way from the viewpoint image output unit 111 and displays a natural stereoscopic video based on them.
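As an illustrative aid that is not part of the original disclosure, the following Python sketch expresses this pixel assignment: each display column shows the same-coordinate column of the viewpoint image assigned to its position under the lens element, with pixels 125 to 128 of FIG. 14 assumed to be four consecutive columns from left to right.

```python
import numpy as np

def interleave_four_views(r1, r2, r3, r4):
    """Build the panel image for the four-view lenticular display: column j
    takes column j of the viewpoint image assigned to position j % 4 under
    each lens element (pixel 125 -> r4, 126 -> r3, 127 -> r2, 128 -> r1)."""
    views = (r4, r3, r2, r1)
    out = np.empty_like(r1)
    for j in range(r1.shape[1]):
        out[:, j] = views[j % 4][:, j]
    return out
```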
[0092] As described above, according to the present invention, it is possible to realize a thin three-dimensional imaging device that can obtain the three-dimensional shape of a subject and output image data for natural stereoscopic video.
[0093] The present embodiment may also be configured to include coordinate conversion means. Specifically, a more realistic stereoscopic image can be obtained by applying a coordinate transformation such as an affine transformation to the three-dimensional coordinates of the subject calculated by the three-dimensional coordinate calculation unit 110 of FIG. 1 before calculating the viewpoint images. For example, as shown in FIG. 16, translating part or all of the subject by a predetermined distance mz in the negative Z-axis direction of the three-dimensional coordinates to P' reduces Zp, so that the parallax of the subject between the viewpoint images can be increased (see Equation (1)).
[0094] That is, in the example of FIG. 15, the subject image of the viewpoint image r1 of FIG. 15(a) moves further to the right (the positive xr1 direction), and the subject image of the viewpoint image r4 of FIG. 15(b) moves further to the left (the negative xr4 direction). As a result, the stereoscopic image appears to protrude from the front of the stereoscopic display, and a more realistic stereoscopic image can be achieved.
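As an illustrative aid that is not part of the original disclosure, a minimal Python sketch of the coordinate conversion of FIG. 16, assuming the subject points are held in an N x 3 NumPy array; translating them by mz in the negative Z direction before the viewpoint projection reduces Zp and thus enlarges the parallax between views.

```python
def translate_along_z(points_xyz, mz):
    """Return a copy of the subject points translated from P to P' of
    FIG. 16, i.e. by mz in the negative Z-axis direction."""
    shifted = points_xyz.copy()
    shifted[:, 2] -= mz
    return shifted
```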
[0095] The present embodiment may also be configured to include shape conversion means. Specifically, enlarging part of the three-dimensional shape of the subject calculated by the three-dimensional coordinate calculation unit 110 of FIG. 1, as shown in FIG. 17, reduces the occlusion occurring in each viewpoint image, so that a more natural stereoscopic image can be obtained.
[0096] The coordinate conversion means and the shape conversion means may also be used in a configuration that does not include the viewpoint image output unit 111 of FIG. 1.
[0097] When the display 112 of FIG. 1 is a parallax-barrier stereoscopic display, a slit array realized by liquid crystal or the like is arranged instead of the lenticular lens 121 of FIG. 13(a).
[0098] Needless to say, a depth-sampling stereoscopic display or a holographic stereoscopic display can also output stereoscopic video based on the three-dimensional coordinates of the subject created by the three-dimensional coordinate calculation unit 110 of FIG. 1.
[0099] Needless to say, glasses-type stereoscopic displays such as stereo-pair, polarization, and time-division displays can also output stereoscopic video based on the three-dimensional coordinates of the subject created by the three-dimensional coordinate calculation unit 110 of FIG. 1.
[0100] Furthermore, since the viewpoint image output unit 111 of FIG. 1 can output a subject image from an arbitrary viewpoint, it goes without saying that the display 112 may be a common two-dimensional display and output free-viewpoint images such as those used for in-vehicle surroundings monitoring.
[0101] In Embodiments 1 and 2, it goes without saying that the lenses 101a, 101b, 101c, and 101d of FIG. 1 may be molded individually rather than integrally. The color arrangement of the lenses 101a, 101b, 101c, and 101d is also not limited to that of FIG. 1; the green lenses may be placed on one diagonal and the red and blue lenses on the other diagonal as shown in FIG. 18(a), the four lenses may be arranged on a single line as shown in FIG. 18(b), or any other arrangement may be used.
[0102] In that case, the positions of the principal points of the lenses in the geometric calculation of FIG. 2 of the present embodiment should be set appropriately, and the color filters arranged appropriately according to the colors. Even when there are three or more green lenses, as shown in FIGS. 19(a) and 19(b), the three-dimensional coordinates can be obtained using at least two of them, and the selection of two lenses out of the plurality may be changed according to the subject distance and the subject position. In this case too, the lenses may be arranged arbitrarily. Furthermore, although the present embodiment calculates the three-dimensional shape of the subject using a plurality of green lenses, it goes without saying that the three-dimensional shape may also be calculated using a plurality of lenses of a color other than green.
[0103] Even when the imaging regions 103a, 103b, 103c, and 103d of FIG. 1 are not on the same plane, they can be handled in the same manner as in the present embodiment by coordinate-transforming and mapping the subject images onto a desired common plane based on their positional relationship.
[0104] (Embodiment 3)
Embodiment 3 relates to an integrated circuit. The three-dimensional coordinate calculation unit 110 and the viewpoint image output unit 111 of FIG. 1 are typically realized as an LSI, which is an integrated circuit. They may be implemented as individual chips, or as a single chip including some or all of them. Although the term LSI is used here, the circuit may also be called an IC, a system LSI, a super LSI, or an ultra LSI depending on the degree of integration.
[0105] The method of circuit integration is not limited to LSI; it may be realized with a dedicated circuit or a general-purpose processor. An FPGA (Field Programmable Gate Array) that can be programmed after LSI manufacture, or a reconfigurable processor in which the connections and settings of the circuit cells inside the LSI can be reconfigured, may also be used.
[0106] Furthermore, if integrated-circuit technology that replaces LSI emerges through advances in semiconductor technology or another derived technology, the functional blocks may naturally be integrated using that technology. Application of biotechnology or the like is one possibility.
Industrial Applicability
[0107] As described above, the imaging apparatus of the present invention is thin, can determine the three-dimensional shape of a subject, and can output image data for free-viewpoint video and stereoscopic video of the subject, and is therefore useful as a three-dimensional imaging apparatus. The integrated circuit of the present invention is likewise useful as an integrated circuit for a three-dimensional imaging apparatus.

Claims

[1] An imaging apparatus comprising:
a plurality of optical systems;
a plurality of wavelength selection regions each of which selectively transmits light in a specific wavelength region out of the light from a subject; and
a plurality of imaging regions that output image information according to the received light,
wherein the wavelength selection regions and the imaging regions are arranged in one-to-one correspondence on the respective optical axes of the plurality of optical systems,
the plurality of wavelength selection regions include first wavelength selection regions that selectively transmit light in an identical wavelength region, and second wavelength selection regions that selectively transmit light in wavelength regions different from that of the first wavelength selection regions,
the first wavelength selection regions include two or more wavelength selection regions, and the second wavelength selection regions include two or more wavelength selection regions that selectively transmit light in mutually different wavelength regions,
the imaging apparatus further comprising:
three-dimensional coordinate calculation means for calculating three-dimensional coordinates of the subject based on the image information output by at least two of the imaging regions corresponding to the first wavelength selection regions, and on the positional relationship between the optical systems and the imaging regions corresponding to the first wavelength selection regions; and
three-dimensional coordinate color information calculation means for calculating color information at each coordinate of the three-dimensional coordinates of the subject based on the image information output by the portions, corresponding to each coordinate of the three-dimensional coordinates of the subject, of at least three imaging regions that receive light in mutually different wavelength regions.
[2] The imaging apparatus according to claim 1, wherein the portion, corresponding to each coordinate of the three-dimensional coordinates of the subject, of each imaging region corresponding to the second wavelength selection regions is calculated based on the calculated three-dimensional coordinates of the subject and on the positional relationship between the optical systems and the imaging regions corresponding to the second wavelength selection regions.
[3] The imaging apparatus according to claim 1, further comprising coordinate conversion means for coordinate-transforming part or all of the three-dimensional coordinates of the subject.
[4] The imaging apparatus according to claim 1, further comprising shape conversion means for enlarging part or all of a three-dimensional shape based on the three-dimensional coordinates of the subject.
[5] The imaging apparatus according to claim 1, further comprising arbitrary viewpoint image output means for outputting an image of the subject from an arbitrary viewpoint based on the three-dimensional coordinates of the subject and the color information at each coordinate.
[6] The imaging apparatus according to claim 5, further comprising arbitrary viewpoint creation means for creating, by interpolation, a subject image from an arbitrary viewpoint based on the three-dimensional coordinates of the subject, wherein the arbitrary viewpoint image output means outputs an image from an arbitrary viewpoint based on the subject image created by the interpolation.
[7] The imaging apparatus according to claim 5, wherein the arbitrary viewpoint image output means outputs images from arbitrary viewpoints for stereoscopic video output.
[8] The imaging apparatus according to claim 5, wherein the arbitrary viewpoint image output means outputs images from more viewpoints than the number of the optical systems.
[9] An integrated circuit that performs calculations based on image information output by imaging regions that receive light from a subject, wherein
the imaging regions include first imaging regions that receive light of a first wavelength, and second imaging regions that receive light in wavelength regions different from the light of the first wavelength, and
the first imaging regions include two or more imaging regions, and the second imaging regions include two or more imaging regions that selectively receive light in mutually different wavelength regions,
the integrated circuit comprising:
three-dimensional coordinate calculation means for calculating three-dimensional coordinates of the subject based on the image information output by at least two of the first imaging regions; and
three-dimensional coordinate color information calculation means for calculating color information at each coordinate of the three-dimensional coordinates of the subject based on the image information output by the portions, corresponding to each coordinate of the three-dimensional coordinates of the subject, of at least three imaging regions that receive light in mutually different wavelength regions.
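For orientation, the following idealized Python sketch walks through the processing chain recited in claims 1 and 2: triangulating a subject point from two same-wavelength (e.g. green) imaging regions, then reprojecting the resulting three-dimensional coordinate into a differently colored region to locate its color sample. Pinhole optics, a shared image plane, and a rectified horizontal baseline are simplifying assumptions, and all names are hypothetical rather than taken from the patent.

    import numpy as np

    def triangulate(p1, p2, baseline, f):
        # p1, p2: (x, y) image coordinates [mm] of the same subject point in the
        # two same-wavelength imaging regions (rectified, horizontal baseline)
        # baseline: distance between the two lens principal points [mm]
        # f: common focal length [mm]
        disparity = p1[0] - p2[0]
        z = f * baseline / disparity           # depth from disparity
        return np.array([p1[0] * z / f,        # back-project through the first lens
                         p1[1] * z / f,
                         z])

    def reproject(point_xyz, lens_center, f):
        # Claim 2: project the computed 3-D point into the imaging region behind
        # another lens (e.g. red or blue) to find where its color sample falls.
        rel = point_xyz - lens_center
        return f * rel[:2] / rel[2]

Sampling the second-wavelength regions at the reprojected positions, together with the samples from the two triangulating regions, yields the color information at each three-dimensional coordinate in the manner of claim 1's color information calculation means.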
PCT/JP2007/053945 2006-03-03 2007-03-01 Imaging device and integrated circuit WO2007100057A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2006-058083 2006-03-03
JP2006058083 2006-03-03

Publications (1)

Publication Number Publication Date
WO2007100057A1 true WO2007100057A1 (en) 2007-09-07

Family

ID=38459156

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2007/053945 WO2007100057A1 (en) 2006-03-03 2007-03-01 Imaging device and integrated circuit

Country Status (1)

Country Link
WO (1) WO2007100057A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11355807A (en) * 1998-06-11 1999-12-24 Asahi Optical Co Ltd Method and device for three-dimensional image display by stereoscopic image photographing
JP2003143459A (en) * 2001-11-02 2003-05-16 Canon Inc Compound-eye image pickup system and device provided therewith
JP2005303694A (en) * 2004-04-13 2005-10-27 Konica Minolta Holdings Inc Compound eye imaging device

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011510315A (en) * 2008-01-23 2011-03-31 イジナ Spatial colorimetric measuring device and spatial colorimetric measuring method for three-dimensional objects
US8405727B2 (en) 2008-05-01 2013-03-26 Apple Inc. Apparatus and method for calibrating image capture devices
US8538084B2 (en) 2008-09-08 2013-09-17 Apple Inc. Method and apparatus for depth sensing keystoning
US8761596B2 (en) 2008-09-26 2014-06-24 Apple Inc. Dichroic aperture for electronic imaging device
US8610726B2 (en) 2008-09-26 2013-12-17 Apple Inc. Computer systems and methods with projected display
US8527908B2 (en) 2008-09-26 2013-09-03 Apple Inc. Computer user interface system and methods
US8502926B2 (en) 2009-09-30 2013-08-06 Apple Inc. Display system having coherent and incoherent light sources
US8619128B2 (en) 2009-09-30 2013-12-31 Apple Inc. Systems and methods for an imaging system using multiple image sensors
US8687070B2 (en) 2009-12-22 2014-04-01 Apple Inc. Image capture device having tilt and/or perspective correction
US9113078B2 (en) 2009-12-22 2015-08-18 Apple Inc. Image capture device having tilt and/or perspective correction
US9565364B2 (en) 2009-12-22 2017-02-07 Apple Inc. Image capture device having tilt and/or perspective correction
US8497897B2 (en) 2010-08-17 2013-07-30 Apple Inc. Image capture using luminance and chrominance sensors
WO2012024044A1 (en) * 2010-08-17 2012-02-23 Apple Inc. Image capture using luminance and chrominance sensors
US8538132B2 (en) 2010-09-24 2013-09-17 Apple Inc. Component concentricity
US9356061B2 (en) 2013-08-05 2016-05-31 Apple Inc. Image sensor with buried light shield and vertical gate
US9842875B2 (en) 2013-08-05 2017-12-12 Apple Inc. Image sensor with buried light shield and vertical gate
CN106934845A (en) * 2015-12-29 2017-07-07 龙芯中科技术有限公司 Acquiring object method and device
CN106934845B (en) * 2015-12-29 2020-05-12 龙芯中科技术有限公司 Object picking method and device

Similar Documents

Publication Publication Date Title
WO2007100057A1 (en) Imaging device and integrated circuit
JP5507797B2 (en) Head-mounted imaging display device and image generation device
EP0645926B1 (en) Image processing apparatus and method.
JP5238429B2 (en) Stereoscopic image capturing apparatus and stereoscopic image capturing system
JP6021541B2 (en) Image processing apparatus and method
JP4642723B2 (en) Image generating apparatus and image generating method
JP5320524B1 (en) Stereo camera
CN102124749B (en) Stereoscopic image display apparatus
US20160255333A1 (en) Generating Images from Light Fields Utilizing Virtual Viewpoints
JP5204350B2 (en) Imaging apparatus, playback apparatus, and image processing method
WO2012029298A1 (en) Image capture device and image-processing method
WO2013099169A1 (en) Stereo photography device
JP2011053277A (en) Display device and method of controlling parallax barrier, and program
US20120113231A1 (en) 3d camera
WO2018113082A1 (en) 3d panoramic photographing system and method
KR20120030005A (en) Image processing device and method, and stereoscopic image display device
JP6288088B2 (en) Imaging device
CN108805921A (en) Image-taking system and method
US20120120068A1 (en) Display device and display method
JP2006119843A (en) Image forming method, and apparatus thereof
JP2013223008A (en) Image processing device and method
TWI505708B (en) Image capture device with multiple lenses and method for displaying stereo image thereof
TWI462569B (en) 3d video camera and associated control method
JP2842735B2 (en) Multi-viewpoint three-dimensional image input device, image synthesizing device, and image output device thereof
KR101230909B1 (en) Apparatus and method for processing wide angle image

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 07737625; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: JP)