WO2000038434A1 - Method and apparatus for converting monoscopic images into stereoscopic images - Google Patents


Info

Publication number
WO2000038434A1
WO2000038434A1 (also published as WO0038434A1); application PCT/KR1999/000806
Authority
WO
WIPO (PCT)
Prior art keywords
image
original
depth
storing
images
Prior art date
Application number
PCT/KR1999/000806
Other languages
French (fr)
Inventor
Yong Sik Kim
Sung Cheol Jeong
Original Assignee
Park, Nam, Eun
Application filed by Park, Nam, Eun filed Critical Park, Nam, Eun
Publication of WO2000038434A1 publication Critical patent/WO2000038434A1/en

Classifications

    • H04N 13/00: Stereoscopic video systems; multi-view video systems; details thereof
    • G06T 7/12: Image analysis; segmentation; edge-based segmentation
    • H04N 13/239: Image signal generators using stereoscopic image cameras with two 2D image sensors having a relative position equal to or related to the interocular distance
    • H04N 13/261: Image signal generators with monoscopic-to-stereoscopic image conversion
    • H04N 13/275: Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
    • H04N 13/15: Processing image signals for colour aspects of image signals
    • H04N 13/189: Recording image signals; reproducing recorded image signals
    • H04N 13/257: Image signal generators: colour aspects
    • H04N 13/286: Image signal generators having separate monoscopic and stereoscopic modes
    • H04N 13/363: Image reproducers using image projection screens
    • H04N 2013/0077: Stereoscopic image analysis: colour aspects
    • H04N 2013/0081: Depth or disparity estimation from stereoscopic image signals
    • H04N 2013/0092: Image segmentation from stereoscopic image signals
    • H04N 2013/0096: Synchronisation or controlling aspects

Definitions

  • the synthesis of the images is performed in real time as the object is edited, so that the user can watch the result of the synthesis work with his own eyes.
  • each storage space is initialized (step 100) .
  • the two-dimensional original image is stored in the original image-storing unit 10 and duplicated to the object-extracting unit 20.
  • the synthesis stereoscopic image unit 60 is also initialized with the original image because no editing has yet been performed.
  • a contour line is extracted from the image in the object-extracting unit 20 by using a known algorithm (step 121). The size of the minimum rectangle containing the extracted contour line is then calculated (step 122).
  • the size of the space to be allocated to the arrangement units 30, 50, and 70 should be determined with a margin. That is, the size of the space should be determined not by the exact size of the rectangle, but by that size plus the maximum depth added at both the left and right sides, because each pixel is positioned at a new location horizontally shifted according to its depth in the original object image. For example, when the object image is about 140x60 pixels and the maximum depth is 100, the space to be allocated should be 340 (140+100+100) x 60 pixels. Because the object depth arrangement unit 40 is used only to store the depth of each pixel, it does not need the spare margin required by the original object arrangement unit 30 and the edit object arrangement unit 50.
  • the allocated space is initialized (step 124)
  • the original object arrangement unit 30 and the edit object arrangement unit 50 are initialized with a transparent color to prevent damage to the image.
  • All depth values of the object depth arrangement unit 40 are initialized with the value of zero.
  • when the depth value is zero, the human eye perceives the pixel as lying on the screen of the computer monitor. When the depth value is positive, the eye perceives the pixel as being in front of the screen.
  • when the depth value is negative, the eye perceives the pixel as being behind the screen. That is, a pixel with a positive depth value appears to be in front of the screen, and a pixel with a negative depth value appears to be behind the screen.
  • the object depth of the newly extracted image is also initialized with zero.
  • "object depth" means a value indicating how far an object is from the reference position (e.g. the screen of the monitor) when a human watches the screen.
  • the position of each object in the space is determined by adjusting the value of the object depth.
  • the result value of each pixel in the synthesis stereoscopic image unit 60 is obtained by adding the pixel depth value to the object depth value.
  • the newly extracted object image is duplicated to the central position of the allocated space (steps 125 and 126).
  • the image is also duplicated to the original object arrangement unit 30 and the edit object arrangement unit 50.
  • the image of the edit object arrangement unit 50 is the same as that of the original object arrangement unit 30, because the image stored in the edit object arrangement unit 50 has not yet been edited.
  • the image stored in the edit object arrangement unit 50 is the right eye image corresponding to the original image that is considered as the left eye image.
  • the duplication process for the extracted image is applied to all pixels of the object image stored in the original object arrangement unit 30. If pixels are inside the contour line, their color in the object-extracting unit 20 is changed to the transparent color (step 126). This enables the newly extracted object image to be removed from the object-extracting unit 20 without affecting the images of other objects to be extracted afterwards.
  • the margins of the original object arrangement unit 30 and the edit object arrangement unit 50 are filled with pixels of the transparent color (step 127).
  • the spare space is also filled with transparent pixels so as not to damage the image.
  • the image remaining after the extraction process is stored as the "background object".
  • the background object is processed in the same manner as described above. However, its object depth is set to the maximum negative value (-1 x maximum depth), because the background object corresponds to the background of the original image and should appear to be the farthest object from the operator.
  • the edit of the object means that a pixel depth and an object depth are adjusted.
  • the adjustment is achieved by changing the value of each pixel depth to a positive or negative value relative to zero.
  • the absolute value of each depth is appropriately adjusted according to the extent of the depth.
  • the depth of each pixel in the object depth arrangement unit 40 is changed with this process.
  • by referring to the object depth arrangement unit 40, it is possible to produce the right eye image of the object in the edit object arrangement unit 50 from the image stored in the original object arrangement unit 30.
  • when the depth value is positive, the right eye image is horizontally shifted to the left from the position of the left eye image; when the depth value is negative, the right eye image is horizontally shifted to the right.
  • the larger the absolute value of the depth, the larger the distance of the shift.
  • each pixel in the original object arrangement unit 30 is horizontally shifted according to its depth, and is then duplicated to the corresponding position in the edit object arrangement unit 50.
  • the images of the original object arrangement unit 30 and the edit object arrangement unit 50 are synthesized to produce a new image in the synthesis object arrangement unit 70 (step 160).
  • the user can see the synthesized stereoscopic image in real time.
  • the image in the original object arrangement unit 30 is used as the left eye image, and the image in the edit object arrangement unit 50 is used as the right eye image.
  • one line of image data is read from the left eye image and duplicated to the top line of the buffer that stores the synthesized image.
  • one line is then read from the right eye image, and that image data is duplicated to the next line of the buffer.
  • the right eye image and the left eye image are read alternately, one line at a time, and the read image data are written to the buffer in sequence.
  • the stereoscopic image synthesized in this way is applicable to stereo viewers supporting the line blanking method.
  • the synthesized object image is stored in the synthesis object arrangement unit 70, so the user can see the image.
  • the object images in the edit object arrangement unit 50 are synthesized onto the original image stored in the original image storing unit 10, which serves as the left eye image. Synthesis of the images starts from the object whose object depth is smallest. That is, the background image is synthesized onto the original image first, and the resulting image is then synthesized with the object having the second smallest depth value.
  • an object with a small object depth value is covered by another object with a large object depth value.
  • one object is thus screened by another, so that the user sees the objects as in an actual scene.
  • since a method and an apparatus for converting a monoscopic image to a stereoscopic image according to the present invention convert the two-dimensional image of each object into a three-dimensional image, and then synthesize the right eye image of each object with the original two-dimensional image, it is possible to realize the stereoscopic image in greater detail. Furthermore, since the entire image is not formed by editing a single image, but by incorporating independently edited objects, the quality of the stereoscopic image can be ensured. Furthermore, since most two-dimensional images are usable according to the present invention, it is possible to effectively reduce the effort and cost necessary for producing stereoscopic image data.
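The two synthesis operations described in the bullets above, depth-ordered compositing of object images onto the left eye image and line-by-line interleaving for a line-blanking viewer, can be sketched as follows. This is an illustrative sketch only: the data layout (images as lists of rows) and `None` as the transparent color are assumptions, not representations prescribed by the patent.

```python
# Illustrative sketch of the synthesis stage. Images are assumed to be
# lists of rows, and None is an assumed transparent colour sentinel.

def composite(base, objects_by_depth):
    """Paint object images over the base image, starting from the
    smallest (farthest) object depth, so that objects with larger depth
    values cover objects with smaller ones."""
    out = [list(row) for row in base]
    for _depth, obj in sorted(objects_by_depth, key=lambda t: t[0]):
        for r, row in enumerate(obj):
            for c, color in enumerate(row):
                if color is not None:   # skip transparent pixels
                    out[r][c] = color
    return out

def interleave_lines(left_eye, right_eye):
    """Alternately read one line from the left eye image and one line
    from the right eye image, writing them sequentially to the buffer,
    as used by stereo viewers supporting the line blanking method."""
    buffer = []
    for l_row, r_row in zip(left_eye, right_eye):
        buffer.append(list(l_row))
        buffer.append(list(r_row))
    return buffer
```

`composite` mirrors the rule that synthesis starts from the minimum object depth (the background), while `interleave_lines` mirrors the alternating line reads described for the line-blanking output.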

Abstract

The method and apparatus for converting monoscopic images to stereoscopic images works by objectifying the objects in a two-dimensional original image and editing the image object by object. It is possible to realize the stereoscopic image in greater detail by converting the two-dimensional image of each object into a three-dimensional image, and then synthesizing the right eye image of each object with the two-dimensional original image. Furthermore, because most two-dimensional images are usable according to the present invention, the effort and cost necessary for producing stereoscopic image data can be effectively reduced.

Description

METHOD AND APPARATUS FOR CONVERTING MONOSCOPIC IMAGES INTO
STEREOSCOPIC IMAGES
Field of the Invention
The present invention relates to a method and an apparatus for converting monoscopic (i.e., two-dimensional) images into stereoscopic (i.e., three-dimensional) images, and more particularly to a method and an apparatus for converting monoscopic images into stereoscopic images which make it possible to effectively reduce the effort and cost necessary for producing stereoscopic image data by converting the two-dimensional image of an object into a three-dimensional image, and then synthesizing the right eye image of each object with the original two-dimensional image.
Background Art
In general, the technical principle for implementing stereoscopic images is that a person perceives a cubic (three-dimensional) effect when presented with images that have different visual angles for the left eye and the right eye. A stereoscopic image consists of an image photographed from the left eye position (the left eye image) and an image photographed from the right eye position (the right eye image).
Disclosure of the Invention
However, such a technique for implementing stereoscopic images has to use a left eye image and a right eye image photographed at the positions of the left and right eyes by two cameras, respectively. This means that ordinary two-dimensional images cannot be used in a stereoscopic image system. For this reason, it is very difficult for a user to obtain stereoscopic images, and the user has the disadvantage of having to additionally produce stereoscopic images in order to use a stereoscopic image system.
The present invention has been made to overcome the above-described problems, and accordingly it is an object of the present invention to provide a method and an apparatus for converting monoscopic images into stereoscopic images which make it possible to effectively reduce the effort and cost necessary for producing stereoscopic image data by converting the two-dimensional image of an object into a three-dimensional image, and then synthesizing the right eye image of each object with the original two-dimensional image.
To achieve the above object, the present invention provides a method for converting monoscopic images to stereoscopic images, the method comprising the steps of: reading a two-dimensional original image to be converted, storing the image data corresponding to the original image, and initializing the structure of the image data; extracting the image of each object from the original image, objectifying the extracted image, and storing the objectified image; adjusting and editing the objectified image according to the depth of each pixel and the object depth thereof; and synthesizing the edited image with the original image, and outputting the synthesized image to a monitor.
Herein, the extracting step comprises: extracting the contour line of the image of each object; calculating the least rectangle containing the extracted contour line; allocating a space for storing the extracted object; initializing the allocated space; comparing pixels within the rectangle to determine which pixels lie outside the contour line; drawing those pixels with a transparent color in the regions corresponding to the original object arrangement and the edit object arrangement if they lie outside the contour line as a result of said comparison; and, for the pixels lying within the contour line, duplicating them to the original object arrangement and the edit object arrangement, and changing their color to the transparent color in the region of the space from which the object is extracted.
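The extraction step just described can be sketched as follows. The list-of-lists image, the boolean contour mask, and the `None` transparent sentinel are illustrative assumptions; the patent does not prescribe data structures. The margin of `max_depth` on each side follows the allocation rule described later (e.g. a 140-pixel-wide object with maximum depth 100 needs 140+100+100 = 340 pixels).

```python
# Sketch of the claimed extraction step. `image` is a list of rows of
# colour values, `mask` is a boolean grid marking pixels inside the
# object's contour, and TRANSPARENT is an assumed sentinel colour.
TRANSPARENT = None

def extract_object(image, mask, max_depth):
    # Least rectangle containing the contour (bounding box of the mask).
    rows = [r for r in range(len(mask)) if any(mask[r])]
    cols = [c for c in range(len(mask[0]))
            if any(mask[r][c] for r in range(len(mask)))]
    top, bottom, left, right = min(rows), max(rows), min(cols), max(cols)
    h, w = bottom - top + 1, right - left + 1

    # Allocate the object space with a margin of max_depth on each side,
    # so that depth-shifted pixels still fit, and initialize it with the
    # transparent colour.
    obj = [[TRANSPARENT] * (w + 2 * max_depth) for _ in range(h)]

    for r in range(top, bottom + 1):
        for c in range(left, right + 1):
            if mask[r][c]:
                # Inside the contour: copy the pixel into the object
                # arrangement, then blank the source pixel so later
                # extractions are unaffected.
                obj[r - top][c - left + max_depth] = image[r][c]
                image[r][c] = TRANSPARENT
    return obj
```

In the patent the copied pixels go to both the original object arrangement and the edit object arrangement; the sketch returns a single array for brevity.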
To achieve the above object, the present invention also provides an apparatus for converting monoscopic images to stereoscopic images, the apparatus comprising: original image storing means for storing a two-dimensional original image to be converted; object extracting means for extracting the image of each object from the original image, objectifying the extracted image, and storing the objectified image; original object arrangement means for separately storing the image of each object extracted by the object extracting means; object depth arrangement means for adjusting and editing the extracted object image according to each pixel depth and object depth; edit object arrangement means for changing the image of the original object arrangement means according to the depth values of the object depth arrangement means to produce a right eye image of the object, and storing the right eye image; synthesis object arrangement means for synthesizing the images of the original object arrangement means and the edit object arrangement means, and storing the image of each object so that it can be viewed as a stereoscopic image; and synthesis stereoscopic image means for synthesizing the image of the original image storing means with the right eye image of each object stored in the edit object arrangement means.
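As a compact illustration of the four claimed method steps, the toy function below runs the whole pipeline on a single image row. The data layout (a list of colors, a boolean object mask, `None` as the transparent color) and the one-pixel-per-depth-unit shift rule are assumptions for illustration only, not details fixed by the patent.

```python
# Toy end-to-end sketch of the claimed steps on a one-row "image".
TRANSPARENT = None

def to_stereo_row(row, mask, depth):
    # Step 2: extract the masked object, blanking it out of the original.
    obj = [row[c] if mask[c] else TRANSPARENT for c in range(len(row))]
    background = [TRANSPARENT if mask[c] else row[c] for c in range(len(row))]
    # Step 3: shift object pixels left by their depth (right eye image;
    # a positive depth makes the object appear in front of the screen).
    right = [TRANSPARENT] * len(row)
    for c, color in enumerate(obj):
        if color is not TRANSPARENT and 0 <= c - depth < len(row):
            right[c - depth] = color
    # Step 4: composite the shifted object over the background.
    return [right[c] if right[c] is not TRANSPARENT else background[c]
            for c in range(len(row))]
```

Any `None` remaining in the result is the region uncovered when the object was shifted, which is why the patent keeps a separately stored background object and composites objects in depth order.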
Brief Description of Drawings
These and other objects, advantages and features of the invention will become apparent from the following description taken in conjunction with the accompanying drawings, which illustrate a specific embodiment of the invention. In the drawings:
Fig. 1 is a schematic block diagram showing the construction and data flow of an apparatus for converting monoscopic images into stereoscopic images according to the present invention; Fig. 2 is a flow chart briefly showing the sequential algorithm according to the present invention;
Fig. 3 is a flow chart showing in detail the step of extracting the image of an object and the step of objectifying the extracted image in Fig. 2; and Fig. 4 shows an example of a stereoscopic image according to the present invention.
* Description of Reference Numbers *
10: original image storing unit
20: object extracting unit
30: original object arrangement unit
40: object depth arrangement unit
50: edit object arrangement unit
60: synthesis stereoscopic image unit
70: synthesis object arrangement unit
Best Mode for Carrying out the Invention
A preferred embodiment of this invention will now be described with reference to the attached drawings, wherein like reference numerals designate like or corresponding parts throughout the several views.
The principle of the present invention is as follows:
When a human stares at two objects located at different predetermined distances, the human brain perceives that the focus positions of the objects differ from each other. The distance between a human's left and right eyes is constant, so the angle of the image formed on the left eye differs from that formed on the right eye. Therefore, when the left eye image and the right eye image are projected on a screen, the displacement between them appears different according to the distance between the human and the objects. For this reason, the human brain can perceive the distance to an object from the difference in position between the left eye image and the right eye image.
It is possible to produce image data capable of conveying a cubic effect by applying this principle to an ordinary image photographed with a camera. The principle will now be described with reference to an object A, which is closer to the camera, and another object B, which is farther from the camera. If the image data is taken as the left eye image, the image formed on the right eye appears at a position horizontally shifted relative to the left eye image. Since the distance between object A and the camera differs from that between object B and the camera, the horizontal displacement also differs. Because the displacement between the two eyes' images is larger for object A than for object B, the right eye images of objects A and B can be produced by drawing object A at a position shifted by a large displacement and object B at a position shifted by a small displacement. As a result, when the image formed on the left eye is supplied to the left eye and the newly produced image is supplied to the right eye, the user perceives the cubic effect of the objects.
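The disparity principle in the preceding paragraph can be put in minimal numeric form. The linear mapping from depth to pixel shift below is an assumption for illustration; the patent only states that nearer objects are displaced more, that positive depth shifts the right eye image left, and that negative depth shifts it right.

```python
# Minimal numeric sketch of the disparity principle. The one-pixel-per-
# depth-unit rule is an illustrative assumption.

def right_eye_x(left_x, depth):
    # Positive depth (object in front of the screen): shift left.
    # Negative depth (object behind the screen): shift right.
    return left_x - depth

# A close object A (depth 30) is displaced more than a far object B
# (depth 5), so the brain perceives A as nearer than B.
shift_a = abs(right_eye_x(100, 30) - 100)
shift_b = abs(right_eye_x(100, 5) - 100)
```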
In this invention, the new stereoscopic image is made by this method. That is, the stereoscopic image is produced by converting the two-dimensional image of each object into a three-dimensional image, and then synthesizing the image formed on the right eye with the original two-dimensional image. The stereoscopic image is not made by producing a single new right eye image with the whole original image considered as the left eye image and then synthesizing that right eye image with the original. Rather, it is made by producing a separate right eye image for each object in the original image, with each object considered as a left eye image, and then synthesizing the generated right eye images and left eye images respectively.
Hereinafter, the construction and operation of the present invention according to the above-mentioned principle will be described with reference to the accompanying drawings .
Fig. 1 is a schematic block diagram showing the construction and data flow of an apparatus for converting monoscopic images into stereoscopic images according to the present invention. Fig. 2 is a flowchart briefly showing the sequential algorithm according to the present invention. Fig. 3 is a flowchart showing in detail the step of extracting the image of an object and the step of objectifying the extracted image in Fig. 2.
Referring to Fig. 1, reference numeral 10 designates an original image storing unit to which the two-dimensional original image to be converted is input. The original image is used as the left eye image of the synthesized stereoscopic image. Reference numeral 20 designates an object-extracting unit to which the original image stored in the original image storing unit 10 is duplicated. The image of each object is extracted in the object-extracting unit 20. Reference numeral 30 designates an original object arrangement unit in which the extracted image is stored; the extracted image is then removed from the object-extracting unit 20. When all object images have been extracted from the original image, the following processes are performed on the image of each object stored in the original object arrangement unit 30.
Each object corresponds to three data spaces: the original object arrangement unit 30; an object depth arrangement unit 40, which stores the depth value of the object, that is, the value determining how far each pixel of the object appears to protrude from the stereoscopic image; and an edit object arrangement unit 50, which stores the right eye image generated by editing the original object image.
When the depth of a pixel of the object being edited is changed by user input, the contents of the object depth arrangement unit 40 are changed. At that time, the above-mentioned principle is applied to the image of the original object arrangement unit 30 according to the depth values stored in the object depth arrangement unit 40, so that the right eye image of the object is generated in the edit object arrangement unit 50.
Thus, spaces are necessary to store the image edited and synthesized from each object image, the object images synthesized into the entire image, and the entire image itself. Reference numeral 70 designates a synthesis object arrangement unit, a region storing the stereoscopic image of each object generated by synthesizing the image of the original object arrangement unit 30 and the image of the edit object arrangement unit 50. Reference numeral 60 designates a synthesis stereoscopic image unit, a region storing the result image obtained by synthesizing the image of the original image storing unit 10 and the right eye image of each object stored in the edit object arrangement unit 50. The synthesis of images is performed in real time while the object is being edited, so that the user can follow the synthesis work with his or her own eyes.
Hereinafter, the operation of the present invention will be described in detail with reference to Fig. 2.
Firstly, the original image is read, and then each storage space is initialized (step 100).
The two-dimensional original image is stored in the original image storing unit 10, and the object-extracting unit 20 is initialized with the original image. The synthesis stereoscopic image unit 60 is also initialized with the original image because no editing has yet been performed.
Thereafter, the image of each object is extracted from the original image (step 120). This procedure will be described with reference to Fig. 3.
A contour line is extracted from the image in the object-extracting unit 20 by using a known algorithm (step 121). Then the size of the minimum rectangle enclosing the extracted contour line is calculated (step 122).
Thereafter, spaces for storing the extracted object are added to the edit object arrangement unit 50, the synthesis object arrangement unit 70 and the object depth arrangement unit 40, respectively. Note that the size of the space allocated to the arrangement units 30, 50 and 70 should be determined with a margin. That is, the size of the space should not be the exact size of the rectangle, but the size enlarged by the maximum depth at the left and right sides thereof, because each pixel of the original object image is repositioned at a new position horizontally shifted according to its depth. For example, when the size of the object image is about 140x60 pixels and the maximum depth is 100, the size of the space to be allocated should be 340 (140+100+100) x 60 pixels. Because the object depth arrangement unit 40 is used only to store the depth of each pixel, it does not need the spare margin of the original object arrangement unit 30 and the edit object arrangement unit 50.
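The margin rule above can be written as a one-line calculation. A minimal sketch; the function name is an assumption, not terminology from the patent.

```python
def allocated_size(obj_w, obj_h, max_depth):
    """Buffer size for an extracted object image.

    The width gains `max_depth` pixels of margin on each side, so that a
    pixel shifted horizontally by up to the maximum depth still lands
    inside the buffer. The height needs no margin because shifts are
    purely horizontal.
    """
    return (obj_w + 2 * max_depth, obj_h)

# The example from the text: a 140x60 object with maximum depth 100
# requires a 340x60 buffer (140 + 100 + 100 = 340).
assert allocated_size(140, 60, 100) == (340, 60)
```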
Next, the allocated space is initialized (step 124). The original object arrangement unit 30 and the edit object arrangement unit 50 are initialized with the transparent color to prevent damage to the image. All depth values of the object depth arrangement unit 40 are initialized to zero. When the depth value of a pixel is zero, the human eyes perceive the pixel as lying on the screen of the computer monitor. When the depth value is positive, the pixel appears to be in front of the screen; conversely, when the depth value is negative, the pixel appears to be behind the screen.
The object depth of the newly extracted image is also initialized to zero.
Herein, "object depth" means the value indicating how far an object is from the reference position (e.g. the screen of the monitor) when a human watches the screen. The position of each object in space is determined by adjusting the value of the object depth. The result value of each pixel in the synthesis stereoscopic image unit 60 is obtained by adding the pixel depth to the object depth.
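The pixel-depth/object-depth addition is simple arithmetic and can be shown directly. A sketch only; the function name is assumed.

```python
def result_depth(pixel_depth, object_depth):
    """Effective depth of a pixel in the synthesized stereoscopic image:
    the pixel's own depth plus the depth of the object it belongs to."""
    return pixel_depth + object_depth

# A pixel edited to depth 10 on the background object (object depth -100)
# still appears behind the screen plane overall.
assert result_depth(10, -100) == -90
```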
Thereafter, the newly extracted object image is duplicated to the central position of the allocated space (steps 125 and 126).
The image is duplicated to both the original object arrangement unit 30 and the edit object arrangement unit 50. The image of the edit object arrangement unit 50 is the same as that of the original object arrangement unit 30 because no editing has yet been performed; once edited, the image stored in the edit object arrangement unit 50 is the right eye image corresponding to the original image regarded as the left eye image. The duplication process is applied to all pixels of the object image stored in the original object arrangement unit 30. For pixels inside the contour line, the color of the corresponding pixels in the object-extracting unit 20 is changed to the transparent color (step 126). This makes it possible to remove the newly extracted object image from the object-extracting unit 20 without affecting the images of other objects to be extracted subsequently. For pixels outside the contour line, the original object arrangement unit 30 and the edit object arrangement unit 50 are filled with the transparent color (step 127). The spare margin is also filled with the transparent color so as not to damage the image. On the other hand, the image remaining after the extraction process is stored as the "background object". The background object is processed as described above, except that its object depth is set to the maximum negative value (-1 x maximum depth), because the background object corresponds to the background of the original image and is the object that appears farthest from the viewer.
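The extraction step above — lift pixels inside the contour into the object buffer, replace them with the transparent color in the source so later extractions are unaffected — can be sketched with a boolean mask. This is a minimal illustration, not the patent's contour algorithm; `None` stands in for the transparent color and the names are assumptions.

```python
TRANSPARENT = None  # placeholder for the transparent color

def extract_object(image, mask):
    """Lift the pixels selected by `mask` out of `image`.

    `mask[y][x]` is True for pixels inside the contour line. Returns the
    extracted object buffer (transparent outside the contour); the
    selected pixels in the source image are overwritten with the
    transparent color, so subsequent extractions are not affected.
    """
    h, w = len(image), len(image[0])
    obj = [[TRANSPARENT] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if mask[y][x]:
                obj[y][x] = image[y][x]
                image[y][x] = TRANSPARENT
    return obj

img = [[1, 2], [3, 4]]
msk = [[True, False], [False, True]]
obj = extract_object(img, msk)
assert obj == [[1, None], [None, 4]]   # object keeps in-contour pixels
assert img == [[None, 2], [3, None]]   # source loses them
```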
As described above, when the images of all objects have been extracted and objectified, the object images are edited (step 140).
Editing an object means adjusting its pixel depths and its object depth. The adjustment is achieved by changing the value of each pixel depth to a positive or negative value relative to zero. The absolute value of each depth is adjusted appropriately according to the desired extent of depth. Through this process the depth of each pixel in the object depth arrangement unit 40 is changed.
According to the contents changed in the object depth arrangement unit 40, the right eye image of the object can be produced in the edit object arrangement unit 50 from the image stored in the original object arrangement unit 30. As shown in Fig. 4, if the value of the depth is positive, the right eye image is horizontally shifted to the left of the position of the left eye image. On the contrary, if the value of the depth is negative, the right eye image is horizontally shifted to the right. The larger the absolute value of the depth, the larger the shift. In this way, each pixel of the original object arrangement unit 30 is horizontally shifted according to its depth and duplicated to the corresponding position of the edit object arrangement unit 50.
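The per-pixel shift just described (positive depth shifts left, negative shifts right, margin on both sides) can be sketched for a single row. A minimal illustration under the Fig. 4 convention; names and the one-pixel-per-depth-unit scale are assumptions.

```python
def make_right_eye(row, depths, max_depth):
    """Produce one right-eye row from a left-eye row.

    Each pixel is moved by its depth: positive depth shifts it left,
    negative depth shifts it right. The output carries `max_depth`
    pixels of margin on each side so no shift falls outside the buffer;
    `None` stands for the transparent color.
    """
    out = [None] * (len(row) + 2 * max_depth)
    for x, (pix, d) in enumerate(zip(row, depths)):
        out[x + max_depth - d] = pix  # subtracting d moves left for d > 0
    return out

row = ['a', 'b']
right = make_right_eye(row, depths=[1, -1], max_depth=2)
# 'a' (depth +1) shifts left by one; 'b' (depth -1) shifts right by one.
assert right == [None, 'a', None, None, 'b', None]
```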
When the image of the edit object arrangement unit 50 is changed, the images of the original object arrangement unit 30 and the edit object arrangement unit 50 are synthesized to produce a new image in the synthesis object arrangement unit 70 (step 160). Thus the user can see the synthesized stereoscopic image in real time.
In the process of synthesizing the image, the image in the original object arrangement unit 30 is used as the left eye image, and the image in the edit object arrangement unit 50 is used as the right eye image. Firstly, one line of image data is read from the left eye image and duplicated to the top line of the buffer storing the synthesized image. Secondly, one line is read from the right eye image and duplicated to the next line of the buffer. The left eye image and right eye image are read alternately, one line at a time, and the read image data are written to the buffer in sequence. A stereoscopic image synthesized in this way is suitable for stereo viewers supporting the line blanking method.
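The alternating-line synthesis can be sketched directly. A minimal illustration of the interleaving order described above; the function name is assumed.

```python
def interleave_lines(left, right):
    """Build a line-blanking stereo frame: rows are taken alternately
    from the left-eye and right-eye images, left-eye line first."""
    out = []
    for l_row, r_row in zip(left, right):
        out.append(l_row)
        out.append(r_row)
    return out

left = [['L0'], ['L1']]
right = [['R0'], ['R1']]
assert interleave_lines(left, right) == [['L0'], ['R0'], ['L1'], ['R1']]
```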
The synthesized object image is stored in the synthesis object arrangement unit 70, where the user can see it. For the entire image, the object images in the edit object arrangement unit 50 are synthesized onto the original image stored in the original image storing unit 10, which serves as the left eye image. Synthesis starts from the object whose object depth is smallest. That is, the background object is synthesized onto the original image first, and then the object having the second smallest depth value is synthesized onto the result, and so on. Because the synthesis starts from the object with the smallest object depth, an object with a small object depth value is covered by objects with larger object depth values. As a result, one object can be screened by another, so that the user sees the objects as in an actual scene.
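The depth-ordered synthesis above is a painter's algorithm: painting from the smallest object depth up lets nearer objects overwrite farther ones. A minimal sketch; the sparse pixel-dictionary representation and names are assumptions for illustration only.

```python
def composite(original, objects):
    """Paint objects over the original image in order of increasing
    object depth, so nearer objects cover farther ones.

    `original` and each object's pixel map are dicts from (x, y) to a
    color; each object is a (object_depth, pixels) pair.
    """
    result = dict(original)
    for depth, pixels in sorted(objects, key=lambda o: o[0]):
        result.update(pixels)  # later (nearer) objects overwrite earlier ones
    return result

original = {(0, 0): 'bg'}
objects = [(10, {(0, 0): 'near'}), (-100, {(0, 0): 'far'})]
# The near object (depth 10) is painted last and wins the overlap.
assert composite(original, objects) == {(0, 0): 'near'}
```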
Industrial Applicability
As mentioned above, since the method and apparatus for converting a monoscopic image into a stereoscopic image convert the two-dimensional image of each object into a three-dimensional image and then synthesize the right eye image of each object with the original two-dimensional image, the stereoscopic image can be realized in greater detail. Furthermore, since the entire image is not formed by editing a single image, but by incorporating objects that are edited independently, the quality of the stereoscopic image can be ensured. Furthermore, since most existing two-dimensional images are usable with the present invention, the effort and cost necessary for producing stereoscopic image data can be effectively reduced. Although the present invention has been disclosed and illustrated with reference to particular embodiments illustrated in the drawings, the principles involved are susceptible of use in numerous other embodiments that will be apparent to persons skilled in the art. The invention is, therefore, to be limited only as defined by the scope of the appended claims.

Claims

1. A method for converting a monoscopic image to a stereoscopic image, said method comprising the steps of: reading a two-dimensional original image to be converted, storing the image data corresponding to the original image, and initializing the construction of the image data; extracting the image of each object from the original image, objectifying the extracted image, and storing the objectified image; adjusting and editing the objectified image according to the depth of each pixel and the object depth thereof; and synthesizing the edited image and the original image, and outputting the synthesized image to a monitor.
2. The method as claimed in claim 1, wherein said extracting step comprises: extracting the contour line from the image of each object; calculating the least rectangle containing the extracted contour line; allocating a space for storing the extracted object; initializing the allocated space; comparing pixels within the rectangle to determine which pixels lie outside the contour line; drawing those pixels with the transparent color in the regions corresponding to the arrangement of the original object and the arrangement of the edit object if they lie outside the contour line as a result of said comparison; and duplicating the other pixels to the arrangement of the original object and the arrangement of the edit object if they lie within the contour line as a result of said comparison, while changing the color of those pixels to the transparent color in the region corresponding to the space from which the object is extracted.
3. An apparatus for converting a monoscopic image to stereoscopic images, said apparatus comprising: original image storing means for storing a two-dimensional original image to be converted; object extracting means for extracting the image of each object from the original image, objectifying the extracted image, and storing the objectified image; original object arrangement means for separately storing the image of each object extracted by said object extracting means; object depth arrangement means for adjusting and editing the extracted object image according to each pixel depth and object depth; edit object arrangement means for changing the image of the original object arrangement means according to the depth values of the object depth arrangement means to produce a right eye image of the object, and for storing the right eye image; synthesis object arrangement means for synthesizing the images of the original object arrangement means and the edit object arrangement means, and for storing the image of each object so as to be seen as a stereoscopic image; and synthesis stereoscopic image means for synthesizing the image of the original image storing means and the right eye image of each object stored in the edit object arrangement means.
PCT/KR1999/000806 1998-12-22 1999-12-22 Method and apparatus for converting monoscopic images into stereoscopic images WO2000038434A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1998/57178 1998-12-22
KR1019980057178A KR100321897B1 (en) 1998-12-22 1998-12-22 Stereoscopic image conversion method and apparatus

Publications (1)

Publication Number Publication Date
WO2000038434A1 true WO2000038434A1 (en) 2000-06-29

Family

ID=19564569

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR1999/000806 WO2000038434A1 (en) 1998-12-22 1999-12-22 Method and apparatus for converting monoscopic images into stereoscopic images

Country Status (2)

Country Link
KR (1) KR100321897B1 (en)
WO (1) WO2000038434A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007148219A2 (en) 2006-06-23 2007-12-27 Imax Corporation Methods and systems for converting 2d motion pictures for stereoscopic 3d exhibition
CN102113022A (en) * 2008-06-12 2011-06-29 成泳锡 Image conversion method and apparatus
US8761541B2 (en) 2010-05-11 2014-06-24 Thomson Nlicensing Comfort noise and film grain processing for 3 dimensional video
US8842730B2 (en) 2006-01-27 2014-09-23 Imax Corporation Methods and systems for digitally re-mastering of 2D and 3D motion pictures for exhibition with enhanced visual quality
US9348423B2 (en) 2008-07-09 2016-05-24 Apple Inc. Integrated processor for 3D mapping

Families Citing this family (9)

Publication number Priority date Publication date Assignee Title
KR100381817B1 (en) * 1999-11-17 2003-04-26 한국과학기술원 Generating method of stereographic image using Z-buffer
KR100450836B1 (en) * 2001-12-11 2004-10-01 삼성전자주식회사 Apparatus for generating 3-dimensional image from 2-dimensional image
KR20030076904A (en) * 2002-03-23 2003-09-29 (주)맥스소프트 Method for Reconstructing Intermediate View Image using Adaptive Disparity Estimation
KR100436904B1 (en) * 2002-09-06 2004-06-23 강호석 Method for generating stereoscopic image from 2D images
KR101212223B1 (en) * 2005-07-18 2012-12-13 삼성디스플레이 주식회사 Device taking a picture and method to generating the image with depth information
KR100713220B1 (en) * 2006-07-28 2007-05-02 (주)블루비스 3d image editing apparatus and method thereof
KR101789071B1 (en) 2011-01-13 2017-10-24 삼성전자주식회사 Apparatus and method for extracting feature of depth image
WO2013081304A1 (en) * 2011-11-28 2013-06-06 에스케이플래닛 주식회사 Image conversion apparatus and method for converting two-dimensional image to three-dimensional image, and recording medium for same
WO2013081281A1 (en) * 2011-11-29 2013-06-06 에스케이플래닛 주식회사 Image converting apparatus and method for converting two-dimensional image to three-dimensional image, and recording medium for same

Citations (3)

Publication number Priority date Publication date Assignee Title
US5371778A (en) * 1991-11-29 1994-12-06 Picker International, Inc. Concurrent display and adjustment of 3D projection, coronal slice, sagittal slice, and transverse slice images
KR970014416A (en) * 1995-08-12 1997-03-29 박남은 Virtual stereoscopic image conversion device and method
KR19980082849A (en) * 1997-05-09 1998-12-05 윤종용 3D image conversion device and method of 2D continuous image

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
JP3128467B2 (en) * 1995-04-11 2001-01-29 三洋電機株式会社 How to convert 2D video to 3D video


Cited By (9)

Publication number Priority date Publication date Assignee Title
US8842730B2 (en) 2006-01-27 2014-09-23 Imax Corporation Methods and systems for digitally re-mastering of 2D and 3D motion pictures for exhibition with enhanced visual quality
WO2007148219A2 (en) 2006-06-23 2007-12-27 Imax Corporation Methods and systems for converting 2d motion pictures for stereoscopic 3d exhibition
EP2033164A2 (en) * 2006-06-23 2009-03-11 Imax Corporation Methods and systems for converting 2d motion pictures for stereoscopic 3d exhibition
EP2033164A4 (en) * 2006-06-23 2010-11-17 Imax Corp Methods and systems for converting 2d motion pictures for stereoscopic 3d exhibition
US9282313B2 (en) 2006-06-23 2016-03-08 Imax Corporation Methods and systems for converting 2D motion pictures for stereoscopic 3D exhibition
CN102113022A (en) * 2008-06-12 2011-06-29 成泳锡 Image conversion method and apparatus
JP2011523323A (en) * 2008-06-12 2011-08-04 スォング,ヨンソック Video conversion method and apparatus
US9348423B2 (en) 2008-07-09 2016-05-24 Apple Inc. Integrated processor for 3D mapping
US8761541B2 (en) 2010-05-11 2014-06-24 Thomson Nlicensing Comfort noise and film grain processing for 3 dimensional video

Also Published As

Publication number Publication date
KR20000041329A (en) 2000-07-15
KR100321897B1 (en) 2002-05-13

Similar Documents

Publication Publication Date Title
JP5429896B2 (en) System and method for measuring potential eye strain from stereoscopic video
EP2340534B1 (en) Optimal depth mapping
JP4065488B2 (en) 3D image generation apparatus, 3D image generation method, and storage medium
US20100085423A1 (en) Stereoscopic imaging
US20120182403A1 (en) Stereoscopic imaging
US20060119597A1 (en) Image forming apparatus and method
WO2000038434A1 (en) Method and apparatus for converting monoscopic images into stereoscopic images
KR20080065889A (en) Apparatus and method for generating a stereoscopic image from a two-dimensional image using the mesh map
EP1704730A1 (en) Method and apparatus for generating a stereoscopic image
CA2581273A1 (en) System and method for processing video images
EP0707287B1 (en) Image processing apparatus and method
KR101717379B1 (en) System for postprocessing 3-dimensional image
KR20090129175A (en) Method and device for converting image
KR20050082764A (en) Method for reconstructing intermediate video and 3d display using thereof
US6252982B1 (en) Image processing system for handling depth information
US11561508B2 (en) Method and apparatus for processing hologram image data
CA2540538C (en) Stereoscopic imaging
KR20210001254A (en) Method and apparatus for generating virtual view point image
JP4214529B2 (en) Depth signal generation device, depth signal generation program, pseudo stereoscopic image generation device, and pseudo stereoscopic image generation program
JP3126575B2 (en) 3D image generator for stereoscopic vision
US7009606B2 (en) Method and apparatus for generating pseudo-three-dimensional images
KR101071911B1 (en) Method for creation 3 dimensional image
JP4214527B2 (en) Pseudo stereoscopic image generation apparatus, pseudo stereoscopic image generation program, and pseudo stereoscopic image display system
WO1998043442A1 (en) Multiple viewpoint image generation
JP2006186510A (en) Pseudo-stereoscopic image generating apparatus, and program, and pseudo-stereoscopic image display system

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): CA JP US

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
122 Ep: pct application non-entry in european phase