US20110043540A1 - System and method for region classification of 2d images for 2d-to-3d conversion - Google Patents

System and method for region classification of 2d images for 2d-to-3d conversion

Info

Publication number
US20110043540A1
US20110043540A1
Authority
US
United States
Prior art keywords
region
image
dimensional
images
conversion mode
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/531,906
Inventor
James Arthur Fancher
Dong-Qing Zhang
Ana Belen Benitez
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Thomson Licensing LLC
Original Assignee
Thomson Licensing LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Thomson Licensing LLC filed Critical Thomson Licensing LLC
Assigned to Thomson Licensing, LLC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: FANCHER, JAMES ARTHUR; BENITEZ, ANA BELEN; ZHANG, DONG-QING
Publication of US20110043540A1
Assigned to REALD DDMG ACQUISITION, LLC. RELEASE FROM PATENT SECURITY AGREEMENT AT REEL/FRAME NO. 29855/0189. Assignors: CITY NATIONAL BANK

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/261Image signal generators with monoscopic-to-stereoscopic image conversion

Definitions

  • the present disclosure generally relates to computer graphics processing and display systems, and more particularly, to a system and method for region classification of two-dimensional (2D) images for 2D-to-3D conversion.
  • 2D-to-3D conversion is a process to convert existing two-dimensional (2D) films into three-dimensional (3D) stereoscopic films.
  • 3D stereoscopic films reproduce moving images in such a way that depth is perceived and experienced by a viewer, for example, while viewing such a film with passive or active 3D glasses.
  • Stereoscopic imaging is the process of visually combining at least two images of a scene, taken from slightly different viewpoints, to produce the illusion of three-dimensional depth. This technique relies on the fact that human eyes are spaced some distance apart and do not, therefore, view exactly the same scene. By providing each eye with an image from a different perspective, the viewer's eyes are tricked into perceiving depth.
  • the component images are referred to as the “left” and “right” images, also known as a reference image and complementary image, respectively.
  • more than two viewpoints may be combined to form a stereoscopic image.
  • Stereoscopic images may be produced by a computer using a variety of techniques.
  • the “anaglyph” method uses color to encode the left and right components of a stereoscopic image. Thereafter, a viewer wears a special pair of glasses that filters light such that each eye perceives only one of the views.
  • page-flipped stereoscopic imaging is a technique for rapidly switching a display between the right and left views of an image.
  • the viewer wears a special pair of eyeglasses that contains high-speed electronic shutters, typically made with liquid crystal material, which open and close in sync with the images on the display.
  • each eye perceives only one of the component images.
  • lenticular imaging partitions two or more disparate image views into thin slices and interleaves the slices to form a single image. The interleaved image is then positioned behind a lenticular lens that reconstructs the disparate views such that each eye perceives a different view.
  • Some lenticular displays are implemented by a lenticular lens positioned over a conventional LCD display, as commonly found on computer laptops.
  • FIG. 1 illustrates the workflow developed by the process disclosed in U.S. Pat. No. 6,208,348, where FIG. 1 originally appeared as FIG. 5 in U.S. Pat. No. 6,208,348.
  • a system and method for region classification of two-dimensional (2D) images for 2D-to-3D conversion of images to create stereoscopic images are provided.
  • the system and method of the present disclosure utilizes a plurality of conversion methods or modes (e.g., converters) and selects the best approach based on content in the images.
  • the conversion process is conducted on a region-by-region basis where regions in the images are classified to determine the best converter or conversion mode available.
  • the system and method of the present disclosure uses a pattern-recognition-based system that includes two components: a classification component and a learning component.
  • the inputs to the classification component are features extracted from a region of a 2D image and the output is an identifier of the 2D-to-3D conversion modes or converters expected to provide the best results.
  • the learning component optimizes the classification parameters to achieve minimum classification error of the region using a set of training images and corresponding user annotations. For the training images, the user annotates the identifier of the best conversion mode or converter to each region. The learning component then optimizes the classification (i.e., learns) by using the visual features of the regions for training and their annotated converter identifiers. After each region of an image is converted, a second image (e.g., the right eye image or complementary image) is created by projecting the 3D scene, which includes the converted 3D regions or objects, onto another imaging plane with a different camera view angle.
  • a three-dimensional (3D) conversion method for creating stereoscopic images includes acquiring a two-dimensional image; identifying a region of the two-dimensional image; classifying the identified region; selecting a conversion mode based on the classification of the identified region; converting the region into a three-dimensional model based on the selected conversion mode; and creating a complementary image by projecting the three-dimensional model onto an image plane different than an image plane of the two-dimensional image.
  • the method includes extracting features from the region; classifying the extracted features and selecting the conversion mode based on the classification of the extracted features.
  • the extracting step further includes determining a feature vector from the extracted features, wherein the feature vector is employed in the classifying step to classify the identified region.
  • the extracted features may include texture and edge direction features.
  • the conversion mode is a fuzzy object conversion mode or a solid object conversion mode.
  • the classifying step further includes acquiring a plurality of 2D images; selecting a region in each of the plurality of 2D images; annotating the selected region with an optimal conversion mode based on a type of the selected region; and optimizing the classifying step based on the annotated 2D images, wherein the type of the selected region corresponds to a fuzzy object or solid object.
  • a system for three-dimensional (3D) conversion of objects from two-dimensional (2D) images is provided.
  • the system includes a post-processing device configured for creating a complementary image from at least one 2D image; the post-processing device including a region detector configured for detecting at least one region in at least one 2D image; a region classifier configured for classifying a detected region to determine an identifier of at least one converter; the at least one converter configured for converting a detected region into a 3D model; and a reconstruction module configured for creating a complementary image by projecting the selected 3D model onto an image plane different than an image plane of the at least one 2D image.
  • the at least one converter may include a fuzzy object converter or a solid object converter.
  • system further includes a feature extractor configured to extract features from the detected region.
  • the extracted features may include texture and edge direction features.
  • the system further includes a classifier learner configured to acquire a plurality of 2D images, select at least one region in each of the plurality of 2D images and annotate the selected at least one region with the identifier of an optimal converter based on a type of the selected at least one region, wherein the region classifier is optimized based on the annotated 2D images.
  • a program storage device readable by a machine, tangibly embodying a program of instructions executable by the machine to perform method steps for creating stereoscopic images from a two-dimensional (2D) image
  • the method including acquiring a two-dimensional image; identifying a region of the two-dimensional image; classifying the identified region; selecting a conversion mode based on the classification of the identified region; converting the region into a three-dimensional model based on the selected conversion mode; and creating a complementary image by projecting the three-dimensional model onto an image plane different than an image plane of the two-dimensional image.
  • FIG. 1 illustrates a prior art technique for creating a right-eye or complementary image from an input image
  • FIG. 2 is a flow diagram illustrating a system and method for region classification of two-dimensional (2D) images for 2D-to-3D conversion of the images according to an aspect of the present disclosure
  • FIG. 3 is an exemplary illustration of a system for two-dimensional (2D) to three-dimensional (3D) conversion of images for creating stereoscopic images according to an aspect of the present disclosure
  • FIG. 4 is a flow diagram of an exemplary method for converting two-dimensional (2D) images to three-dimensional (3D) images for creating stereoscopic images according to an aspect of the present disclosure.
  • processor or “controller” should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor (“DSP”) hardware, read only memory (“ROM”) for storing software, random access memory (“RAM”), and nonvolatile storage.
  • any switches shown in the figures are conceptual only. Their function may be carried out through the operation of program logic, through dedicated logic, through the interaction of program control and dedicated logic, or even manually, the particular technique being selectable by the implementer as more specifically understood from the context.
  • any element expressed as a means for performing a specified function is intended to encompass any way of performing that function including, for example, a) a combination of circuit elements that performs that function or b) software in any form, including, therefore, firmware, microcode or the like, combined with appropriate circuitry for executing that software to perform the function.
  • the disclosure as defined by such claims resides in the fact that the functionalities provided by the various recited means are combined and brought together in the manner which the claims call for. It is thus regarded that any means that can provide those functionalities are equivalent to those shown herein.
  • the present disclosure deals with the problem of creating 3D geometry from 2D images.
  • the problem arises in various film production applications, including visual effects (VFX) and 2D film to 3D film conversion, among others.
  • Previous systems for 2D-to-3D conversion create a complementary image (also known as a right-eye image) by shifting selected regions in the input image, thereby creating stereo disparity for 3D playback.
  • the process is very inefficient, and it is difficult to convert regions of images to 3D surfaces if the surfaces are curved rather than flat.
  • a system and method for region classification of two-dimensional (2D) images for 2D-to-3D conversion of images to create stereoscopic images are provided.
  • the system and method of the present disclosure provide a 3D-based technique for 2D-to-3D conversion of images to create stereoscopic images.
  • the stereoscopic images can then be employed in further processes to create 3D stereoscopic films.
  • the system and method of the present disclosure utilizes a plurality of conversion methods or modes (e.g., converters) 18 and selects the best approach based on content in the images 14 .
  • the conversion process is conducted on a region-by-region basis where regions 16 in the images 14 are classified to determine the best converter or conversion mode 18 available.
  • the system and method of the present disclosure uses a pattern-recognition-based system that includes two components: a classification component 20 and a learning component 22 .
  • the inputs to the classification component 20 are features extracted from a region 16 of a 2D image 14 and the output of the classification component 20 is an identifier (i.e., an integer number) of the 2D-to-3D conversion modes or converters 18 expected to provide the best results.
  • the learning component 22 or classifier learner, optimizes the classification parameters of the region classifier 20 to achieve minimum classification error of the region using a set of training images 24 and corresponding user annotations. For the training images 24 , the user annotates the identifier of the best conversion mode or converter 18 to each region 16 .
  • the learning component then optimizes the classification (i.e., learns) by using the converter index and visual features of the region.
  • after each region of an image is converted, a second image (e.g., the right eye image or complementary image) is created by projecting the 3D scene 26, which includes the converted 3D regions or objects, onto another imaging plane with a different camera view angle.
  • a scanning device 103 may be provided for scanning film prints 104 , e.g., camera-original film negatives, into a digital format, e.g., a Cineon-format or SMPTE DPX files.
  • the scanning device 103 may comprise, e.g., a telecine or any device that will generate a video output from film such as, e.g., an Arri LocPro™ with video output.
  • alternatively, files from the post production process or digital cinema 106 (e.g., files already in computer-readable form) can be used directly.
  • Potential sources of computer-readable files are AVID™ editors, DPX files, D5 tapes, etc.
  • Scanned film prints are input to a post-processing device 102 , e.g., a computer.
  • the computer is implemented on any of the various known computer platforms having hardware such as one or more central processing units (CPU), memory 110 such as random access memory (RAM) and/or read only memory (ROM) and input/output (I/O) user interface(s) 112 such as a keyboard, cursor control device (e.g., a mouse or joystick) and display device.
  • the computer platform also includes an operating system and micro instruction code.
  • the various processes and functions described herein may either be part of the micro instruction code or part of a software application program (or a combination thereof) which is executed via the operating system.
  • peripheral devices may be connected to the computer platform by various interfaces and bus structures, such as a parallel port, serial port or universal serial bus (USB).
  • Other peripheral devices may include additional storage devices 124 and a printer 128 .
  • the printer 128 may be employed for printing a revised version of the film 126 , e.g., a stereoscopic version of the film, wherein a scene or a plurality of scenes may have been altered or replaced using 3D modeled objects as a result of the techniques described below.
  • files/film prints already in computer-readable form 106 may be directly input into the computer 102.
  • the term “film” used herein may refer to either film prints or digital cinema.
  • a software program includes a three-dimensional (3D) reconstruction module 114 stored in the memory 110 for converting two-dimensional (2D) images to three-dimensional (3D) images for creating stereoscopic images.
  • the 3D reconstruction module 114 includes a region or object detector 116 for identifying objects or regions in 2D images.
  • the region or object detector 116 identifies objects either manually, by outlining image regions containing objects with image editing software, or automatically, by isolating image regions containing objects with detection algorithms, e.g., segmentation algorithms.
  • a feature extractor 119 is provided to extract features from the regions of the 2D images. Feature extractors are known in the art and extract features including but not limited to texture, line direction, edges, etc.
  • the 3D reconstruction module 114 also includes a region classifier 117 configured to classify the regions of the 2D image and determine the best available converter for a particular region of an image.
  • the region classifier 117 will output an identifier, e.g., an integer number, for identifying the conversion mode or converter to be used for the detected region.
  • the 3D reconstruction module 114 includes a 3D conversion module 118 for converting the detected region into a 3D model.
  • the 3D conversion module 118 includes a plurality of converters 118-1 . . . 118-n, where each converter is configured to convert a different type of region.
  • for example, solid objects or regions containing solid objects will be converted by object matcher 118-1, while fuzzy regions or objects will be converted by particle system generator 118-2.
  • An exemplary converter for solid objects is disclosed in commonly owned PCT Patent Application PCT/US2006/044834, filed on Nov. 17, 2006, entitled “SYSTEM AND METHOD FOR MODEL FITTING AND REGISTRATION OF OBJECTS FOR 2D-TO-3D CONVERSION” (hereinafter “the '834 application”), and an exemplary converter for fuzzy objects is disclosed in commonly owned PCT Patent Application PCT/US2006/042586, filed on Oct. 27, 2006, entitled “SYSTEM AND METHOD FOR RECOVERING THREE-DIMENSIONAL PARTICLE SYSTEMS FROM TWO-DIMENSIONAL IMAGES” (hereinafter “the '586 application”), the contents of which are hereby incorporated by reference in their entireties.
  • the system includes a library of 3D models that will be employed by the various converters 118-1 . . . 118-n.
  • the converters 118 will interact with various libraries of 3D models 122 selected for the particular converter or conversion mode.
  • the library of 3D models 122 will include a plurality of 3D object models where each object model relates to a predefined object.
  • the library 122 will include a library of predefined particle systems.
  • An object renderer 120 is provided for rendering the 3D models into a 3D scene to create a complementary image. This is realized by a rasterization process or more advanced techniques, such as ray tracing or photon mapping.
  • FIG. 4 is a flow diagram of an exemplary method for converting two-dimensional (2D) images to three-dimensional (3D) images for creating stereoscopic images according to an aspect of the present disclosure.
  • the post-processing device 102 acquires at least one two-dimensional (2D) image, e.g., a reference or left-eye image.
  • the post-processing device 102 acquires at least one 2D image by obtaining the digital master video file in a computer-readable format, as described above.
  • the digital video file may be acquired by capturing a temporal sequence of video images with a digital video camera.
  • the video sequence may be captured by a conventional film-type camera. In this scenario, the film is scanned via scanning device 103 .
  • the camera will acquire 2D images while moving either the object in a scene or the camera.
  • the camera will acquire multiple viewpoints of the scene.
  • the digital file of the film will include indications or information on locations of the frames, e.g., a frame number, time from start of the film, etc.
  • Each frame of the digital video file will include one image, e.g., I1, I2, . . . , In.
  • a region in the 2D image is identified or detected. It is to be appreciated that a region can contain several objects or can be part of an object. Using the region detector 116 , an object or region may be manually selected and outlined by a user using image editing tools, or alternatively, the object or region may be automatically detected and outlined using image detection algorithms, e.g., object detection or region segmentation algorithms. It is to be appreciated that a plurality of objects or regions may be identified in the 2D image.
  • the region classifier 117 is basically a function that outputs the identifier of the best expected converter according to the features extracted from a region. In various embodiments, different features can be chosen. For a particular classification purpose (i.e., selecting solid object converter 118-1 or particle system converter 118-2), texture features may perform better than other features such as color, since particle systems usually have richer textures than solid objects. Furthermore, many solid objects, such as buildings, have prominent vertical and horizontal lines; therefore, edge direction may be the most relevant feature. Below is one example of how a texture feature and an edge feature can be used as inputs to the region classifier 117.
  • Texture features can be computed in many ways.
  • Gabor wavelet feature is one of the most widely used texture features in image processing.
  • the extraction process first applies a set of Gabor kernels with different spatial frequencies to the image and then computes the total pixel intensity of the filtered image.
  • the filter kernel function is as follows:
  • h(x, y) = 1/(2πσ_g²) exp[−(x² + y²)/(2σ_g²)] exp(j2πF(x cos θ + y sin θ))    (1)
  • Edge features can be extracted by first applying horizontal and vertical line detection algorithms to the 2D image and, then, counting the edge pixels.
  • Line detection can be realized by applying directional edge filters and, then, connecting the small edge segments into lines.
  • Canny edge detection can be used for this purpose and is known in the art. If only horizontal lines and vertical lines (e.g., for the case of buildings) are to be detected, then, a two-dimensional feature vector, a dimension for each direction, is obtained.
  • the two-dimensional case described is for illustration purposes only and can be easily extended to more dimensions.
  • the extracted feature vector is input to the region classifier 117 .
  • the output of the classifier is the identifier of the recommended 2D-to-3D converter 118 . It is to be appreciated that the feature vector could be different depending on different feature extractors.
  • the input to the region classifier 117 can be other features than those described above and can be any feature that is relevant to the content in the region.
  • training data which contains images with different kinds of regions is collected.
  • Each region in the images is then outlined and manually annotated with the identifier of the converter or conversion mode that is expected to perform best based on the type of the region (e.g., corresponding to a fuzzy object such as a tree or a solid object such as a building).
  • a region may contain several objects and all of the objects within the region use the same converter. Therefore, to select a good converter, the content within the region should have homogeneous properties, so that a correct converter can be selected.
  • the learning process takes the annotated training data and builds the best region classifier so as to minimize the difference between the output of the classifier and the annotated identifier for the images in the training set.
  • the region classifier 117 is controlled by a set of parameters. For the same input, changing the parameters of the region classifier 117 gives a different classification output, i.e., a different converter identifier.
  • the learning process automatically and iteratively adjusts the parameters of the classifier until the classifier outputs the best classification results for the training data. Then, the parameters are taken as the optimal parameters for future use. Mathematically, if Mean Square Error is used, the cost function to be minimized can be written as follows:
  • C(Θ) = Σ_i [ f_Θ(R_i) − I_i ]²    (2)
  • where R_i is region i in the training images, I_i is the identifier of the best converter assigned to that region during the annotation process, and f_Θ( ) is the classifier whose parameters are represented by Θ.
  • for example, a Support Vector Machine (SVM) can be used as the region classifier 117, as sketched below.
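As a rough sketch of this learning step, the annotated feature vectors and converter identifiers can be fed to an off-the-shelf SVM. The scikit-learn usage below is an illustration, not the disclosure's prescribed implementation, and the kernel and regularization settings are assumptions:

```python
# Illustrative sketch of the classifier-learning step using scikit-learn.
# The SVM hyperparameters (RBF kernel, C=1.0) are assumptions, not values
# taken from the disclosure.
import numpy as np
from sklearn.svm import SVC

def learn_region_classifier(training_features, annotated_identifiers):
    """Fit f_Theta so that its output matches the annotated converter identifiers.

    training_features: one (N+M)-dimensional feature vector per training region
    annotated_identifiers: the integer converter identifier assigned to each region
    """
    classifier = SVC(kernel="rbf", C=1.0)
    classifier.fit(np.asarray(training_features), np.asarray(annotated_identifiers))
    return classifier
```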
  • the identifier of the converter is then used to select the appropriate converter 118-1 . . . 118-n in the 3D conversion module 118.
  • the selected converter then converts the detected region into a 3D model (step 210 ).
  • Such converters are known in the art.
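A dispatch table keyed by the classifier's integer output is one simple way to realize this selection. The converter names in the sketch below are hypothetical stand-ins for converters 118-1 . . . 118-n:

```python
# Minimal dispatch sketch: route a region to the converter named by the
# classifier's identifier. The converter callables are hypothetical
# placeholders for converters 118-1 ... 118-n.
def convert_region(region, identifier, converters):
    """converters maps identifiers to callables, e.g.
    {1: solid_object_matcher, 2: particle_system_generator}."""
    if identifier not in converters:
        raise ValueError(f"no converter registered for identifier {identifier}")
    return converters[identifier](region)  # returns a 3D model of the region
```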
  • an exemplary converter or conversion mode for solid objects is disclosed in the commonly owned '834 application.
  • This application discloses a system and method for model fitting and registration of objects for 2D-to-3D conversion of images to create stereoscopic images.
  • the system includes a database that stores a variety of 3D models of real-world objects. For a first 2D input image (e.g., the left eye image or reference image), regions to be converted to 3D are identified or outlined by a system operator or automatic detection algorithm. For each region, the system selects a stored 3D model from the database and registers the selected 3D model so the projection of the 3D model matches the image content within the identified region in an optimal way.
  • the matching process can be implemented using geometric approaches or photometric approaches.
  • a second image (e.g., the right eye image or complementary image) is created by projecting the 3D scene, which includes the registered 3D objects with deformed texture, onto another imaging plane with a different camera view angle.
  • an exemplary converter or conversion mode for fuzzy objects is disclosed in the commonly owned '586 application.
  • This application discloses a system and method for recovering three-dimensional (3D) particle systems from two-dimensional (2D) images.
  • the geometry reconstruction system and method recovers 3D particle systems representing the geometry of fuzzy objects from 2D images.
  • the geometry reconstruction system and method identifies fuzzy objects in 2D images, which can, therefore, be generated by a particle system.
  • the identification of the fuzzy objects is either done manually by outlining regions containing the fuzzy objects with image editing tools or by automatic detection algorithms. These fuzzy objects are then further analyzed to develop criteria for matching them to a library of particle systems.
  • the best match is determined by analyzing light properties and surface properties of the image segment both in the frame and temporally, i.e., in a sequential series of images.
  • the system and method simulate and render a particle system selected from the library, and then, compare the rendering result with the fuzzy object in the image.
  • the system and method determines whether the particle system is a good match or not according to certain matching criteria.
  • at step 212, the complementary image (e.g., the right-eye image) is created by rendering the 3D scene, including the converted 3D objects and a background plate, onto another imaging plane, different from the imaging plane of the input 2D image, that is determined by a virtual right camera.
  • the rendering may be realized by a rasterization process as in the standard graphics card pipeline, or by more advanced techniques such as ray tracing used in the professional post-production workflow.
  • the position of the new imaging plane is determined by the position and view angle of the virtual right camera.
  • the setting of the position and view angle of the virtual right camera should result in an imaging plane that is parallel to the imaging plane of the left camera that yields the input image. In one embodiment, this can be achieved by tweaking the position and view angle of the virtual camera and getting feedback by viewing the resulting 3D playback on a display device.
  • the position and view angle of the right camera is adjusted so that the created stereoscopic image can be viewed in the most comfortable way by the viewers.
  • the projected scene is then stored as a complementary image, e.g., the right-eye image, to the input image, e.g., the left-eye image (step 214 ).
  • the complementary image will be associated with the input image in any conventional manner so that they may be retrieved together at a later point in time.
  • the complementary image may be saved with the input, or reference, image in a digital file 130 creating a stereoscopic film.
  • the digital file 130 may be stored in storage device 124 for later retrieval, e.g., to print a stereoscopic version of the original film.
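As a highly simplified sketch of this projection step: for a virtual right camera with an imaging plane parallel to the left camera's, the reprojection of each pixel reduces to a horizontal disparity shift. The disclosure itself renders the full 3D scene by rasterization or ray tracing, so the per-pixel depth shift below is only an approximation, and it leaves occlusion holes unfilled:

```python
# Simplified stand-in for the complementary-image step: instead of rendering
# a full 3D scene, shift each left-eye pixel by the disparity implied by its
# depth under a parallel virtual right camera. Holes and occlusions are left
# unfilled; the disclosure's rasterization or ray-tracing pass handles these.
import numpy as np

def render_right_eye(left_image, depth, focal_length, baseline):
    h, w = depth.shape
    right = np.zeros_like(left_image)
    disparity = np.rint(focal_length * baseline / np.maximum(depth, 1e-6)).astype(int)
    for y in range(h):
        for x in range(w):
            xr = x - disparity[y, x]
            if 0 <= xr < w:
                right[y, xr] = left_image[y, x]
    return right
```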

Abstract

A system and method for region classification of two-dimensional images for 2D-to-3D conversion of images to create stereoscopic images are provided. The system and method of the present disclosure provide for acquiring a two-dimensional image, identifying a region of the 2D image, extracting features from the region, classifying the extracted features of the region, selecting a conversion mode based on the classification of the identified region, converting the region into a 3D model based on the selected conversion mode, and creating a complementary image by projecting the 3D model onto an image plane different than an image plane of the 2D image. A learning component optimizes the classification parameters to achieve minimum classification error of the region using a set of training images and corresponding user annotations.

Description

    TECHNICAL FIELD OF THE INVENTION
  • The present disclosure generally relates to computer graphics processing and display systems, and more particularly, to a system and method for region classification of two-dimensional (2D) images for 2D-to-3D conversion.
  • BACKGROUND OF THE INVENTION
  • 2D-to-3D conversion is a process to convert existing two-dimensional (2D) films into three-dimensional (3D) stereoscopic films. 3D stereoscopic films reproduce moving images in such a way that depth is perceived and experienced by a viewer, for example, while viewing such a film with passive or active 3D glasses. There has been significant interest from major film studios in converting legacy films into 3D stereoscopic films.
  • Stereoscopic imaging is the process of visually combining at least two images of a scene, taken from slightly different viewpoints, to produce the illusion of three-dimensional depth. This technique relies on the fact that human eyes are spaced some distance apart and do not, therefore, view exactly the same scene. By providing each eye with an image from a different perspective, the viewer's eyes are tricked into perceiving depth. Typically, where two distinct perspectives are provided, the component images are referred to as the “left” and “right” images, also known as a reference image and complementary image, respectively. However, those skilled in the art will recognize that more than two viewpoints may be combined to form a stereoscopic image.
  • Stereoscopic images may be produced by a computer using a variety of techniques. For example, the “anaglyph” method uses color to encode the left and right components of a stereoscopic image. Thereafter, a viewer wears a special pair of glasses that filters light such that each eye perceives only one of the views.
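For instance, a red/cyan anaglyph can be assembled by taking the red channel from the left view and the green and blue channels from the right view; the sketch below assumes RGB channel order:

```python
# Classic red/cyan anaglyph encoding: red channel from the left view,
# green and blue channels from the right view (assumes RGB channel order).
import numpy as np

def make_anaglyph(left_rgb: np.ndarray, right_rgb: np.ndarray) -> np.ndarray:
    anaglyph = right_rgb.copy()
    anaglyph[..., 0] = left_rgb[..., 0]
    return anaglyph
```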
  • Similarly, page-flipped stereoscopic imaging is a technique for rapidly switching a display between the right and left views of an image. Again, the viewer wears a special pair of eyeglasses that contains high-speed electronic shutters, typically made with liquid crystal material, which open and close in sync with the images on the display. As in the case of anaglyphs, each eye perceives only one of the component images.
  • Other stereoscopic imaging techniques have been recently developed that do not require special eyeglasses or headgear. For example, lenticular imaging partitions two or more disparate image views into thin slices and interleaves the slices to form a single image. The interleaved image is then positioned behind a lenticular lens that reconstructs the disparate views such that each eye perceives a different view. Some lenticular displays are implemented by a lenticular lens positioned over a conventional LCD display, as commonly found on computer laptops.
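For the two-view case with one-pixel-wide slices, the interleaving amounts to alternating image columns between the views, as this sketch illustrates:

```python
# Two-view lenticular interleaving with one-pixel-wide vertical slices:
# even columns come from the left view, odd columns from the right view.
import numpy as np

def interleave_views(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    interleaved = left.copy()
    interleaved[:, 1::2] = right[:, 1::2]
    return interleaved
```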
  • Another stereoscopic imaging technique involves shifting regions of an input image to create a complementary image. Such techniques have been utilized in a manual 2D-to-3D film conversion system developed by a company called In-Three, Inc. of Westlake Village, Calif. The 2D-to-3D conversion system is described in U.S. Pat. No. 6,208,348, issued on Mar. 27, 2001 to Kaye. Although referred to as a 3D system, the process is actually 2D because it does not convert a 2D image back into a 3D scene, but rather manipulates the 2D input image to create the right-eye image. FIG. 1 illustrates the workflow developed by the process disclosed in U.S. Pat. No. 6,208,348, where FIG. 1 originally appeared as FIG. 5 in U.S. Pat. No. 6,208,348. The process can be described as follows: for an input image, regions 2, 4, 6 are first outlined manually. An operator then shifts each region to create stereo disparity, e.g., 8, 10, 12. The depth of each region can be seen by viewing its 3D playback on another display with 3D glasses. The operator adjusts the shifting distance of each region until an optimal depth is achieved. However, the 2D-to-3D conversion is achieved mostly manually by shifting the regions in the input 2D images to create the complementary right-eye images. The process is very inefficient and requires enormous human intervention.
  • Recently, automatic 2D-to-3D conversion systems and methods have been proposed. However, certain methods have better results than others depending on the type of object being converted in the image, e.g., fuzzy objects, solid objects, etc. Since most images contain both fuzzy objects and solid objects, an operator of the system would need to manually select the objects in the images and then manually select the corresponding 2D-to-3D conversion mode for each object. Therefore, a need exists for techniques for automatically selecting the best 2D-to-3D conversion mode among a list of candidates to achieve the best results based on the local image content.
  • SUMMARY
  • A system and method for region classification of two-dimensional (2D) images for 2D-to-3D conversion of images to create stereoscopic images are provided. The system and method of the present disclosure utilizes a plurality of conversion methods or modes (e.g., converters) and selects the best approach based on content in the images. The conversion process is conducted on a region-by-region basis where regions in the images are classified to determine the best converter or conversion mode available. The system and method of the present disclosure uses a pattern-recognition-based system that includes two components: a classification component and a learning component. The inputs to the classification component are features extracted from a region of a 2D image and the output is an identifier of the 2D-to-3D conversion modes or converters expected to provide the best results. The learning component optimizes the classification parameters to achieve minimum classification error of the region using a set of training images and corresponding user annotations. For the training images, the user annotates the identifier of the best conversion mode or converter to each region. The learning component then optimizes the classification (i.e., learns) by using the visual features of the regions for training and their annotated converter identifiers. After each region of an image is converted, a second image (e.g., the right eye image or complementary image) is created by projecting the 3D scene, which includes the converted 3D regions or objects, onto another imaging plane with a different camera view angle.
  • According to one aspect of the present disclosure, a three-dimensional (3D) conversion method for creating stereoscopic images includes acquiring a two-dimensional image; identifying a region of the two-dimensional image; classifying the identified region; selecting a conversion mode based on the classification of the identified region; converting the region into a three-dimensional model based on the selected conversion mode; and creating a complementary image by projecting the three-dimensional model onto an image plane different than an image plane of the two-dimensional image.
  • In another aspect, the method includes extracting features from the region; classifying the extracted features and selecting the conversion mode based on the classification of the extracted features. The extracting step further includes determining a feature vector from the extracted features, wherein the feature vector is employed in the classifying step to classify the identified region. The extracted features may include texture and edge direction features.
  • In a further aspect of the present disclosure, the conversion mode is a fuzzy object conversion mode or a solid object conversion mode.
  • In yet a further aspect of the present disclosure, the classifying step further includes acquiring a plurality of 2D images; selecting a region in each of the plurality of 2D images; annotating the selected region with an optimal conversion mode based on a type of the selected region; and optimizing the classifying step based on the annotated 2D images, wherein the type of the selected region corresponds to a fuzzy object or solid object.
  • According to another aspect of the present disclosure, a system for three-dimensional (3D) conversion of objects from two-dimensional (2D) images is provided.
  • The system includes a post-processing device configured for creating a complementary image from at least one 2D image; the post-processing device including a region detector configured for detecting at least one region in at least one 2D image; a region classifier configured for classifying a detected region to determine an identifier of at least one converter; the at least one converter configured for converting a detected region into a 3D model; and a reconstruction module configured for creating a complementary image by projecting the selected 3D model onto an image plane different than an image plane of the at least one 2D image. The at least one converter may include a fuzzy object converter or a solid object converter.
  • In another aspect, the system further includes a feature extractor configured to extract features from the detected region. The extracted features may include texture and edge direction features.
  • According to yet another aspect, the system further includes a classifier learner configured to acquire a plurality of 2D images, select at least one region in each of the plurality of 2D images and annotate the selected at least one region with the identifier of an optimal converter based on a type of the selected at least one region, wherein the region classifier is optimized based on the annotated 2D images.
  • In a further aspect of the present disclosure, a program storage device readable by a machine, tangibly embodying a program of instructions executable by the machine to perform method steps for creating stereoscopic images from a two-dimensional (2D) image is provided, the method including acquiring a two-dimensional image; identifying a region of the two-dimensional image; classifying the identified region; selecting a conversion mode based on the classification of the identified region; converting the region into a three-dimensional model based on the selected conversion mode; and creating a complementary image by projecting the three-dimensional model onto an image plane different than an image plane of the two-dimensional image.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • These, and other aspects, features and advantages of the present disclosure will be described or become apparent from the following detailed description of the preferred embodiments, which is to be read in connection with the accompanying drawings.
  • In the drawings, wherein like reference numerals denote similar elements throughout the views:
  • FIG. 1 illustrates a prior art technique for creating a right-eye or complementary image from an input image;
  • FIG. 2 is a flow diagram illustrating a system and method for region classification of two-dimensional (2D) images for 2D-to-3D conversion of the images according to an aspect of the present disclosure;
  • FIG. 3 is an exemplary illustration of a system for two-dimensional (2D) to three-dimensional (3D) conversion of images for creating stereoscopic images according to an aspect of the present disclosure; and
  • FIG. 4 is a flow diagram of an exemplary method for converting two-dimensional (2D) images to three-dimensional (3D) images for creating stereoscopic images according to an aspect of the present disclosure.
  • It should be understood that the drawing(s) is for purposes of illustrating the concepts of the disclosure and is not necessarily the only possible configuration for illustrating the disclosure.
  • DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
  • It should be understood that the elements shown in the figures may be implemented in various forms of hardware, software or combinations thereof. Preferably, these elements are implemented in a combination of hardware and software on one or more appropriately programmed general-purpose devices, which may include a processor, memory and input/output interfaces.
  • The present description illustrates the principles of the present disclosure. It will thus be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described or shown herein, embody the principles of the disclosure and are included within its spirit and scope.
  • All examples and conditional language recited herein are intended for pedagogical purposes to aid the reader in understanding the principles of the disclosure and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions.
  • Moreover, all statements herein reciting principles, aspects, and embodiments of the disclosure, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future, i.e., any elements developed that perform the same function, regardless of structure.
  • Thus, for example, it will be appreciated by those skilled in the art that the block diagrams presented herein represent conceptual views of illustrative circuitry embodying the principles of the disclosure. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudocode, and the like represent various processes which may be substantially represented in computer readable media and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.
  • The functions of the various elements shown in the figures may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software. When provided by a processor, the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which may be shared. Moreover, explicit use of the term “processor” or “controller” should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor (“DSP”) hardware, read only memory (“ROM”) for storing software, random access memory (“RAM”), and nonvolatile storage.
  • Other hardware, conventional and/or custom, may also be included. Similarly, any switches shown in the figures are conceptual only. Their function may be carried out through the operation of program logic, through dedicated logic, through the interaction of program control and dedicated logic, or even manually, the particular technique being selectable by the implementer as more specifically understood from the context.
  • In the claims hereof, any element expressed as a means for performing a specified function is intended to encompass any way of performing that function including, for example, a) a combination of circuit elements that performs that function or b) software in any form, including, therefore, firmware, microcode or the like, combined with appropriate circuitry for executing that software to perform the function. The disclosure as defined by such claims resides in the fact that the functionalities provided by the various recited means are combined and brought together in the manner which the claims call for. It is thus regarded that any means that can provide those functionalities are equivalent to those shown herein.
  • The present disclosure deals with the problem of creating 3D geometry from 2D images. The problem arises in various film production applications, including visual effects (VFX) and 2D film to 3D film conversion, among others. Previous systems for 2D-to-3D conversion create a complementary image (also known as a right-eye image) by shifting selected regions in the input image, thereby creating stereo disparity for 3D playback. The process is very inefficient, and it is difficult to convert regions of images to 3D surfaces if the surfaces are curved rather than flat.
  • There are different 2D-to-3D conversion approaches that work better or worse depending on the content or the objects depicted in a region of the 2D image. For example, 3D particle systems work better for fuzzy objects, whereas 3D geometry model fitting does a better job for solid objects. These two approaches actually complement each other, since it is in general difficult to estimate accurate geometry for fuzzy objects and, conversely, to represent solid objects with particle systems. Most 2D images in movies contain both fuzzy objects, such as trees, and solid objects, such as buildings, which are best represented by particle systems and 3D geometry models, respectively. So, assuming there are several available 2D-to-3D conversion modes, the problem is to select the best approach according to the region content. Therefore, for general 2D-to-3D conversion, the present disclosure provides techniques to combine these two approaches, among others, to achieve the best results. The present disclosure provides a system and method for general 2D-to-3D conversion that automatically switches between several available conversion approaches according to the local content of the images. The 2D-to-3D conversion is, therefore, fully automated.
  • A system and method for region classification of two-dimensional (2D) images for 2D-to-3D conversion of images to create stereoscopic images are provided. The system and method of the present disclosure provide a 3D-based technique for 2D-to-3D conversion of images to create stereoscopic images. The stereoscopic images can then be employed in further processes to create 3D stereoscopic films. Referring to FIG. 2, the system and method of the present disclosure utilizes a plurality of conversion methods or modes (e.g., converters) 18 and selects the best approach based on content in the images 14. The conversion process is conducted on a region-by-region basis where regions 16 in the images 14 are classified to determine the best converter or conversion mode 18 available. The system and method of the present disclosure uses a pattern-recognition-based system that includes two components: a classification component 20 and a learning component 22. The inputs to the classification component 20, or region classifier, are features extracted from a region 16 of a 2D image 14 and the output of the classification component 20 is an identifier (i.e., an integer number) of the 2D-to-3D conversion modes or converters 18 expected to provide the best results. The learning component 22, or classifier learner, optimizes the classification parameters of the region classifier 20 to achieve minimum classification error of the region using a set of training images 24 and corresponding user annotations. For the training images 24, the user annotates the identifier of the best conversion mode or converter 18 to each region 16. The learning component then optimizes the classification (i.e., learns) by using the converter index and visual features of the region. After each region of an image is converted, a second image (e.g., the right eye image or complementary image) is created by projecting the 3D scene 26, which includes the converted 3D regions or objects, onto another imaging plane with a different camera view angle.
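The FIG. 2 workflow can be summarized as a short pipeline. The sketch below composes the hypothetical helper functions sketched elsewhere in this document (detect_regions, classify_region, convert_region); all names are illustrative, and the final scene rendering is left abstract:

```python
# End-to-end sketch of the FIG. 2 workflow, composed from the hypothetical
# helpers sketched elsewhere in this document. The renderer argument stands
# in for the projection onto the virtual right camera's imaging plane.
def convert_2d_to_3d(gray_image, classifier, converters, renderer):
    labels, region_ids = detect_regions(gray_image)
    scene = []
    for rid in region_ids:
        region = gray_image * (labels == rid)            # isolate the region
        identifier = classify_region(region, classifier) # pick a converter
        scene.append(convert_region(region, identifier, converters))
    return renderer(scene)  # complementary (right-eye) image of the 3D scene
```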
  • Referring now to FIG. 3, exemplary system components according to an embodiment of the present disclosure are shown. A scanning device 103 may be provided for scanning film prints 104, e.g., camera-original film negatives, into a digital format, e.g., a Cineon-format or SMPTE DPX files. The scanning device 103 may comprise, e.g., a telecine or any device that will generate a video output from film such as, e.g., an Arri LocPro™ with video output. Alternatively, files from the post production process or digital cinema 106 (e.g., files already in computer-readable form) can be used directly. Potential sources of computer-readable files are AVID™ editors, DPX files, D5 tapes etc.
  • Scanned film prints are input to a post-processing device 102, e.g., a computer. The computer is implemented on any of the various known computer platforms having hardware such as one or more central processing units (CPU), memory 110 such as random access memory (RAM) and/or read only memory (ROM), and input/output (I/O) user interface(s) 112 such as a keyboard, cursor control device (e.g., a mouse or joystick) and display device. The computer platform also includes an operating system and micro instruction code. The various processes and functions described herein may either be part of the micro instruction code or part of a software application program (or a combination thereof) which is executed via the operating system. In addition, various other peripheral devices may be connected to the computer platform by various interfaces and bus structures, such as a parallel port, serial port or universal serial bus (USB). Other peripheral devices may include additional storage devices 124 and a printer 128. The printer 128 may be employed for printing a revised version of the film 126, e.g., a stereoscopic version of the film, wherein a scene or a plurality of scenes may have been altered or replaced using 3D modeled objects as a result of the techniques described below.
  • Alternatively, files/film prints already in computer-readable form 106 (e.g., digital cinema, which for example, may be stored on external hard drive 124) may be directly input into the computer 102. Note that the term “film” used herein may refer to either film prints or digital cinema.
  • A software program includes a three-dimensional (3D) reconstruction module 114 stored in the memory 110 for converting two-dimensional (2D) images to three-dimensional (3D) images for creating stereoscopic images. The 3D reconstruction module 114 includes a region or object detector 116 for identifying objects or regions in 2D images. The region or object detector 116 identifies objects either manually, by outlining image regions containing objects with image editing software, or automatically, by isolating image regions containing objects with detection algorithms, e.g., segmentation algorithms. A feature extractor 119 is provided to extract features from the regions of the 2D images. Feature extractors are known in the art and extract features including but not limited to texture, line direction, edges, etc.
  • The 3D reconstruction module 114 also includes a region classifier 117 configured to classify the regions of the 2D image and determine the best available converter for a particular region of an image. The region classifier 117 will output an identifier, e.g., an integer number, for identifying the conversion mode or converter to be used for the detected region. Furthermore, the 3D reconstruction module 114 includes a 3D conversion module 118 for converting the detected region into a 3D model. The 3D conversion module 118 includes a plurality of converters 118-1 . . . 118-n, where each converter is configured to convert a different type of region. For example, solid objects or regions containing solid objects will be converted by object matcher 118-1, while fuzzy regions or objects will be converted by particle system generator 118-2. An exemplary converter for solid objects is disclosed in commonly owned PCT Patent Application PCT/US2006/044834, filed on Nov. 17, 2006, entitled “SYSTEM AND METHOD FOR MODEL FITTING AND REGISTRATION OF OBJECTS FOR 2D-TO-3D CONVERSION” (hereinafter “the '834 application”) and an exemplary converter for fuzzy objects is disclosed in commonly owned PCT Patent Application PCT/US2006/042586, filed on Oct. 27, 2006, entitled “SYSTEM AND METHOD FOR RECOVERING THREE-DIMENSIONAL PARTICLE SYSTEMS FROM TWO-DIMENSIONAL IMAGES” (hereinafter “the '586 application”), the contents of which are hereby incorporated by reference in their entireties.
  • It is to be appreciated that the system includes a library of 3D models that will be employed by the various converters 118-1 . . . 118-n. The converters 118 will interact with various libraries of 3D models 122 selected for the particular converter or conversion mode. For example, for the object matcher 118-1, the library of 3D models 122 will include a plurality of 3D object models where each object model relates to a predefined object. For the particle system generator 118-2, the library 122 will include a library of predefined particle systems.
  • An object renderer 120 is provided for rendering the 3D models into a 3D scene to create a complementary image. This is realized by a rasterization process or more advanced techniques, such as ray tracing or photon mapping.
  • FIG. 4 is a flow diagram of an exemplary method for converting two-dimensional (2D) images to three-dimensional (3D) images for creating stereoscopic images according to an aspect of the present disclosure. Initially, at step 202, the post-processing device 102 acquires at least one two-dimensional (2D) image, e.g., a reference or left-eye image. The post-processing device 102 acquires at least one 2D image by obtaining the digital master video file in a computer-readable format, as described above. The digital video file may be acquired by capturing a temporal sequence of video images with a digital video camera. Alternatively, the video sequence may be captured by a conventional film-type camera. In this scenario, the film is scanned via scanning device 103. The camera will acquire 2D images while moving either the object in a scene or the camera. The camera will acquire multiple viewpoints of the scene.
  • It is to be appreciated that whether the film is scanned or already in digital format, the digital file of the film will include indications or information on locations of the frames, e.g., a frame number, time from start of the film, etc. Each frame of the digital video file will include one image, e.g., I1, I2, . . . , In.
  • In step 204, a region in the 2D image is identified or detected. It is to be appreciated that a region can contain several objects or can be part of an object. Using the region detector 116, an object or region may be manually selected and outlined by a user using image editing tools, or alternatively, the object or region may be automatically detected and outlined using image detection algorithms, e.g., object detection or region segmentation algorithms. It is to be appreciated that a plurality of objects or regions may be identified in the 2D image.
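• As one illustration of the automatic path in step 204, the following minimal sketch uses OpenCV thresholding and contour extraction as a stand-in for the detection or segmentation algorithms mentioned above; the disclosure does not prescribe a particular algorithm, and the function and parameter names here are illustrative assumptions only.

```python
import cv2
import numpy as np

def detect_regions(image_bgr, min_area=500):
    """Return one binary mask per candidate region in a 2D image."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # Otsu thresholding separates foreground-like regions from background.
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    masks = []
    for contour in contours:
        if cv2.contourArea(contour) < min_area:
            continue  # skip tiny regions unlikely to be meaningful objects
        mask = np.zeros(gray.shape, dtype=np.uint8)
        cv2.drawContours(mask, [contour], -1, 255, thickness=cv2.FILLED)
        masks.append(mask)
    return masks
```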
• Once the region is identified or detected, features are extracted, at step 206, from the detected region via the feature extractor 119, and the extracted features are classified, at step 208, by the region classifier 117 to determine an identifier of at least one of the plurality of converters 118 or conversion modes. The region classifier 117 is essentially a function that outputs the identifier of the best expected converter according to the features extracted from regions. In various embodiments, different features can be chosen. For a particular classification purpose (i.e., selecting between the solid object converter 118-1 and the particle system converter 118-2), texture features may perform better than other features such as color, since particle systems usually have richer textures than solid objects. Furthermore, many solid objects, such as buildings, have prominent vertical and horizontal lines; therefore, edge direction may be the most relevant feature. Below is one example of how texture features and edge features can be used as inputs to the region classifier 117.
• Texture features can be computed in many ways. The Gabor wavelet feature is one of the most widely used texture features in image processing. The extraction process first applies a set of Gabor kernels with different spatial frequencies to the image and then computes the total pixel intensity of the filtered image. The filter kernel function is as follows:
• $h(x, y) = \frac{1}{2\pi\sigma_g^2}\,\exp\!\left[-\frac{x^2 + y^2}{2\pi\sigma_g^2}\right]\exp\!\left(j 2\pi F (x\cos\theta + y\sin\theta)\right)$   (1)
• where F is the spatial frequency and θ is the direction of the Gabor filter. Assuming, for illustration purposes, 3 levels of spatial frequency and 4 directions (covering only angles from 0 to π due to symmetry), the number of Gabor filter features is 12.
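• The sketch below illustrates this 12-dimensional texture feature, assuming 3 spatial frequencies and 4 directions and using OpenCV's getGaborKernel as a real-valued stand-in for the complex kernel h(x, y) of Equation (1); the kernel size and sigma are illustrative assumptions.

```python
import cv2
import numpy as np

def gabor_features(gray, frequencies=(0.1, 0.2, 0.4), n_directions=4):
    """12 texture features: total filtered intensity per (F, theta) pair."""
    features = []
    for f in frequencies:
        for k in range(n_directions):
            theta = k * np.pi / n_directions  # angles in [0, pi) by symmetry
            kernel = cv2.getGaborKernel(ksize=(31, 31), sigma=4.0,
                                        theta=theta, lambd=1.0 / f,
                                        gamma=1.0, psi=0)
            filtered = cv2.filter2D(gray.astype(np.float32),
                                    cv2.CV_32F, kernel)
            # Total pixel intensity of the filtered image, as described above.
            features.append(float(np.abs(filtered).sum()))
    return np.array(features)
```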
• Edge features can be extracted by first applying horizontal and vertical line detection algorithms to the 2D image and then counting the edge pixels. Line detection can be realized by applying directional edge filters and then connecting the small edge segments into lines. Canny edge detection, which is known in the art, can be used for this purpose. If only horizontal and vertical lines are to be detected (e.g., for the case of buildings), a two-dimensional feature vector, with one dimension per direction, is obtained. The two-dimensional case is described for illustration purposes only and can be easily extended to more dimensions.
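• The following sketch of the two-dimensional edge feature combines Canny edge detection with Sobel gradient directions to count horizontal-line and vertical-line edge pixels; the thresholds and angular tolerance are illustrative assumptions, and a production implementation would also link edge segments into lines as described above.

```python
import cv2
import numpy as np

def edge_direction_features(gray, angle_tol_deg=10):
    """Return [horizontal_count, vertical_count] over Canny edge pixels."""
    edges = cv2.Canny(gray, 100, 200)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    angle = np.degrees(np.arctan2(gy, gx)) % 180  # gradient direction
    on_edge = edges > 0
    # A horizontal line has a vertical gradient (near 90 degrees); a
    # vertical line has a horizontal gradient (near 0 or 180 degrees).
    horizontal = np.sum(on_edge & (np.abs(angle - 90) < angle_tol_deg))
    vertical = np.sum(on_edge & ((angle < angle_tol_deg) |
                                 (angle > 180 - angle_tol_deg)))
    return np.array([horizontal, vertical], dtype=np.float32)
```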
• If the texture features have N dimensions and the edge direction features have M dimensions, then all of these features can be put together in a single feature vector with (N+M) dimensions. For each region, the extracted feature vector is input to the region classifier 117. The output of the classifier is the identifier of the recommended 2D-to-3D converter 118. It is to be appreciated that the feature vector may differ depending on the feature extractors used. Furthermore, the input to the region classifier 117 can include features other than those described above, namely any feature that is relevant to the content in the region.
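• Using the two extractors sketched above (N = 12 texture dimensions, M = 2 edge dimensions), the per-region feature vector could be assembled as follows, where gray is assumed to hold the grayscale pixels of the region:

```python
import numpy as np

# Concatenate texture and edge features into one (N + M)-dimensional
# vector; here N = 12 and M = 2, giving a 14-dimensional input for the
# region classifier 117.
feature_vector = np.concatenate([gabor_features(gray),
                                 edge_direction_features(gray)])
```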
• For learning the region classifier 117, training data containing images with different kinds of regions is collected. Each region in the images is then outlined and manually annotated with the identifier of the converter or conversion mode that is expected to perform best based on the type of the region (e.g., corresponding to a fuzzy object such as a tree or a solid object such as a building). A region may contain several objects, and all of the objects within a region use the same converter; therefore, the content within a region should have homogeneous properties so that a correct converter can be selected. The learning process takes the annotated training data and builds the best region classifier so as to minimize the difference between the output of the classifier and the annotated identifiers for the images in the training set. The region classifier 117 is controlled by a set of parameters: for the same input, changing the parameters of the region classifier 117 gives a different classification output, i.e., a different converter identifier. The learning process automatically and iteratively changes the parameters of the classifier until the classifier outputs the best classification results for the training data. These parameters are then taken as the optimal parameters for future use. Mathematically, if Mean Square Error is used, the cost function to be minimized can be written as follows:
• $\mathrm{Cost}(\phi) = \sum_i \left(I_i - f_\phi(R_i)\right)^2$   (2)
• where R_i is region i in the training images, I_i is the identifier of the best converter assigned to that region during the annotation process, and f_φ(·) is the classifier whose parameters are represented by φ. The learning process minimizes the above overall cost with respect to the parameters φ.
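• A direct transcription of Equation (2) might look like the following sketch, in which regions, annotated_ids, and classifier_fn(phi, region) are illustrative stand-ins for the training regions R_i, the annotated identifiers I_i, and the parameterized classifier f_φ:

```python
import numpy as np

def cost(phi, regions, annotated_ids, classifier_fn):
    """Summed squared error between annotated and predicted identifiers."""
    predictions = np.array([classifier_fn(phi, region) for region in regions])
    return np.sum((np.array(annotated_ids) - predictions) ** 2)
```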
• Different types of classifiers can be chosen for region classification. A popular classifier in the pattern recognition field is the Support Vector Machine (SVM). SVM is a non-linear optimization scheme that minimizes the classification error on the training set while also being able to achieve a small prediction error on the testing set.
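• A minimal learning sketch follows, assuming scikit-learn's SVC as the SVM; the disclosure names SVM as one suitable classifier but prescribes no library, and the training data below is synthetic for illustration only.

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic stand-in training data: one 14-dimensional feature vector per
# annotated region, labeled 1 (solid object converter 118-1) or
# 2 (particle system converter 118-2).
X_train = np.random.rand(20, 14)
y_train = np.array([1, 2] * 10)

region_classifier = SVC(kernel="rbf")    # non-linear decision boundary
region_classifier.fit(X_train, y_train)  # fits the parameters phi

# At conversion time, the learned classifier maps a region's feature
# vector to a converter identifier.
converter_id = int(region_classifier.predict(X_train[:1])[0])
```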
  • The identifier of the converter is then used to select the appropriate converter 118-1 . . . 118-n in the 3D conversion module 118. The selected converter then converts the detected region into a 3D model (step 210). Such converters are known in the art.
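• The selection in step 210 amounts to a dispatch on the classifier's output; in the sketch below, convert_with_object_matcher and convert_with_particle_system are hypothetical stand-ins for the converters of the '834 and '586 applications, respectively.

```python
def convert_with_object_matcher(region_pixels):
    """Stand-in for solid object conversion (converter 118-1)."""
    raise NotImplementedError

def convert_with_particle_system(region_pixels):
    """Stand-in for fuzzy object conversion (converter 118-2)."""
    raise NotImplementedError

def convert_region(converter_id, region_pixels):
    """Dispatch a detected region to the converter named by the
    region classifier's output identifier."""
    converters = {
        1: convert_with_object_matcher,   # solid objects
        2: convert_with_particle_system,  # fuzzy objects
    }
    return converters[converter_id](region_pixels)
```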
• As previously discussed, an exemplary converter or conversion mode for solid objects is disclosed in the commonly owned '834 application. That application discloses a system and method for model fitting and registration of objects for 2D-to-3D conversion of images to create stereoscopic images. The system includes a database that stores a variety of 3D models of real-world objects. For a first 2D input image (e.g., the left-eye image or reference image), regions to be converted to 3D are identified or outlined by a system operator or an automatic detection algorithm. For each region, the system selects a stored 3D model from the database and registers the selected 3D model so that the projection of the 3D model optimally matches the image content within the identified region. The matching process can be implemented using geometric or photometric approaches. After a 3D position and pose of the 3D object have been computed for the first 2D image via the registration process, a second image (e.g., the right-eye image or complementary image) is created by projecting the 3D scene, which includes the registered 3D objects with deformed texture, onto another imaging plane with a different camera view angle.
• Also as previously discussed, an exemplary converter or conversion mode for fuzzy objects is disclosed in the commonly owned '586 application. That application discloses a system and method for recovering three-dimensional (3D) particle systems from two-dimensional (2D) images. The geometry reconstruction system and method recovers 3D particle systems representing the geometry of fuzzy objects from 2D images. The system and method identifies fuzzy objects in 2D images, i.e., objects that can be generated by a particle system. The identification of the fuzzy objects is done either manually, by outlining regions containing the fuzzy objects with image editing tools, or by automatic detection algorithms. These fuzzy objects are then further analyzed to develop criteria for matching them to a library of particle systems. The best match is determined by analyzing light and surface properties of the image segment both in the frame and temporally, i.e., in a sequential series of images. The system and method simulates and renders a particle system selected from the library, compares the rendering result with the fuzzy object in the image, and then determines whether the particle system is a good match according to certain matching criteria.
• Once all of the objects or detected regions identified in the scene have been converted into 3D space, the complementary image (e.g., the right-eye image) is created, at step 212, by rendering the 3D scene, including the converted 3D objects and a background plate, onto an imaging plane different from the imaging plane of the input 2D image; this plane is determined by a virtual right camera. The rendering may be realized by a rasterization process, as in the standard graphics card pipeline, or by more advanced techniques, such as the ray tracing used in professional post-production workflows. The position of the new imaging plane is determined by the position and view angle of the virtual right camera. The position and view angle of the virtual right camera (e.g., the camera simulated in the computer or post-processing device) should be set to yield an imaging plane parallel to the imaging plane of the left camera that produced the input image. In one embodiment, this can be achieved by tweaking the position and view angle of the virtual camera and getting feedback by viewing the resulting 3D playback on a display device. The position and view angle of the right camera are adjusted so that the created stereoscopic image can be viewed by viewers in the most comfortable way.
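• For a parallel-plane camera pair, the projection of step 212 reduces to a horizontally shifted pinhole projection; the sketch below, with an assumed focal length, baseline, and principal point, illustrates projecting 3D scene points onto the virtual right camera's image plane.

```python
import numpy as np

def project_right_view(points_xyz, focal_length, baseline, cx, cy):
    """Pinhole projection of Nx3 scene points into the virtual right
    camera, which is offset horizontally by `baseline` from the left
    camera and has a parallel imaging plane."""
    x, y, z = points_xyz[:, 0], points_xyz[:, 1], points_xyz[:, 2]
    u = focal_length * (x - baseline) / z + cx  # horizontal shift only
    v = focal_length * y / z + cy
    return np.stack([u, v], axis=1)
```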
• The projected scene is then stored as a complementary image, e.g., the right-eye image, to the input image, e.g., the left-eye image (step 214). The complementary image will be associated with the input image in any conventional manner so that the two may be retrieved together at a later point in time. The complementary image may be saved with the input, or reference, image in a digital file 130, creating a stereoscopic film. The digital file 130 may be stored in storage device 124 for later retrieval, e.g., to print a stereoscopic version of the original film.
• Although embodiments which incorporate the teachings of the present disclosure have been shown and described in detail herein, those skilled in the art can readily devise many other varied embodiments that still incorporate these teachings. Having described preferred embodiments for a system and method for region classification of 2D images for 2D-to-3D conversion (which are intended to be illustrative and not limiting), it is noted that modifications and variations can be made by persons skilled in the art in light of the above teachings. It is therefore to be understood that changes may be made in the particular embodiments of the disclosure disclosed which are within the scope and spirit of the disclosure as outlined by the appended claims. Having thus described the disclosure with the details and particularity required by the patent laws, what is claimed and desired to be protected by Letters Patent is set forth in the appended claims.

Claims (20)

1. A three-dimensional conversion method for creating stereoscopic images comprising:
acquiring a two-dimensional image;
identifying a region in the two-dimensional image;
classifying the identified region;
selecting a conversion mode based on the classification of the identified region;
converting the region into a three-dimensional model based on the selected conversion mode; and
creating a complementary image by projecting the three-dimensional model onto an image plane different than an image plane of the acquired two-dimensional image.
2. The method as in claim 1, further comprising:
extracting features from the region;
classifying the extracted features; and
selecting the conversion mode based on the classification of the extracted features.
3. The method as in claim 2, wherein the extracting step further comprises determining a feature vector from the extracted features.
4. The method as in claim 3, wherein the feature vector is employed in the classifying step to classify the identified region.
5. The method as in claim 2, wherein the extracted features are texture and edge direction.
6. The method as in claim 5, further comprising:
determining a feature vector from the texture features and the edge direction features; and
classifying the feature vector to select the conversion mode.
7. The method as in claim 1, wherein the conversion mode is a fuzzy object conversion mode or a solid object conversion mode.
8. The method as in claim 1, wherein the classifying step further comprises:
acquiring a plurality of two-dimensional images;
selecting a region in each of the plurality of two-dimensional images;
annotating the selected region with an optimal conversion mode based on a type of the selected region; and
optimizing the classifying step based on the annotated two-dimensional images.
9. The method as in claim 8, wherein the type of selected region corresponds to a fuzzy object.
10. The method as in claim 8, wherein the type of selected region corresponds to a solid object.
11. A system for three-dimensional conversion of objects from two-dimensional images, the system comprising:
a post-processing device configured for creating a complementary image from a two-dimensional image; the post-processing device including:
a region detector configured for detecting a region in at least one two-dimensional image;
a region classifier configured for classifying a detected region to determine an identifier of at least one converter;
the at least one converter configured for converting a detected region into a three-dimensional model; and
a reconstruction module configured for creating a complementary image by projecting the selected three-dimensional model onto an image plane different than an image plane of the one two-dimensional image.
12. The system as in claim 11, further comprising a feature extractor configured to extract features from the detected region.
13. The system as in claim 12, wherein the feature extractor is further configured to determine a feature vector for inputting into the region classifier.
14. The system as in claim 12, wherein the extracted features are texture and edge direction.
15. The system as in claim 11, wherein the region detector is a segmentation function.
16. The system as in claim 11, wherein the at least one converter is a fuzzy object converter or a solid object converter.
17. The system as in claim 11, further comprising a classifier learner configured to acquire a plurality of two-dimensional images, select at least one region in each of the plurality of two-dimensional images and annotate the selected at least one region with the identifier of an optimal converter based on a type of the selected at least one region, wherein the region classifier is optimized based on the annotated two-dimensional images.
18. The system as in claim 17, wherein the type of selected at least one region corresponds to a fuzzy object.
19. The system as in claim 17, wherein the type of selected at least one region corresponds to a solid object.
20. A program storage device readable by a machine, tangibly embodying a program of instructions executable by the machine to perform method steps for creating stereoscopic images from a two-dimensional image, the method comprising:
acquiring a two-dimensional image;
identifying a region of the two-dimensional image;
classifying the identified region;
selecting a conversion mode based on the classification of the identified region;
converting the region into a three-dimensional model based on the selected conversion mode; and
creating a complementary image by projecting the three-dimensional model onto an image plane different than an image plane of the two-dimensional image.
US12/531,906 2007-03-23 2007-03-23 System and method for region classification of 2d images for 2d-to-3d conversion Abandoned US20110043540A1 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2007/007234 WO2008118113A1 (en) 2007-03-23 2007-03-23 System and method for region classification of 2d images for 2d-to-3d conversion

Publications (1)

Publication Number Publication Date
US20110043540A1 true US20110043540A1 (en) 2011-02-24

Family

ID=38686187

Family Applications (1)

Application Number Title Priority Date Filing Date
US12/531,906 Abandoned US20110043540A1 (en) 2007-03-23 2007-03-23 System and method for region classification of 2d images for 2d-to-3d conversion

Country Status (7)

Country Link
US (1) US20110043540A1 (en)
EP (1) EP2130178A1 (en)
JP (1) JP4938093B2 (en)
CN (1) CN101657839B (en)
BR (1) BRPI0721462A2 (en)
CA (1) CA2681342A1 (en)
WO (1) WO2008118113A1 (en)

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2008260156B2 (en) 2007-05-29 2013-10-31 Trustees Of Tufts College Method for silk fibroin gelation using sonication
JP5352738B2 (en) * 2009-07-01 2013-11-27 本田技研工業株式会社 Object recognition using 3D model
US8520935B2 (en) 2010-02-04 2013-08-27 Sony Corporation 2D to 3D image conversion based on image content
CN102469318A (en) * 2010-11-04 2012-05-23 深圳Tcl新技术有限公司 Method for converting two-dimensional image into three-dimensional image
JP5907368B2 (en) * 2011-07-12 2016-04-26 ソニー株式会社 Image processing apparatus and method, and program
CN102523466A (en) * 2011-12-09 2012-06-27 彩虹集团公司 Method for converting 2D (two-dimensional) video signals into 3D (three-dimensional) video signals
US9208606B2 (en) * 2012-08-22 2015-12-08 Nvidia Corporation System, method, and computer program product for extruding a model through a two-dimensional scene
CN103198522B (en) * 2013-04-23 2015-08-12 清华大学 Three-dimensional scene models generation method
CN103533332B (en) * 2013-10-22 2016-01-20 清华大学深圳研究生院 A kind of 2D video turns the image processing method of 3D video
CN103716615B (en) * 2014-01-09 2015-06-17 西安电子科技大学 2D video three-dimensional method based on sample learning and depth image transmission
CN103955886A (en) * 2014-05-22 2014-07-30 哈尔滨工业大学 2D-3D image conversion method based on graph theory and vanishing point detection
JP6663926B2 (en) * 2015-05-13 2020-03-13 グーグル エルエルシー DeepStereo: learning to predict new views from real world images
CN105006012B (en) * 2015-07-14 2018-09-21 山东易创电子有限公司 A kind of the body rendering intent and system of human body layer data
CN106249857B (en) * 2015-12-31 2018-06-29 深圳超多维光电子有限公司 A kind of display converting method, device and terminal device
CN106227327B (en) * 2015-12-31 2018-03-30 深圳超多维光电子有限公司 A kind of display converting method, device and terminal device
CN106231281B (en) * 2015-12-31 2017-11-17 深圳超多维光电子有限公司 A kind of display converting method and device
CN106971129A (en) * 2016-01-13 2017-07-21 深圳超多维光电子有限公司 The application process and device of a kind of 3D rendering
KR102421856B1 (en) * 2017-12-20 2022-07-18 삼성전자주식회사 Method and apparatus for processing image interaction
CN108506170A (en) * 2018-03-08 2018-09-07 上海扩博智能技术有限公司 Fan blade detection method, system, equipment and storage medium
US10755112B2 (en) * 2018-03-13 2020-08-25 Toyota Research Institute, Inc. Systems and methods for reducing data storage in machine learning
CN108810547A (en) * 2018-07-03 2018-11-13 电子科技大学 A kind of efficient VR video-frequency compression methods based on neural network and PCA-KNN

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3524147B2 (en) * 1994-04-28 2004-05-10 キヤノン株式会社 3D image display device
ID27878A (en) * 1997-12-05 2001-05-03 Dynamic Digital Depth Res Pty IMAGE IMPROVED IMAGE CONVERSION AND ENCODING ENGINEERING
CN1466737A (en) * 2000-08-09 2004-01-07 动态数字视距研究有限公司 Image conversion and encoding techniques
CN100416612C (en) * 2006-09-14 2008-09-03 浙江大学 Video flow based three-dimensional dynamic human face expression model construction method
US20090322860A1 (en) * 2006-11-17 2009-12-31 Dong-Qing Zhang System and method for model fitting and registration of objects for 2d-to-3d conversion

Patent Citations (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5361386A (en) * 1987-12-04 1994-11-01 Evans & Sutherland Computer Corp. System for polygon interpolation using instantaneous values in a variable
US5594652A (en) * 1991-01-31 1997-01-14 Texas Instruments Incorporated Method and apparatus for the computer-controlled manufacture of three-dimensional objects from computer data
US5812691A (en) * 1995-02-24 1998-09-22 Udupa; Jayaram K. Extraction of fuzzy object information in multidimensional images for quantifying MS lesions of the brain
US20050146521A1 (en) * 1998-05-27 2005-07-07 Kaye Michael C. Method for creating and presenting an accurate reproduction of three-dimensional images converted from two-dimensional images
US20050104878A1 (en) * 1998-05-27 2005-05-19 Kaye Michael C. Method of hidden surface reconstruction for creating accurate three-dimensional images converted from two-dimensional images
US20010052899A1 (en) * 1998-11-19 2001-12-20 Todd Simpson System and method for creating 3d models from 2d sequential image data
US6545673B1 (en) * 1999-03-08 2003-04-08 Fujitsu Limited Three-dimensional CG model generator and recording medium storing processing program thereof
US6603475B1 (en) * 1999-11-17 2003-08-05 Korea Advanced Institute Of Science And Technology Method for generating stereographic image using Z-buffer
US6583787B1 (en) * 2000-02-28 2003-06-24 Mitsubishi Electric Research Laboratories, Inc. Rendering pipeline for surface elements
US7212656B2 (en) * 2000-03-09 2007-05-01 Microsoft Corporation Rapid computer modeling of faces for animation
US20020048395A1 (en) * 2000-08-09 2002-04-25 Harman Philip Victor Image conversion and encoding techniques
US20030035098A1 (en) * 2001-08-10 2003-02-20 Nec Corporation Pose estimation method and apparatus
US20030085890A1 (en) * 2001-11-05 2003-05-08 Baumberg Adam Michael Image processing apparatus
US20050244050A1 (en) * 2002-04-25 2005-11-03 Toshio Nomura Image data creation device, image data reproduction device, and image data recording medium
US20030234782A1 (en) * 2002-06-21 2003-12-25 Igor Terentyev System and method for adaptively labeling multi-dimensional images
US20060061583A1 (en) * 2004-09-23 2006-03-23 Conversion Works, Inc. System and method for processing video images
US20060140473A1 (en) * 2004-12-23 2006-06-29 Brooksby Glen W System and method for object measurement
US20070024614A1 (en) * 2005-07-26 2007-02-01 Tam Wa J Generating a depth map from a two-dimensional source image for stereoscopic and multiview imaging
US20080303894A1 (en) * 2005-12-02 2008-12-11 Fabian Edgar Ernst Stereoscopic Image Display Method and Apparatus, Method for Generating 3D Image Data From a 2D Image Data Input and an Apparatus for Generating 3D Image Data From a 2D Image Data Input
US20070279415A1 (en) * 2006-06-01 2007-12-06 Steve Sullivan 2D to 3D image conversion
US20090116732A1 (en) * 2006-06-23 2009-05-07 Samuel Zhou Methods and systems for converting 2d motion pictures for stereoscopic 3d exhibition
US20100315410A1 (en) * 2006-10-27 2010-12-16 Dong-Qing Zhang System and method for recovering three-dimensional particle systems from two-dimensional images
US20100026784A1 (en) * 2006-12-19 2010-02-04 Koninklijke Philips Electronics N.V. Method and system to convert 2d video into 3d video
US20080150945A1 (en) * 2006-12-22 2008-06-26 Haohong Wang Complexity-adaptive 2d-to-3d video sequence conversion
US20070299802A1 (en) * 2007-03-31 2007-12-27 Mitchell Kwok Human Level Artificial Intelligence Software Application for Machine & Computer Based Program Function
US20090279767A1 (en) * 2008-05-12 2009-11-12 Siemens Medical Solutions Usa, Inc. System for three-dimensional medical instrument navigation
US20110188780A1 (en) * 2010-02-04 2011-08-04 Sony Corporation 2D to 3D Image Conversion Based on Image Content

Cited By (47)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090220146A1 (en) * 2008-03-01 2009-09-03 Armin Bauer Method and apparatus for characterizing the formation of paper
US11470303B1 (en) 2010-06-24 2022-10-11 Steven M. Hoffberg Two dimensional to three dimensional moving image converter
US10015478B1 (en) 2010-06-24 2018-07-03 Steven M. Hoffberg Two dimensional to three dimensional moving image converter
US20120105581A1 (en) * 2010-10-29 2012-05-03 Sony Corporation 2d to 3d image and video conversion using gps and dsm
US20120287153A1 (en) * 2011-05-13 2012-11-15 Sony Corporation Image processing apparatus and method
US10600237B2 (en) 2011-10-05 2020-03-24 Bitanimate, Inc. Resolution enhanced 3D rendering systems and methods
US10102667B2 (en) 2011-10-05 2018-10-16 Bitanimate, Inc. Resolution enhanced 3D rendering systems and methods
US9495791B2 (en) 2011-10-05 2016-11-15 Bitanimate, Inc. Resolution enhanced 3D rendering systems and methods
US9471988B2 (en) 2011-11-02 2016-10-18 Google Inc. Depth-map generation for an input image using an example approximate depth-map associated with an example similar image
EP2774124A1 (en) * 2011-11-02 2014-09-10 Google, Inc. Depth-map generation for an input image using an example approximate depth-map associated with an example similar image
US10194137B1 (en) 2011-11-02 2019-01-29 Google Llc Depth-map generation for an input image using an example approximate depth-map associated with an example similar image
EP2774124A4 (en) * 2011-11-02 2015-10-21 Google Inc Depth-map generation for an input image using an example approximate depth-map associated with an example similar image
US9661307B1 (en) 2011-11-15 2017-05-23 Google Inc. Depth map generation using motion cues for conversion of monoscopic visual content to stereoscopic 3D
US11574438B2 (en) 2011-11-30 2023-02-07 International Business Machines Corporation Generating three-dimensional virtual scene
US11069130B2 (en) 2011-11-30 2021-07-20 International Business Machines Corporation Generating three-dimensional virtual scene
US9236024B2 (en) 2011-12-06 2016-01-12 Glasses.Com Inc. Systems and methods for obtaining a pupillary distance measurement using a mobile computing device
US20130177235A1 (en) * 2012-01-05 2013-07-11 Philip Meier Evaluation of Three-Dimensional Scenes Using Two-Dimensional Representations
US9111375B2 (en) * 2012-01-05 2015-08-18 Philip Meier Evaluation of three-dimensional scenes using two-dimensional representations
EP2618586A1 (en) 2012-01-18 2013-07-24 Nxp B.V. 2D to 3D image conversion
US8908994B2 (en) 2012-01-18 2014-12-09 Nxp B.V. 2D to 3d image conversion
US9769460B1 (en) 2012-02-10 2017-09-19 Google Inc. Conversion of monoscopic visual content to stereoscopic 3D
US10147233B2 (en) 2012-05-23 2018-12-04 Glasses.Com Inc. Systems and methods for generating a 3-D model of a user for a virtual try-on product
US9235929B2 (en) 2012-05-23 2016-01-12 Glasses.Com Inc. Systems and methods for efficiently processing virtual 3-D data
US9311746B2 (en) 2012-05-23 2016-04-12 Glasses.Com Inc. Systems and methods for generating a 3-D model of a virtual try-on product
US9286715B2 (en) 2012-05-23 2016-03-15 Glasses.Com Inc. Systems and methods for adjusting a virtual try-on
US9208608B2 (en) 2012-05-23 2015-12-08 Glasses.Com, Inc. Systems and methods for feature tracking
US9378584B2 (en) 2012-05-23 2016-06-28 Glasses.Com Inc. Systems and methods for rendering virtual try-on products
US9483853B2 (en) 2012-05-23 2016-11-01 Glasses.Com Inc. Systems and methods to display rendered images
US10164776B1 (en) 2013-03-14 2018-12-25 goTenna Inc. System and method for private and point-to-point communication between computing devices
US9674498B1 (en) 2013-03-15 2017-06-06 Google Inc. Detecting suitability for converting monoscopic visual content to stereoscopic 3D
US20140307946A1 (en) * 2013-04-12 2014-10-16 Hitachi High-Technologies Corporation Observation device and observation method
US9305343B2 (en) * 2013-04-12 2016-04-05 Hitachi High-Technologies Corporation Observation device and observation method
US9846963B2 (en) 2014-10-03 2017-12-19 Samsung Electronics Co., Ltd. 3-dimensional model generation using edges
WO2016053067A1 (en) * 2014-10-03 2016-04-07 Samsung Electronics Co., Ltd. 3-dimensional model generation using edges
CN104867129A (en) * 2015-04-16 2015-08-26 东南大学 Light field image segmentation method
US11036965B2 (en) 2017-02-20 2021-06-15 Omron Corporation Shape estimating apparatus
CN107018400A (en) * 2017-04-07 2017-08-04 华中科技大学 It is a kind of by 2D Video Quality Metrics into 3D videos method
US10735707B2 (en) * 2017-08-15 2020-08-04 International Business Machines Corporation Generating three-dimensional imagery
US10785464B2 (en) 2017-08-15 2020-09-22 International Business Machines Corporation Generating three-dimensional imagery
US20190058857A1 (en) * 2017-08-15 2019-02-21 International Business Machines Corporation Generating three-dimensional imagery
US10957099B2 (en) 2018-11-16 2021-03-23 Honda Motor Co., Ltd. System and method for display of visual representations of vehicle associated information based on three dimensional model
US11393164B2 (en) * 2019-05-06 2022-07-19 Apple Inc. Device, method, and graphical user interface for generating CGR objects
US11138410B1 (en) * 2020-08-25 2021-10-05 Covar Applied Technologies, Inc. 3-D object detection and classification from imagery
US20220067342A1 (en) * 2020-08-25 2022-03-03 Covar Applied Technologies, Inc. 3-d object detection and classification from imagery
US11727575B2 (en) * 2020-08-25 2023-08-15 Covar Llc 3-D object detection and classification from imagery
CN112561793A (en) * 2021-01-18 2021-03-26 深圳市图南文化设计有限公司 Planar design space conversion method and system
CN113450458A (en) * 2021-06-28 2021-09-28 杭州群核信息技术有限公司 Data conversion system, method and device of household parametric model and storage medium

Also Published As

Publication number Publication date
WO2008118113A1 (en) 2008-10-02
CN101657839A (en) 2010-02-24
CN101657839B (en) 2013-02-06
BRPI0721462A2 (en) 2013-01-08
CA2681342A1 (en) 2008-10-02
JP4938093B2 (en) 2012-05-23
EP2130178A1 (en) 2009-12-09
JP2010522469A (en) 2010-07-01

Similar Documents

Publication Publication Date Title
US20110043540A1 (en) System and method for region classification of 2d images for 2d-to-3d conversion
CA2668941C (en) System and method for model fitting and registration of objects for 2d-to-3d conversion
CA2687213C (en) System and method for stereo matching of images
JP4879326B2 (en) System and method for synthesizing a three-dimensional image
CA2704479C (en) System and method for depth map extraction using region-based filtering
US8213708B2 (en) Adjusting perspective for objects in stereoscopic images
CN102474636A (en) Adjusting perspective and disparity in stereoscopic image pairs
US20150030233A1 (en) System and Method for Determining a Depth Map Sequence for a Two-Dimensional Video Sequence
WO2008152607A1 (en) Method, apparatus, system and computer program product for depth-related information propagation
Lee et al. Estimating scene-oriented pseudo depth with pictorial depth cues
Wang et al. Example-based video stereolization with foreground segmentation and depth propagation
EP2462539A1 (en) Systems and methods for three-dimensional video generation
Xu et al. Comprehensive depth estimation algorithm for efficient stereoscopic content creation in three-dimensional video systems
Cai et al. Image-guided depth propagation for 2-D-to-3-D video conversion using superpixel matching and adaptive autoregressive model
Liu Improving forward mapping and disocclusion inpainting algorithms for depth-image-based rendering and geomatics applications
CN101536040B (en) In order to 2D to 3D conversion carries out the system and method for models fitting and registration to object
Nazzar Automated detection of defects in 3D movies
Azad et al. Evaluation study of the reconstruction analysis of visualization for three ways construction using Epipolar geometry

Legal Events

Date Code Title Description
AS Assignment

Owner name: REALD DDMG ACQUISITION, LLC, CALIFORNIA

Free format text: RELEASE FROM PATENT SECURITY AGREEMENT AT REEL/FRAME NO. 29855/0189;ASSIGNOR:CITY NATIONAL BANK;REEL/FRAME:038216/0933

Effective date: 20160322

STCB Information on status: application discontinuation

Free format text: ABANDONED -- AFTER EXAMINER'S ANSWER OR BOARD OF APPEALS DECISION