US20090317061A1 - Image generating method and apparatus and image processing method and apparatus
- Publication number: US20090317061A1 (application US 12/489,758)
- Authority: US (United States)
- Prior art keywords: image, shot, video data, information, frames
- Legal status: Abandoned (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T9/00—Image coding
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- G06T15/005—General purpose rendering architectures
- G06T5/40—Image enhancement or restoration by the use of histogram techniques
- G06T5/92
- G11B27/034—Electronic editing of digitised analogue information signals, e.g. audio or video signals, on discs
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/322—Indexing; Addressing; Timing or synchronising; Measuring tape travel by using digitally coded information signals recorded on separate auxiliary tracks
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/111—Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
- H04N13/122—Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
- H04N13/133—Equalising the characteristics of different image components, e.g. their average brightness or colour balance
- H04N13/139—Format conversion, e.g. of frame-rate or size
- H04N13/156—Mixing image signals
- H04N13/158—Switching image signals
- H04N13/161—Encoding, multiplexing or demultiplexing different image signal components
- H04N13/172—Processing image signals comprising non-image signal components, e.g. headers or format information
- H04N13/178—Metadata, e.g. disparity information
- H04N13/183—On-screen display [OSD] information, e.g. subtitles or menus
- H04N13/189—Recording image signals; Reproducing recorded image signals
- H04N13/194—Transmission of image signals
- H04N13/261—Image signal generators with monoscopic-to-stereoscopic image conversion
- H04N13/286—Image signal generators having separate monoscopic and stereoscopic modes
- H04N13/332—Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
- H04N13/339—Displays for viewing with the aid of special glasses or head-mounted displays [HMD] using spatial multiplexing
- H04N13/341—Displays for viewing with the aid of special glasses or head-mounted displays [HMD] using temporal multiplexing
- H04N13/359—Switching between monoscopic and stereoscopic modes
- H04N13/361—Reproducing mixed stereoscopic images; Reproducing mixed monoscopic and stereoscopic images, e.g. a stereoscopic image overlay window on a monoscopic image background
- H04N19/597—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
- H04N5/145—Movement estimation
- G06F3/14—Digital output to display device; Cooperation and interconnection of the display device with other functional units
- G06T2207/10016—Video; Image sequence
- G06T2207/20208—High dynamic range [HDR] image processing
- G09G2320/0626—Adjustment of display parameters for control of overall brightness
- H04N2213/005—Aspects relating to the "3D+depth" image format
Definitions
- aspects of the present invention generally relate to an image generating method and apparatus and an image processing method and apparatus, and more particularly, to an image generating method and apparatus and an image processing method and apparatus in which video data is output as a two-dimensional (2D) image or a three-dimensional (3D) image by using metadata associated with the video data.
- 3D image technology expresses a more realistic image by adding depth information to a two-dimensional (2D) image.
- the 3D image technology can be classified into technology to generate video data directly as a 3D image and technology to convert video data generated as a 2D image into a 3D image; both approaches are being studied.
- aspects of the present invention provide an image processing method and apparatus to output video data as a two-dimensional image or a three-dimensional image by using metadata associated with the video data.
- an image processing method to output video data being a two-dimensional (2D) image as the 2D image or a three-dimensional (3D) image, the image processing method including: extracting information about the video data from metadata associated with the video data; and outputting the video data as the 2D image or the 3D image by using the extracted information about the video data, wherein the information about the video data includes information to classify frames of the video data into predetermined units.
- the information to classify the frames of the video data as the predetermined units may be shot information to classify a group of frames in which a background composition of a current frame is predictable by using a previous frame preceding the current frame as a shot.
- the shot information may include output moment information of a frame being output first and output moment information of a frame being output last from among the group of frames classified as the shot.
- the metadata may include shot type information indicating whether the frames classified as the shot are to be output as the 2D image or the 3D image.
- the outputting of the video data may include outputting the frames classified as the shot as the 2D image or the 3D image by using the shot type information.
- the outputting of the video data may include determining, by using the metadata, whether a background composition of a current frame is not predictable by using a previous frame preceding the current frame and thus the current frame is classified as a new shot, outputting the current frame as the 2D image when the current frame is classified as the new shot, and converting the remaining frames of the frames classified as the new shot into the 3D image and outputting the converted 3D image.
- the outputting of the video data may include determining, by using the metadata, whether a background composition of a current frame is not predictable by using a previous frame preceding the current frame and thus the current frame is classified as a new shot, extracting background depth information to be applied to the current frame classified as the new shot from the metadata when the current frame is classified as the new shot, and generating a depth map for the current frame by using the background depth information.
- the generating of the depth map for the current frame may include generating the depth map for a background of the current frame by using coordinate point values of the background of the current frame, depth values corresponding to the coordinate point values, and a panel position value, in which the coordinate point values, the depth value, and the panel position value are included in the background depth information.
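As a rough sketch of this step, the background depth information can be read as a sparse set of depth samples that are interpolated into a full per-pixel depth map relative to the panel position. The row-wise `(y, depth)` representation, the linear interpolation, and the sign convention (positive values behind the screen) are illustrative assumptions; the patent only states which values the background depth information contains.

```python
def background_depth_map(width, height, depth_points, panel_position):
    """Build a per-pixel background depth map from sparse depth samples.

    depth_points: (y, depth) pairs giving the background depth at selected
    image rows -- a simplified stand-in for the patent's coordinate point
    values and corresponding depth values.  The returned values are depth
    minus panel_position, so that (under the assumed convention) positive
    entries correspond to points behind the screen plane and negative
    entries to points in front of it.
    """
    pts = sorted(depth_points)
    depth_map = []
    for y in range(height):
        if y <= pts[0][0]:
            d = pts[0][1]                      # clamp above the first sample
        elif y >= pts[-1][0]:
            d = pts[-1][1]                     # clamp below the last sample
        else:
            for (y0, d0), (y1, d1) in zip(pts, pts[1:]):
                if y0 <= y <= y1:
                    t = (y - y0) / (y1 - y0)   # linear interpolation between rows
                    d = d0 + t * (d1 - d0)
                    break
        depth_map.append([d - panel_position] * width)
    return depth_map
```

A background whose depth grows from the top row to the bottom row (e.g. a floor receding toward the camera) needs only two sample points under this scheme.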
- the image processing method may further include reading the metadata from a disc recorded with the video data or downloading the metadata from a server through a communication network.
- the metadata may include identification information to identify the video data
- the identification information may include a disc identifier (ID) to identify a disc recorded with the video data and a title ID to indicate a title including the video data among a plurality of titles recorded in the disc identified by the disc ID.
- an image generating method including: receiving video data being a two-dimensional (2D) image; and generating metadata associated with the video data, the metadata including information to classify frames of the video data as predetermined units, wherein the information to classify the frames of the video data as the predetermined units is shot information to classify a group of frames in which a background composition of a current frame is predictable by using a previous frame preceding the current frame as a shot.
- the shot information may include output moment information of a frame being output first and output moment information of a frame being output last from among the frames classified as the shot, and/or may include shot type information indicating whether the frames classified as the shot are to be output as the 2D image or a three-dimensional (3D) image.
- the metadata may include background depth information for frames classified as a predetermined shot and the background depth information may include coordinate point values of a background of the frame classified as the predetermined shot, depth values corresponding to the coordinate point values, and a panel position value.
- an image processing apparatus to output video data being a two-dimensional (2D) image as the 2D image or a three-dimensional (3D) image, the image processing apparatus including: a metadata analyzing unit to determine whether the video data is to be output as the 2D image or the 3D image by using metadata associated with the video data; a 3D image converting unit to convert the video data into the 3D image when the video data is to be output as the 3D image; and an output unit to output the video data as the 2D image or the 3D image, wherein the metadata includes information to classify frames of the video data into predetermined units.
- the information to classify the frames of the video data into the predetermined units may be shot information to classify a group of frames in which a background composition of a current frame is predictable by using a previous frame preceding the current frame as a shot.
- the shot information may include output moment information of a frame being output first and output moment information of a frame being output last from among the frames classified as the shot.
- the metadata may include shot type information indicating whether the frames classified as the shot are to be output as the 2D image or the 3D image.
- the metadata may include background depth information for a frame classified as a predetermined shot, and the background depth information may include coordinate point values of a background of the frame classified as the predetermined shot, depth values corresponding to the coordinate point values, and a panel position value.
- an image generating apparatus including: a video data encoding unit to encode video data being a two-dimensional (2D) image; a metadata generating unit to generate metadata associated with the video data, the metadata including information to classify frames of the video data into predetermined units; and a metadata encoding unit to encode the metadata, in which the information to classify the frames of the video data into the predetermined units is shot information to classify a group of frames in which a background composition of a current frame is predictable by using a previous frame preceding the current frame as a shot.
- a computer-readable information storage medium including video data being a two-dimensional (2D) image and metadata associated with the video data, the metadata including information to classify frames of the video data into predetermined units, wherein the information to classify the frames of the video data into the predetermined units is shot information to classify a group of frames in which a background composition of a current frame is predictable by using a previous frame preceding the current frame as a shot.
- a computer-readable information storage medium having recorded thereon a program to execute an image processing method to output video data being a two-dimensional (2D) image as the 2D image or a three-dimensional (3D) image, the image processing method including: extracting information about the video data from metadata associated with the video data; and outputting the video data as the 2D image or the 3D image by using the extracted information about the video data, wherein the information about the video data includes information to classify frames of the video data into predetermined units.
- a system to output video data as a two-dimensional (2D) image or a three-dimensional (3D) image, the system including: an image generating apparatus including: a video data encoding unit to encode the video data being the 2D image, and a metadata generating unit to generate metadata associated with the video data, the metadata comprising information to classify frames of the video data as predetermined units and used to determine whether each of the classified frames is to be converted to the 3D image; and an image processing apparatus to receive the encoded video data and the generated metadata, and to output the video data as the 2D image or the 3D image, the image processing apparatus including: a metadata analyzing unit to determine whether the video data is to be output as the 2D image or the 3D image by using the information to classify the frames of the video data comprised in the received metadata associated with the video data, a 3D image converting unit to convert the video data into the 3D image when the metadata analyzing unit determines that the video data is to be output as the 3D image, and an output unit to output the video data as the 2D image or the 3D image.
- a computer-readable information storage medium including: metadata associated with video data comprising two-dimensional (2D) frames, the metadata comprising information used by an image processing apparatus to classify the frames of the video data as predetermined units and used by the image processing apparatus to determine whether each of the classified frames is to be converted by the image processing apparatus to a three-dimensional (3D) image, wherein the information to classify the frames of the video data as the predetermined units comprises shot information to classify, as a shot, a group of frames in which a background composition of a current frame is predictable by using a previous frame preceding the current frame in the group of frames.
- an image processing method to output video data having two-dimensional (2D) images as the 2D images or three-dimensional (3D) images, the image processing method including: determining, by an image processing apparatus, whether metadata associated with the video data exists on a disc comprising the video data; reading, by the image processing apparatus, the metadata from the disc if the metadata is determined to exist on the disc; retrieving, by the image processing apparatus, the metadata from a server if the metadata is determined to not exist on the disc; and outputting, by the image processing apparatus, the video data as either the 2D image or the 3D image according to the metadata.
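The disc-or-server lookup described above can be sketched as a small decision function. The dictionary layout and the `server_fetch` callable are hypothetical stand-ins for reading the disc and downloading over a communication network:

```python
def load_metadata(disc, server_fetch):
    """Return the metadata associated with the disc's video data.

    disc: a mapping standing in for the disc contents; it carries the
    identification info (disc_id, title_id) and may carry a 'metadata'
    entry recorded alongside the video data.  server_fetch: a callable
    standing in for a download over a communication network.  Both
    names are illustrative, not taken from the patent.
    """
    if 'metadata' in disc:
        # metadata is recorded on the disc together with the video data
        return disc['metadata']
    # otherwise retrieve it from a server, identifying the wanted
    # metadata by the disc ID and title ID
    return server_fetch(disc['disc_id'], disc['title_id'])
```

The disc ID and title ID play exactly the role the identification information has in the claims: they tell the server which video data the requested metadata belongs to.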
- FIG. 1 is a block diagram of an image generating apparatus according to an embodiment of the present invention;
- FIG. 2 illustrates metadata generated by the image generating apparatus illustrated in FIG. 1;
- FIGS. 3A through 3C are views to explain a depth map generated by using background depth information;
- FIG. 4 is a block diagram of an image processing apparatus according to an embodiment of the present invention;
- FIG. 5 is a block diagram of an image processing apparatus according to another embodiment of the present invention;
- FIG. 6 is a flowchart illustrating an image processing method according to an embodiment of the present invention; and
- FIG. 7 is a flowchart illustrating in detail an operation illustrated in FIG. 6 where video data is output as a two-dimensional (2D) image or a three-dimensional (3D) image.
- FIG. 1 is a block diagram of an image generating apparatus 100 according to an embodiment of the present invention.
- the image generating apparatus 100 includes a video data generating unit 110 , a video data encoding unit 120 , a metadata generating unit 130 , a metadata encoding unit 140 , and a multiplexing unit 150 .
- the video data generating unit 110 generates video data and outputs the generated video data to the video data encoding unit 120 .
- the video data encoding unit 120 encodes the input video data and outputs the encoded video data (OUT 1 ) to the multiplexing unit 150 , and/or to an image processing apparatus (not shown) through a communication network, though it is understood that the video data encoding unit 120 may output the encoded video data to the image processing apparatus through any wired and/or wireless connection (such as IEEE 1394, Universal Serial Bus, Bluetooth, infrared, etc.).
- the image generating apparatus 100 may be a computer, a workstation, a camera device, a mobile device, a stand-alone device, etc.
- each of the units 110 , 120 , 130 , 140 , 150 can be one or more processors or processing elements on one or more chips or integrated circuits.
- the metadata generating unit 130 analyzes the video data generated by the video data generating unit 110 to generate metadata including information about frames of the video data.
- the metadata includes information to convert the generated video data from a two-dimensional (2D) image into a three-dimensional (3D) image.
- the metadata also includes information to classify the frames of the video data as predetermined units.
- the metadata generated by the metadata generating unit 130 will be described in more detail with reference to FIG. 2 .
- the metadata generating unit 130 outputs the generated metadata to the metadata encoding unit 140 .
- the metadata encoding unit 140 encodes the input metadata and outputs the encoded metadata (OUT 3 ) to the multiplexing unit 150 and/or to the image processing apparatus.
- the multiplexing unit 150 multiplexes the encoded video data (OUT 1 ) and the encoded metadata (OUT 3 ) and transmits the multiplexing result (OUT 2 ) to the image processing apparatus through a wired and/or wireless communication network, or any wired and/or wireless connection, as described above.
- the metadata encoding unit 140 may transmit the encoded metadata (OUT 3 ), separately from the encoded video data (OUT 1 ), to the image processing apparatus, instead of, or in addition to, outputting it to the multiplexing unit 150 . In this way, the image generating apparatus 100 generates metadata associated with video data, the metadata including information to convert the video data from a 2D image into a 3D image.
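Taken together, the units of the image generating apparatus 100 form a simple pipeline: encode the video data (OUT 1), generate and encode the metadata (OUT 3), and multiplex the two (OUT 2). A minimal sketch, with the encoders passed in as placeholder callables since the patent does not fix any particular codec or multiplex format:

```python
def generate_stream(frames, encode_video, generate_metadata, encode_metadata):
    """Mirror the data flow through the image generating apparatus 100.

    The three callables are placeholders for the video data encoding
    unit, the metadata generating unit and the metadata encoding unit;
    the multiplexing unit is modelled as simply pairing the two encoded
    streams, since no concrete multiplex format is specified.
    """
    out1 = encode_video(frames)            # encoded video data (OUT 1)
    metadata = generate_metadata(frames)   # shot info, depth info, ...
    out3 = encode_metadata(metadata)       # encoded metadata (OUT 3)
    return {'video': out1, 'metadata': out3}  # multiplexing result (OUT 2)
```

As the description notes, OUT 3 may equally be transmitted separately from OUT 1 instead of being multiplexed; the pairing above is just the multiplexed case.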
- FIG. 2 illustrates metadata generated by the image generating apparatus 100 illustrated in FIG. 1 .
- the metadata includes information about video data.
- disc identification information to identify a disc in which the video data is recorded is included in the metadata, though it is understood that the metadata may omit the disc identification information in other embodiments.
- the disc identification information may include a disc identifier (ID) to identify the disc recorded with the video data and a title ID to identify a title including the video data among a plurality of titles recorded in the disc identified by the disc ID.
- the metadata includes information about the frames.
- the information about the frames may include information to classify the frames according to a predetermined criterion. Assuming that a group of similar frames forms a unit, all of the frames of the video data can be classified into a plurality of units.
- information to classify the frames of the video data as predetermined units is included in the metadata. Specifically, a group of frames having similar background compositions in which a background composition of a current frame can be predicted by using a previous frame preceding the current frame is classified as a shot.
- the metadata generating unit 130 classifies the frames of the video data as a predetermined shot and incorporates information about the shot (i.e., shot information) into the metadata. When the background composition of the current frame is different from that of the previous frame due to a significant change in the frame background composition, the current frame and the previous frame are classified as different shots.
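The patent does not specify how "predictability" of a background composition is measured; a common heuristic for detecting such a significant change is a histogram difference between consecutive frames, sketched below under that assumption (the 8-bin histogram and the threshold value are likewise illustrative):

```python
def classify_shots(frames, threshold=0.3):
    """Group frame indices into shots at large background changes.

    frames: list of frames, each a flat list of 0-255 pixel intensities.
    A new shot starts whenever the normalised histogram difference to the
    previous frame exceeds the threshold -- a stand-in heuristic for the
    patent's 'background composition is not predictable' condition.
    """
    def hist(frame, bins=8):
        h = [0] * bins
        for p in frame:
            h[min(p * bins // 256, bins - 1)] += 1
        return [c / len(frame) for c in h]

    shots, current = [], [0]
    prev_h = hist(frames[0])
    for i in range(1, len(frames)):
        h = hist(frames[i])
        diff = sum(abs(a - b) for a, b in zip(h, prev_h)) / 2
        if diff > threshold:       # composition changed: start a new shot
            shots.append(current)
            current = []
        current.append(i)
        prev_h = h
    shots.append(current)
    return shots
```

Three dark frames followed by two bright ones are split into two shots, matching the rule that a frame whose background differs significantly from its predecessor opens a new shot.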
- the shot information includes information about output moments of frames classified within the shot. For example, such information includes output moment information of a frame being output first (shot start moment information in FIG. 2 ) and output moment information of a frame being output last (shot end moment information in FIG. 2 ) among the frames classified as each shot, though aspects of the present invention are not limited thereto.
- the shot information includes the shot start moment information and information on a number of frames included in the shot.
- the metadata further includes shot type information about frames classified as a shot. The shot type information indicates for each shot whether frames classified as a shot are to be output as a 2D image or a 3D image.
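As a rough, non-authoritative sketch of the shot classification described above, frames might be grouped into shots by comparing a per-frame background feature. The feature representation (here, a normalized histogram compared by histogram intersection) and the threshold are assumptions for illustration, not the patented method itself:

```python
def classify_shots(frame_features, threshold=0.5):
    """Group consecutive frames into shots.

    frame_features: list of per-frame background feature vectors
    (hypothetical; e.g., normalized luminance histograms). Two
    consecutive frames belong to the same shot when their features
    are similar enough that the background composition of the current
    frame is predictable from the previous frame.
    Returns a list of (start_index, end_index) pairs, one per shot.
    """
    def similarity(a, b):
        # Histogram intersection: 1.0 for identical histograms.
        return sum(min(x, y) for x, y in zip(a, b))

    shots = []
    start = 0
    for i in range(1, len(frame_features)):
        if similarity(frame_features[i - 1], frame_features[i]) < threshold:
            shots.append((start, i - 1))  # significant background change: close the shot
            start = i                      # the current frame begins a new shot
    shots.append((start, len(frame_features) - 1))
    return shots
```

With shot boundaries expressed this way, the shot start/end output moments recorded in the metadata correspond to the first and last indices of each pair.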
- the metadata also includes background depth information, which will be described in detail with reference to FIGS. 3A through 3C .
- FIGS. 3A through 3C are views to explain a depth map generated by using the background depth information.
- FIG. 3A illustrates a 2D image
- FIG. 3B illustrates a depth map to be applied to the 2D image illustrated in FIG. 3A
- FIG. 3C illustrates a result of applying the depth map to the 2D image.
- to give a sense of depth to the 2D image, an image projected on the screen is formed in each of the user's two eyes.
- a distance between two points of the images formed in the eyes is called parallax, and the parallax can be classified into positive parallax, zero parallax, and negative parallax.
- the positive parallax refers to parallax corresponding to a case when the image appears to be formed inside the screen, and the positive parallax is less than or equal to a distance between the eyes. As the positive parallax increases, a greater cubic effect, by which the image appears to lie farther behind the screen, is given. When the image appears to be two-dimensionally formed on the screen plane, the parallax is 0 (i.e., zero parallax). In the case of the zero parallax, the user cannot feel a cubic effect because the image is formed on the screen plane.
- the negative parallax refers to parallax corresponding to a case when the image appears to lie in front of the screen. This parallax is generated when lines of sight to the user's eyes intersect. The negative parallax gives a cubic effect by which the image appears to protrude forward.
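The three parallax cases above can be summarized numerically. In this hypothetical sketch, parallax is the signed horizontal offset between the two image points formed in the eyes, in the same units as the inter-eye distance; the default eye distance is an assumed illustrative value:

```python
def classify_parallax(parallax, eye_distance=6.5):
    """Classify screen parallax as positive, zero, or negative.

    parallax: signed distance between corresponding image points
    (positive: image appears to lie behind the screen; zero: image
    lies on the screen plane; negative: lines of sight to the eyes
    intersect and the image appears to protrude forward).
    eye_distance: inter-pupil distance (hypothetical default, in cm);
    positive parallax is physically limited to at most this value.
    """
    if parallax > eye_distance:
        raise ValueError("positive parallax cannot exceed the eye distance")
    if parallax > 0:
        return "positive"   # image appears to lie behind the screen
    if parallax == 0:
        return "zero"       # image on the screen plane; no cubic effect
    return "negative"       # image appears to protrude in front of the screen
```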
- a motion of a current frame may be predicted by using a previous frame and the sense of depth may be added to an image of the current frame by using the predicted motion.
- a depth map for a frame may be generated by using a composition of the frame and the sense of depth may be added to the frame by using the depth map.
- Metadata includes information to classify frames of video data as predetermined shots.
- when a composition of a current frame cannot be predicted by using a previous frame due to a lack of similarity in composition between the current frame and the previous frame, the current frame and the previous frame are classified as different shots.
- the metadata includes information about compositions to be applied to frames classified as a shot due to their similarity in composition, and/or includes information about a composition to be applied to each shot.
- the metadata includes background depth information to indicate a composition of a corresponding frame.
- the background depth information may include type information of a background included in a frame, coordinate point information of the background, and a depth value of the background corresponding to a coordinate point.
- the type information of the background may be an ID indicating a composition of the background from among a plurality of compositions.
- a frame may include, for example, a background made up of the ground and the sky.
- the horizon where the ground and the sky meet is the farthest point from the perspective of a viewer, and an image corresponding to the bottom portion of the ground is the nearest point from the perspective of the viewer.
- the image generating apparatus 100 determines that a composition of a type illustrated in FIG. 3B is to be applied to the frame illustrated in FIG. 3A , and generates metadata including type information indicative of the composition illustrated in FIG. 3B for the frame illustrated in FIG. 3A .
- Coordinate point values refer to values of a coordinate point of a predetermined position in 2D images.
- a depth value refers to the degree of depth of an image.
- the depth value may be one of 256 values ranging from 0 to 255. As the depth value decreases, the depth becomes greater and thus an image appears to be farther from a viewer. Conversely, as the depth value increases, an image appears nearer to a viewer. Referring to FIGS. 3B and 3C , it can be seen that a portion where the ground and the sky meet (i.e., the horizon portion) has the smallest depth value and the bottom portion of the ground has the largest depth value in the frame.
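Under those assumptions (depth values 0 to 255, the horizon farthest, the bottom of the ground nearest), a depth map for a ground-and-sky composition like FIG. 3B might be sketched per row. The linear interpolation and the function name are illustrative assumptions, not the patented method:

```python
def background_depth_map(height, horizon_row, sky_depth=0, bottom_depth=255):
    """Generate a per-row depth map for a ground-and-sky background.

    Rows at or above horizon_row (the farthest point) get the smallest
    depth value; depth then increases linearly down to the bottom of
    the ground (the nearest point), as in FIGS. 3B and 3C.
    Returns a list of `height` depth values, one per image row.
    """
    depth = []
    for row in range(height):
        if row <= horizon_row:
            depth.append(sky_depth)  # sky and horizon: farthest from the viewer
        else:
            # Linear interpolation from the horizon down to the bottom row.
            t = (row - horizon_row) / (height - 1 - horizon_row)
            depth.append(round(sky_depth + t * (bottom_depth - sky_depth)))
    return depth
```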
- the image processing apparatus extracts the background depth information included in the metadata, generates the depth map as illustrated in FIG. 3C by using the extracted depth information, and outputs a 2D image as a 3D image by using the depth map.
- FIG. 4 is a block diagram of an image processing apparatus 400 according to an embodiment of the present invention.
- the image processing apparatus 400 includes a video data decoding unit 410 , a metadata analyzing unit 420 , and a 3D image converting unit 430 , and an output unit 440 to output a 3D image to a screen.
- the image processing apparatus 400 need not include the output unit 440 in all embodiments, and/or the output unit 440 may be provided separately from the image processing apparatus 400 .
- the image processing apparatus 400 may be a computer, a mobile device, a set-top box, a workstation, etc.
- the output unit 440 may be a cathode ray tube device, a liquid crystal display device, a plasma display device, an organic light emitting diode display device, etc., and/or may be connected to the same, or connected to goggles through wired and/or wireless protocols.
- the video data decoding unit 410 reads video data (IN 2 ) from a disc (such as a DVD, a Blu-ray disc, etc.), a local storage, a transmission from the image generating apparatus 100 of FIG. 1 , or any external storage device (such as a hard disk drive, a flash memory, etc.), and decodes the read video data.
- the metadata analyzing unit 420 decodes metadata (IN 3 ) to extract information about frames of the read video data from the metadata, and analyzes the extracted information. By using the metadata, the metadata analyzing unit 420 controls a switching unit 433 included in the 3D image converting unit 430 in order to output a frame as a 2D image or a 3D image.
- the metadata analyzing unit 420 receives the metadata (IN 3 ) from a disc, a local storage, a transmission from the image generating apparatus 100 of FIG. 1 , or any external storage device (such as a hard disk drive, a flash memory, etc.).
- the metadata need not be stored with the video data in all aspects of the invention.
- the 3D image converting unit 430 converts the video data, which is a 2D image received from the video data decoding unit 410 , into a 3D image.
- the 3D image converting unit 430 estimates a motion of a current frame by using a previous frame in order to generate a 3D image for the current frame.
- the metadata analyzing unit 420 extracts, from the metadata, output moment information of a frame being output first and/or output moment information of a frame being output last among frames classified as a shot, and determines whether a current frame being currently decoded by the video data decoding unit 410 is classified as a new shot, based on the extracted output moment information.
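A minimal sketch of that determination, assuming the shot information is represented as (start moment, end moment) pairs on the same clock as the decoder's current output moment (the representation is an assumption for illustration):

```python
def shot_status(current_time, shots):
    """Locate the shot containing the frame output at current_time.

    shots: list of (start_moment, end_moment) pairs, one per shot,
    taken from the shot information in the metadata (hypothetical
    layout). Returns (shot_index, is_new_shot): is_new_shot is True
    when the current frame is the first frame of its shot, i.e., the
    frame is classified as a new shot and motion estimation from the
    previous frame should be disabled.
    """
    for i, (start, end) in enumerate(shots):
        if start <= current_time <= end:
            return i, current_time == start
    raise ValueError("output moment falls outside every shot")
```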
- when the metadata analyzing unit 420 determines that the current frame is classified as a new shot, the metadata analyzing unit 420 controls the switching unit 433 so as not to convert the current frame into a 3D image, such that a motion estimating unit 434 does not estimate the motion of the current frame by using a previous frame stored in a previous frame storing unit 432 .
- the switching unit 433 disconnects the previous frame storing unit 432 from the motion estimating unit 434 to prevent use of the previous frame, though aspects of the invention are not limited thereto.
- the metadata includes the shot type information indicating whether frames of the video data are to be output as a 2D image or a 3D image.
- the metadata analyzing unit 420 determines whether the video data is to be output as a 2D image or a 3D image for each shot using the shot type information and controls the switching unit 433 depending on a result of the determination.
- when the metadata analyzing unit 420 determines, based on the shot type information, that video data classified as a predetermined shot is not to be converted into a 3D image, the metadata analyzing unit 420 controls the switching unit 433 such that the 3D image converting unit 430 does not estimate the motion of the current frame by using the previous frame, by disconnecting the previous frame storing unit 432 from the motion estimating unit 434 .
- when the metadata analyzing unit 420 determines, based on the shot type information, that video data classified as a predetermined shot is to be converted into a 3D image, the metadata analyzing unit 420 controls the switching unit 433 such that the 3D image converting unit 430 converts the current frame into a 3D image by using the previous frame, by connecting the previous frame storing unit 432 to the motion estimating unit 434 .
- the 3D image converting unit 430 converts the video data, which is a 2D image received from the video data decoding unit 410 , into the 3D image.
- the 3D image converting unit 430 includes an image block unit 431 , the previous frame storing unit 432 , the motion estimating unit 434 , a block synthesizing unit 435 , a left-/right-view image determining unit 436 , and the switching unit 433 .
- the image block unit 431 divides a frame of video data, which is a 2D image, into blocks of a predetermined size.
- the previous frame storing unit 432 stores a predetermined number of previous frames preceding a current frame. Under the control of the metadata analyzing unit 420 , the switching unit 433 enables or disables outputting of previous frames stored in the previous frame storing unit 432 to the motion estimating unit 434 .
- the motion estimating unit 434 obtains a per-block motion vector regarding the amount and direction of motion using a block of a current frame and a block of a previous frame.
- the block synthesizing unit 435 synthesizes blocks selected by using the motion vectors obtained by the motion estimating unit 434 from among predetermined blocks of previous frames in order to generate a new frame.
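A simplified, illustrative form of the per-block motion estimation described above is exhaustive block matching under a sum-of-absolute-differences (SAD) criterion. The block size, search range, and function names are assumptions for illustration, not the apparatus's actual implementation:

```python
def best_match(current_block_pos, current, previous, block=4, search=2):
    """Find the motion vector for one block by exhaustive search.

    current, previous: 2D lists of pixel values (grayscale frames).
    Returns the (dy, dx) offset into the previous frame whose block
    best matches the current block, i.e., the amount and direction of
    motion, under the SAD criterion.
    """
    y0, x0 = current_block_pos
    h, w = len(previous), len(previous[0])
    best, best_mv = float("inf"), (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = y0 + dy, x0 + dx
            if not (0 <= y <= h - block and 0 <= x <= w - block):
                continue  # candidate block would fall outside the previous frame
            sad = sum(
                abs(current[y0 + i][x0 + j] - previous[y + i][x + j])
                for i in range(block)
                for j in range(block)
            )
            if sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv
```

The block synthesizing step then assembles the blocks selected by these vectors into a new frame.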
- when the outputting of previous frames is disabled, the motion estimating unit 434 outputs the current frame received from the image block unit 431 to the block synthesizing unit 435 .
- the generated new frame or the current frame is input to the left-/right-view image determining unit 436 .
- the left-/right-view image determining unit 436 determines a left-view image and a right-view image by using the frame received from the block synthesizing unit 435 and a frame received from the video data decoding unit 410 .
- when the metadata analyzing unit 420 controls the switching unit 433 so as not to convert video data into a 3D image, the left-/right-view image determining unit 436 generates a left-view image and a right-view image that are the same as each other, by using the 2D-image frame received from the block synthesizing unit 435 and the 2D-image frame received from the video data decoding unit 410 .
- the left-/right-view image determining unit 436 outputs the left-view image and the right-view image to the output unit 440 , an external output device, and/or an external terminal (such as a computer, an external display device, a server, etc.).
- the image processing apparatus 400 further includes the output unit 440 to output the left-view image and the right-view image (OUT 2 ) determined by the left-/right-view image determining unit 436 to the screen alternately, at least every 1/120 second.
- the image processing apparatus 400 does not convert video data corresponding to a shot change point or video data for which 3D image conversion is not required according to the determination based on the shot information provided in metadata, thereby reducing unnecessary computation and complexity of the apparatus 400 .
- the output image OUT 2 can be received at a receiving unit through which a user sees the screen, such as goggles, through wired and/or wireless protocols.
- FIG. 5 is a block diagram of an image processing apparatus 500 according to another embodiment of the present invention.
- the image processing apparatus 500 includes a video data decoding unit 510 , a metadata analyzing unit 520 , a 3D image converting unit 530 , and an output unit 540 .
- the image processing apparatus 500 need not include the output unit 540 in all embodiments, and/or the output unit 540 may be provided separately from the image processing apparatus 500 .
- the image processing apparatus 500 may be a computer, a mobile device, a set-top box, a workstation, etc.
- the output unit 540 may be a cathode ray tube device, a liquid crystal display device, a plasma display device, an organic light emitting diode display device, etc. and/or connected to the same or connected to goggles through wired and/or wireless protocols.
- each of the units 510 , 520 , 530 can be one or more processors or processing elements on one or more chips or integrated circuits.
- the video data decoding unit 510 and the metadata analyzing unit 520 read the video data (IN 4 ) and the metadata (IN 5 ) from the loaded disc.
- the metadata may be recorded in a lead-in region, a user data region, and/or a lead-out region of the disc.
- aspects of the present invention are not limited to receiving the video data and the metadata from a disc.
- the image processing apparatus 500 may further include a communicating unit (not shown) to communicate with an external server or an external terminal (for example, through a communication network and/or any wired/wireless connection).
- the image processing apparatus 500 may download video data and/or metadata associated therewith from the external server or the external terminal and store the downloaded data in a local storage (not shown).
- the image processing apparatus 500 may receive the video data and/or metadata from any external storage device different from the disc (for example, a flash memory).
- the video data decoding unit 510 reads the video data from the disc, the external storage device, the external terminal, or the local storage and decodes the read video data.
- the metadata analyzing unit 520 reads the metadata associated with the video data from the disc, the external storage device, the external terminal, or the local storage and analyzes the read metadata.
- the metadata analyzing unit 520 extracts, from the metadata, a disc ID to identify the disc recorded with the video data and a title ID to identify a title including the video data among a plurality of titles recorded in the disc, and determines which video data the metadata is associated with by using the extracted disc ID and title ID.
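A minimal sketch of that association check, with an assumed key layout for the metadata entries (the dictionary keys are hypothetical, chosen only for illustration):

```python
def find_metadata(disc_id, title_id, metadata_entries):
    """Return the metadata entry associated with the given video data.

    metadata_entries: iterable of dicts, each carrying the disc ID and
    title ID it was generated for (hypothetical layout). The pair
    (disc_id, title_id) identifies one title on one disc.
    Returns the matching entry, or None when no associated metadata
    exists and it must be obtained elsewhere (e.g., downloaded from a
    server through a communication network).
    """
    for entry in metadata_entries:
        if entry["disc_id"] == disc_id and entry["title_id"] == title_id:
            return entry
    return None
```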
- the metadata analyzing unit 520 analyzes the metadata to extract information about frames of the video data classified as a predetermined shot.
- the metadata analyzing unit 520 determines whether a current frame is video data corresponding to a shot change point (i.e., is classified as a new shot), in order to control a depth map generating unit 531 .
- the metadata analyzing unit 520 determines whether the frames classified as the predetermined shot are to be output as a 2D image or a 3D image by using shot type information, and controls the depth map generating unit 531 according to a result of the determination. Furthermore, the metadata analyzing unit 520 extracts depth information from the metadata and outputs the depth information to the depth map generating unit 531 .
- the 3D image converting unit 530 generates a 3D image for video data.
- the 3D image converting unit 530 includes the depth map generating unit 531 and a stereo rendering unit 533 .
- the depth map generating unit 531 generates a depth map for a frame by using the background depth information received from the metadata analyzing unit 520 .
- the background depth information includes coordinate point values of a background included in a current frame, a depth value corresponding to the coordinate point values, and a panel position value that represents a depth value of the screen on which an image is output.
- the depth map generating unit 531 generates a depth map for the background of the current frame by using the background depth information and outputs the generated depth map to the stereo rendering unit 533 .
- when the frames classified as the predetermined shot are to be output as a 2D image, the depth map generating unit 531 outputs the current frame to the stereo rendering unit 533 without generating the depth map for the current frame.
- the stereo rendering unit 533 generates a left-view image and a right-view image by using the video data received from the video data decoding unit 510 and the depth map received from the depth map generating unit 531 . Accordingly, the stereo rendering unit 533 generates a 3D-format image including both the generated left-view image and the generated right-view image.
- a frame received from the depth map generating unit 531 and a frame received from the video data decoding unit 510 are the same as each other, and thus the left-view image and the right-view image generated by the stereo rendering unit 533 are also the same as each other.
- the 3D format may be a top-and-down format, a side-by-side format, or an interlaced format.
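The side-by-side variant of those 3D formats can be illustrated with a simple packing function. This sketch keeps each view at full resolution for clarity; real formats typically squeeze each view to half width first, and the function name is an assumption:

```python
def pack_side_by_side(left, right):
    """Pack a left-view and a right-view image into one side-by-side frame.

    left, right: 2D lists of pixels with identical dimensions. The
    result places the left view in the left half and the right view
    in the right half of each row.
    """
    if len(left) != len(right):
        raise ValueError("views must have the same height")
    return [l_row + r_row for l_row, r_row in zip(left, right)]
```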
- the stereo rendering unit 533 outputs the left-view image and the right-view image to the output unit 540 , an external output device, and/or an external terminal (such as a computer, an external display device, a server, etc.).
- the image processing apparatus 500 further includes the output unit 540 that operates as an output device.
- the output unit 540 sequentially outputs the left-view image and the right-view image received from the stereo rendering unit 533 to the screen.
- a viewer perceives that an image is sequentially and seamlessly reproduced when the image is output at a frame rate of at least 60 Hz as viewed from a single eye. Therefore, the output unit 540 outputs images to the screen at a frame rate of at least 120 Hz so that the viewer can perceive that a 3D image is seamlessly reproduced.
- the output unit 540 sequentially outputs the left-view image and the right-view image (OUT 3 ) included in a frame to the screen at least every 1/120 second. The viewer can have his/her view selectively blocked using goggles to alternate which eye receives the image and/or using polarized light.
- FIG. 6 is a flowchart illustrating an image processing method according to an embodiment of the present invention.
- the image processing apparatus 400 or 500 determines whether metadata associated with read video data exists in operation 610 . For example, when the video data and metadata are provided on a disc and the disc is loaded and the image processing apparatus 400 or 500 is instructed to output a predetermined title of the loaded disc, the image processing apparatus 400 or 500 determines whether metadata associated with the title exists therein by using a disc ID and a title ID in operation 610 . If the image processing apparatus 400 or 500 determines that the disc does not have the metadata therein, the image processing apparatus 400 or 500 may download the metadata from an external server or the like through a communication network in operation 620 .
- the metadata may be provided for existing video, such as movies on DVD and Blu-ray discs, or for computer games.
- the disc could only contain the metadata, and when the metadata for a particular video is selected, the video is downloaded from the server.
- the image processing apparatus 400 or 500 extracts, from the metadata associated with the video data, information about a unit into which the video data is classified in operation 630 .
- the information about a unit may be information about a shot (i.e., shot information) in some aspects of the present invention.
- the shot information indicates whether a current frame is classified as the same shot as a previous frame, and may include shot type information indicating whether the current frame is to be output as a 2D image or a 3D image.
- the image processing apparatus 400 or 500 determines whether to output frames as a 2D image or a 3D image by using the shot information, and outputs frames classified as a predetermined shot as a 2D image or a 3D image according to a result of the determination in operation 640 .
- FIG. 7 is a flowchart illustrating in detail operation 640 of FIG. 6 .
- when outputting video data, the image processing apparatus 400 or 500 determines whether a current frame has a different composition from a previous frame and is, thus, classified as a new shot in operation 710 .
- when the image processing apparatus 400 or 500 determines that the current frame is classified as the new shot, the image processing apparatus 400 or 500 outputs an initial frame included in the new shot as a 2D image, without converting the initial frame into a 3D image, in operation 720 .
- the image processing apparatus 400 or 500 determines whether to output the remaining frames following the initial frame among total frames classified as the new shot as a 2D image or a 3D image by using shot type information regarding the new shot, provided in metadata, in operation 730 .
- when the shot type information regarding the new shot indicates that the video data classified as the new shot is to be output as a 3D image (operation 730 ), the image processing apparatus 400 or 500 converts the video data classified as the new shot into a 3D image, determines a left-view image and a right-view image from the converted video data and the video data being a 2D image, and outputs the video data classified as the new shot as a 3D image in operation 740 .
- when the image processing apparatus 500 generates a 3D image by using composition information as in FIG. 5 , the image processing apparatus 500 extracts background depth information to be applied to a current frame classified as a new shot from the metadata and generates a depth map for the current frame by using the background depth information.
- when the shot type information regarding the new shot indicates that the video data classified as the new shot is to be output as a 2D image (operation 730 ), the image processing apparatus 400 or 500 outputs the video data as a 2D image without converting the video data into a 3D image in operation 750 . The image processing apparatus 400 or 500 then determines whether the entire video data has been completely output in operation 760 . If not, the image processing apparatus 400 or 500 repeats operation 710 .
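The flow of operations 710 through 760 can be sketched as a per-frame loop. The shot-start set, the shot-type lookup, and the conversion/output callbacks are assumed interfaces for illustration, not elements of the patented apparatus:

```python
def process_frames(frames, shot_starts, shot_is_3d,
                   convert_to_3d, output_2d, output_3d):
    """Per-frame output loop following operations 710-760 of FIG. 7.

    shot_starts: indices of frames that begin a new shot (operation 710).
    shot_is_3d: maps a shot-start index to True when shot type
    information says that shot is to be output as a 3D image
    (operation 730). The initial frame of each new shot is always
    output as a 2D image without conversion (operation 720); remaining
    frames of a 3D shot are converted and output as a 3D image
    (operation 740), and frames of a 2D shot are output unconverted
    (operation 750).
    """
    current_shot = None
    for i, frame in enumerate(frames):
        if i in shot_starts:                 # operation 710: shot change point
            current_shot = i
            output_2d(frame)                 # operation 720: initial frame stays 2D
        elif shot_is_3d[current_shot]:       # operation 730: consult shot type info
            output_3d(convert_to_3d(frame))  # operation 740: convert and output as 3D
        else:
            output_2d(frame)                 # operation 750: output as 2D
    # operation 760: the loop ends when the entire video data has been output
```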
- video data can be output as a 2D image at a shot change point.
- it is determined for each shot whether to output video data as a 2D image or a 3D image and the video data is output according to a result of the determination, thereby avoiding the increase in the amount of computation that would result from converting the total video data into a 3D image.
- aspects of the present invention can also be embodied as computer-readable code on a computer-readable recording medium.
- the computer-readable recording medium is any data storage device that can store data that can be thereafter read by a computer system. Examples of the computer-readable recording medium include read-only memory (ROM), random-access memory (RAM), CD-ROMs, magnetic tapes, floppy disks, and optical data storage devices.
- the computer-readable recording medium can also be distributed over network-coupled computer systems so that the computer-readable code is stored and executed in a distributed fashion.
- aspects of the present invention may also be realized as a data signal embodied in a carrier wave and comprising a program readable by a computer and transmittable over the Internet.
- one or more units of the image processing apparatus 400 and 500 can include a processor or microprocessor executing a computer program stored in a computer-readable medium, such as a local storage (not shown).
- the image generating apparatus 100 and the image processing apparatus 400 or 500 may be provided in a single apparatus in some embodiments.
Abstract
An image processing method and apparatus and an image generating method and apparatus, the image processing method to output video data being a two-dimensional (2D) image as the 2D image or a three-dimensional (3D) image including: extracting information about the video data from metadata associated with the video data; and outputting the video data as the 2D image or the 3D image by using the extracted information about the video data.
Description
- This application claims the benefit of U.S. Provisional Application No. 61/075,184, filed on Jun. 24, 2008 in the U.S. Patent and Trademark Office, and the benefit of Korean Patent Application No. 10-2008-0091269, filed on Sep. 17, 2008 in the Korean Intellectual Property Office, the disclosures of which are incorporated herein in their entirety by reference.
- 1. Field of the Invention
- Aspects of the present invention generally relate to an image generating method and apparatus and an image processing method and apparatus, and more particularly, to an image generating method and apparatus and an image processing method and apparatus in which video data is output as a two-dimensional (2D) image or a three-dimensional (3D) image by using metadata associated with the video data.
- 2. Description of the Related Art
- With the development of digital technology, three-dimensional (3D) image technology has widely spread. The 3D image technology expresses a more realistic image by adding depth information to a two-dimensional (2D) image. The 3D image technology can be classified into technology to generate video data as a 3D image and technology to convert video data generated as a 2D image into a 3D image. Both technologies have been studied together.
- Aspects of the present invention provide an image processing method and apparatus to output video data as a two-dimensional image or a three-dimensional image by using metadata associated with the video data.
- According to an aspect of the present invention, there is provided an image processing method to output video data being a two-dimensional (2D) image as the 2D image or a three-dimensional (3D) image, the image processing method including: extracting information about the video data from metadata associated with the video data; and outputting the video data as the 2D image or the 3D image by using the extracted information about the video data, wherein the information about the video data includes information to classify frames of the video data into predetermined units.
- According to an aspect of the present invention, the information to classify the frames of the video data as the predetermined units may be shot information to classify a group of frames in which a background composition of a current frame is predictable by using a previous frame preceding the current frame as a shot.
- According to an aspect of the present invention, the shot information may include output moment information of a frame being output first and output moment information of a frame being output last from among the group of frames classified as the shot.
- According to an aspect of the present invention, the metadata may include shot type information indicating whether the frames classified as the shot are to be output as the 2D image or the 3D image, and the outputting of the video data may include outputting the frames classified as the shot as the 2D image or the 3D image by using the shot type information.
- According to an aspect of the present invention, the outputting of the video data may include determining, by using the metadata, whether a background composition of a current frame is not predictable by using a previous frame preceding the current frame and thus the current frame is classified as a new shot, outputting the current frame as the 2D image when the current frame is classified as the new shot, and converting the remaining frames of the frames classified as the new shot into the 3D image and outputting the converted 3D image.
- According to an aspect of the present invention, the outputting of the video data may include determining, by using the metadata, whether a background composition of a current frame is not predictable by using a previous frame preceding the current frame and thus the current frame is classified as a new shot, extracting background depth information to be applied to the current frame classified as the new shot from the metadata when the current frame is classified as the new shot, and generating a depth map for the current frame by using the background depth information.
- According to an aspect of the present invention, the generating of the depth map for the current frame may include generating the depth map for a background of the current frame by using coordinate point values of the background of the current frame, depth values corresponding to the coordinate point values, and a panel position value, in which the coordinate point values, the depth value, and the panel position value are included in the background depth information.
- According to an aspect of the present invention, the image processing method may further include reading the metadata from a disc recorded with the video data or downloading the metadata from a server through a communication network.
- According to an aspect of the present invention, the metadata may include identification information to identify the video data, and the identification information may include a disc identifier (ID) to identify a disc recorded with the video data and a title ID to indicate a title including the video data among a plurality of titles recorded in the disc identified by the disc ID.
- According to another aspect of the present invention, there is provided an image generating method including: receiving video data being a two-dimensional (2D) image; and generating metadata associated with the video data, the metadata including information to classify frames of the video data as predetermined units, wherein the information to classify the frames of the video data as the predetermined units is shot information to classify a group of frames in which a background composition of a current frame is predictable by using a previous frame preceding the current frame as a shot.
- According to an aspect of the present invention, the shot information may include output moment information of a frame being output first and output moment information of a frame being output last from among the frames classified as the shot, and/or may include shot type information indicating whether the frames classified as the shot are to be output as the 2D image or a three-dimensional (3D) image.
- According to an aspect of the present invention, the metadata may include background depth information for frames classified as a predetermined shot and the background depth information may include coordinate point values of a background of the frame classified as the predetermined shot, depth values corresponding to the coordinate point values, and a panel position value.
- According to another aspect of the present invention, there is provided an image processing apparatus to output video data being a two-dimensional (2D) image as the 2D image or a three-dimensional (3D) image, the image processing apparatus including: a metadata analyzing unit to determine whether the video data is to be output as the 2D image or the 3D image by using metadata associated with the video data; a 3D image converting unit to convert the video data into the 3D image when the video data is to be output as the 3D image; and an output unit to output the video data as the 2D image or the 3D image, wherein the metadata includes information to classify frames of the video data into predetermined units.
- According to an aspect of the present invention, the information to classify the frames of the video data into the predetermined units may be shot information to classify a group of frames in which a background composition of a current frame is predictable by using a previous frame preceding the current frame as a shot.
- According to an aspect of the present invention, the shot information may include output moment information of a frame being output first and output moment information of a frame being output last from among the frames classified as the shot.
- According to an aspect of the present invention, the metadata may include shot type information indicating whether the frames classified as the shot are to be output as the 2D image or the 3D image.
- According to an aspect of the present invention, the metadata may include background depth information for a frame classified as a predetermined shot, and the background depth information may include coordinate point values of a background of the frame classified as the predetermined shot, depth values corresponding to the coordinate point values, and a panel position value.
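The metadata items named in the aspects above (disc/title identification, shot boundaries, shot type, and background depth information) can be pictured as a simple data structure. The patent does not prescribe a concrete layout or these field names, so the following is only an illustrative sketch:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

# Hypothetical field names: the text specifies which items the metadata
# carries, but not a concrete binary or record layout.

@dataclass
class BackgroundDepthInfo:
    coordinate_points: List[Tuple[int, int]]  # (x, y) points of the background
    depth_values: List[int]                   # 0-255 per point; smaller = farther
    panel_position: int                       # depth value assigned to the screen

@dataclass
class ShotInfo:
    start_moment: float                       # output moment of the first frame
    end_moment: float                         # output moment of the last frame
    output_as_3d: bool                        # shot type: convert to 3D or keep 2D
    background_depth: Optional[BackgroundDepthInfo] = None

@dataclass
class Metadata:
    disc_id: str                              # identifies the disc with the video
    title_id: str                             # identifies the title on that disc
    shots: List[ShotInfo] = field(default_factory=list)
```

An image processing apparatus reading such a record would use `disc_id`/`title_id` to match the metadata to its video data, then consult each `ShotInfo` to decide whether and how to convert the shot's frames.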
- According to another aspect of the present invention, there is provided an image generating apparatus including: a video data encoding unit to encode video data being a two-dimensional (2D) image; a metadata generating unit to generate metadata associated with the video data, the metadata including information to classify frames of the video data into predetermined units; and a metadata encoding unit to encode the metadata, in which the information to classify the frames of the video data into the predetermined units is shot information to classify a group of frames in which a background composition of a current frame is predictable by using a previous frame preceding the current frame as a shot.
- According to yet another aspect of the present invention, there is provided a computer-readable information storage medium including video data being a two-dimensional (2D) image and metadata associated with the video data, the metadata including information to classify frames of the video data into predetermined units, wherein the information to classify the frames of the video data into the predetermined units is shot information to classify a group of frames in which a background composition of a current frame is predictable by using a previous frame preceding the current frame as a shot.
- According to still another aspect of the present invention, there is provided a computer-readable information storage medium having recorded thereon a program to execute an image processing method to output video data being a two-dimensional (2D) image as the 2D image or a three-dimensional (3D) image, the image processing method including: extracting information about the video data from metadata associated with the video data; and outputting the video data as the 2D image or the 3D image by using the extracted information about the video data, wherein the information about the video data includes information to classify frames of the video data into predetermined units.
- According to an aspect of the present invention, there is provided a system to output video data as a two-dimensional (2D) image or a three-dimensional (3D) image, the system including: an image generating apparatus including: a video data encoding unit to encode the video data being the 2D image, a metadata generating unit to generate metadata associated with the video data, the metadata comprising information to classify frames of the video data as predetermined units and used to determine whether each of the classified frames is to be converted to the 3D image; and an image processing apparatus to receive the encoded video data and the generated metadata, and to output the video data as the 2D image or the 3D image, the image processing apparatus including: a metadata analyzing unit to determine whether the video data is to be output as the 2D image or the 3D image by using the information to classify the frames of the video data comprised in the received metadata associated with the video data, a 3D image converting unit to convert the video data into the 3D image when the metadata analyzing unit determines that the video data is to be output as the 3D image, and an output unit to output the video data as the 2D image or the 3D image according to the determination of the metadata analyzing unit, wherein the information to classify the frames of the video data as the predetermined units is shot information to classify a group of frames in which a background composition of a current frame is predictable by using a previous frame preceding the current frame in the group of frames as a shot.
- According to another aspect of the present invention, there is provided a computer-readable information storage medium including: metadata associated with video data comprising two-dimensional (2D) frames, the metadata comprising information used by an image processing apparatus to classify the frames of the video data as predetermined units and used by the image processing apparatus to determine whether each of the classified frames is to be converted by the image processing apparatus to a three-dimensional (3D) image, wherein the information to classify the frames of the video data as the predetermined units comprises shot information to classify, as a shot, a group of frames in which a background composition of a current frame is predictable by using a previous frame preceding the current frame in the group of frames.
- According to another aspect of the present invention, there is provided an image processing method to output video data having two-dimensional (2D) images as the 2D images or three-dimensional (3D) images, the image processing method including: determining, by an image processing apparatus, whether metadata associated with the video data exists on a disc comprising the video data; reading, by the image processing apparatus, the metadata from the disc if the metadata is determined to exist on the disc; retrieving, by the image processing apparatus, the metadata from a server if the metadata is determined to not exist on the disc; and outputting, by the image processing apparatus, the video data as selectable between the 2D image and the 3D image according to the metadata.
- Additional aspects and/or advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
- These and/or other aspects and advantages of the invention will become apparent and more readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
-
FIG. 1 is a block diagram of an image generating apparatus according to an embodiment of the present invention; -
FIG. 2 illustrates metadata generated by the image generating apparatus illustrated in FIG. 1; -
FIGS. 3A through 3C are views to explain a depth map generated by using background depth information; -
FIG. 4 is a block diagram of an image processing apparatus according to an embodiment of the present invention; -
FIG. 5 is a block diagram of an image processing apparatus according to another embodiment of the present invention; -
FIG. 6 is a flowchart illustrating an image processing method according to an embodiment of the present invention; and -
FIG. 7 is a flowchart illustrating in detail an operation illustrated in FIG. 6 where video data is output as a two-dimensional (2D) image or a three-dimensional (3D) image. - Reference will now be made in detail to the present embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the like elements throughout. The embodiments are described below in order to explain the present invention by referring to the figures.
-
FIG. 1 is a block diagram of an image generating apparatus 100 according to an embodiment of the present invention. Referring to FIG. 1, the image generating apparatus 100 includes a video data generating unit 110, a video data encoding unit 120, a metadata generating unit 130, a metadata encoding unit 140, and a multiplexing unit 150. The video data generating unit 110 generates video data and outputs the generated video data to the video data encoding unit 120. The video data encoding unit 120 encodes the input video data and outputs the encoded video data (OUT1) to the multiplexing unit 150, and/or to an image processing apparatus (not shown) through a communication network, though it is understood that the video data encoding unit 120 may output the encoded video data to the image processing apparatus through any wired and/or wireless connection (such as IEEE 1394, universal serial bus, Bluetooth, infrared, etc.). The image generating apparatus 100 may be a computer, a workstation, a camera device, a mobile device, a stand-alone device, etc. Moreover, while not required, each of the units 110 through 150 can be implemented as one or more processors or processing elements on one or more chips or integrated circuits. - The
metadata generating unit 130 analyzes the video data generated by the video data generating unit 110 to generate metadata including information about frames of the video data. The metadata includes information to convert the generated video data from a two-dimensional (2D) image into a three-dimensional (3D) image. The metadata also includes information to classify the frames of the video data as predetermined units. The metadata generated by the metadata generating unit 130 will be described in more detail with reference to FIG. 2. The metadata generating unit 130 outputs the generated metadata to the metadata encoding unit 140. - The
metadata encoding unit 140 encodes the input metadata and outputs the encoded metadata (OUT3) to the multiplexing unit 150 and/or to the image processing apparatus. The multiplexing unit 150 multiplexes the encoded video data (OUT1) and the encoded metadata (OUT3) and transmits the multiplexing result (OUT2) to the image processing apparatus through a wired and/or wireless communication network, or any wired and/or wireless connection, as described above. The metadata encoding unit 140 may transmit the encoded metadata (OUT3), separately from the encoded video data (OUT1), to the image processing apparatus, instead of to or in addition to the multiplexing unit 150. In this way, the image generating apparatus 100 generates metadata associated with video data, the metadata including information to convert the video data from a 2D image into a 3D image. -
FIG. 2 illustrates metadata generated by the image generating apparatus 100 illustrated in FIG. 1. The metadata includes information about video data. In order to indicate with which video data the information included in the metadata is associated, disc identification information to identify a disc in which the video data is recorded is included in the metadata, though it is understood that the metadata does not include the disc identification information in other embodiments. The disc identification information may include a disc identifier (ID) to identify the disc recorded with the video data and a title ID to identify a title including the video data among a plurality of titles recorded in the disc identified by the disc ID. - Since the video data has a series of frames, the metadata includes information about the frames. The information about the frames may include information to classify the frames according to a predetermined criterion. Assuming that a group of similar frames is a unit, total frames of the video data can be classified as a plurality of units. In the present embodiment, information to classify the frames of the video data as predetermined units is included in the metadata. Specifically, a group of frames having similar background compositions in which a background composition of a current frame can be predicted by using a previous frame preceding the current frame is classified as a shot. The
metadata generating unit 130 classifies the frames of the video data as a predetermined shot and incorporates information about the shot (i.e., shot information) into the metadata. When the background composition of the current frame is different from that of the previous frame due to a significant change in the frame background composition, the current frame and the previous frame are classified as different shots. - The shot information includes information about output moments of frames classified within the shot. For example, such information includes output moment information of a frame being output first (shot start moment information in
FIG. 2) and output moment information of a frame being output last (shot end moment information in FIG. 2) among the frames classified as each shot, though aspects of the present invention are not limited thereto. For example, according to other aspects, the shot information includes the shot start moment information and information on a number of frames included in the shot. The metadata further includes shot type information about frames classified as a shot. The shot type information indicates for each shot whether frames classified as a shot are to be output as a 2D image or a 3D image. The metadata also includes background depth information, which will be described in detail with reference to FIGS. 3A through 3C. -
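The shot classification described above can be sketched in a few lines: walk the frame sequence, start a new shot whenever the current frame's background is no longer predictable from the previous frame, and record the first and last output moments of each shot. The similarity predicate and the frame rate used here are assumptions for illustration; the patent leaves both unspecified:

```python
def classify_shots(frames, similar, fps=24.0):
    """Group consecutive frames into shots (hypothetical interface).

    `frames` is a sequence of frame objects; `similar(prev, cur)` returns True
    when the background composition of `cur` is predictable from `prev`.
    Returns one (start_moment, end_moment) pair per shot, in seconds.
    """
    shots = []
    start = 0
    for i in range(1, len(frames)):
        # a significant change in background composition starts a new shot
        if not similar(frames[i - 1], frames[i]):
            shots.append((start / fps, (i - 1) / fps))
            start = i
    if frames:
        # close the final shot with the last frame's output moment
        shots.append((start / fps, (len(frames) - 1) / fps))
    return shots
```

The returned pairs correspond to the shot start moment information and shot end moment information of FIG. 2.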
FIGS. 3A through 3C are views to explain a depth map generated by using the background depth information. FIG. 3A illustrates a 2D image, FIG. 3B illustrates a depth map to be applied to the 2D image illustrated in FIG. 3A, and FIG. 3C illustrates a result of applying the depth map to the 2D image. In order to add a cubic effect to a 2D image, a sense of depth is given to the 2D image. When a user sees a screen, an image projected on the screen is formed in each of the user's two eyes. A distance between two points of the images formed in the eyes is called parallax, and the parallax can be classified into positive parallax, zero parallax, and negative parallax. The positive parallax refers to parallax corresponding to a case when the image appears to be formed inside the screen, and the positive parallax is less than or equal to a distance between the eyes. As the positive parallax increases, a greater cubic effect by which the image appears to lie behind the screen is given. When the image appears to be two-dimensionally formed on the screen plane, the parallax is 0 (i.e., zero parallax). In the case of the zero parallax, the user cannot feel a cubic effect because the image is formed on the screen plane. The negative parallax refers to parallax corresponding to a case when the image appears to lie in front of the screen. This parallax is generated when lines of sight to the user's eyes intersect. The negative parallax gives a cubic effect by which the image appears to protrude forward. - In order to generate a 3D image by adding the sense of depth to a 2D image, a motion of a current frame may be predicted by using a previous frame and the sense of depth may be added to an image of the current frame by using the predicted motion. For the same purpose, a depth map for a frame may be generated by using a composition of the frame and the sense of depth may be added to the frame by using the depth map. The former will be described in detail with reference to
FIG. 4, and the latter will be described in detail with reference to FIG. 5. - As stated previously, metadata includes information to classify frames of video data as predetermined shots. When a composition of a current frame cannot be predicted by using a previous frame due to no similarity in composition between the current frame and the previous frame, the current frame and the previous frame are classified as different shots. The metadata includes information about compositions to be applied to frames classified as a shot due to their similarity in composition, and/or includes information about a composition to be applied to each shot.
- Background compositions of frames may vary. The metadata includes background depth information to indicate a composition of a corresponding frame. The background depth information may include type information of a background included in a frame, coordinate point information of the background, and a depth value of the background corresponding to a coordinate point. The type information of the background may be an ID indicating a composition of the background from among a plurality of compositions.
- Referring to
FIG. 3A, a frame includes a background including the ground and the sky. In this frame, the horizon where the ground and the sky meet is the farthest point from the perspective of a viewer, and an image corresponding to the bottom portion of the ground is the nearest point from the perspective of the viewer. The image generating apparatus 100 determines that a composition of a type illustrated in FIG. 3B is to be applied to the frame illustrated in FIG. 3A, and generates metadata including type information indicative of the composition illustrated in FIG. 3B for the frame illustrated in FIG. 3A. - Coordinate point values refer to values of a coordinate point of a predetermined position in 2D images. A depth value refers to the degree of depth of an image. In aspects of the present invention, the depth value may be one of 256 values ranging from 0 to 255. As the depth value decreases, the depth becomes greater and thus an image appears to be farther from a viewer. Conversely, as the depth value increases, an image appears nearer to a viewer. Referring to
FIGS. 3B and 3C, it can be seen that a portion where the ground and the sky meet (i.e., the horizon portion) has a smallest depth value and the bottom portion of the ground has a largest depth value in the frame. The image processing apparatus (not shown) extracts the background depth information included in the metadata, generates the depth map as illustrated in FIG. 3C by using the extracted depth information, and outputs a 2D image as a 3D image by using the depth map. -
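For the ground-and-sky composition of FIGS. 3A and 3B, a depth map like FIG. 3C's input can be built by assigning the smallest depth value (farthest) to the sky and horizon and increasing the value linearly toward the bottom of the frame (nearest). This is a sketch under assumed conventions (0 = farthest, 255 = nearest, as stated in the text; linear interpolation is an assumption):

```python
import numpy as np

def ground_sky_depth_map(height, width, horizon_row):
    """Sketch of a depth map for a ground-and-sky composition (cf. FIG. 3B).

    Rows at or above the horizon (sky and horizon line) get depth 0, the
    farthest value; depth then grows linearly to 255 at the bottom row,
    the nearest point from the viewer's perspective.
    """
    depth = np.zeros((height, width), dtype=np.uint8)
    ground_rows = height - 1 - horizon_row
    for row in range(height):
        if row <= horizon_row:
            depth[row, :] = 0                          # farthest: sky / horizon
        else:
            t = (row - horizon_row) / ground_rows      # 0 at horizon -> 1 at bottom
            depth[row, :] = int(round(255 * t))        # nearest at the bottom
    return depth
```

A real implementation would instead interpolate between the coordinate point values and depth values carried in the background depth information, rather than assume this fixed composition.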
FIG. 4 is a block diagram of an image processing apparatus 400 according to an embodiment of the present invention. Referring to FIG. 4, the image processing apparatus 400 includes a video data decoding unit 410, a metadata analyzing unit 420, a 3D image converting unit 430, and an output unit 440 to output a 3D image to a screen. However, it is understood that the image processing apparatus 400 need not include the output unit 440 in all embodiments, and/or the output unit 440 may be provided separately from the image processing apparatus 400. Moreover, the image processing apparatus 400 may be a computer, a mobile device, a set-top box, a workstation, etc. The output unit 440 may be a cathode ray tube device, a liquid crystal display device, a plasma display device, an organic light emitting diode display device, etc., and/or be connected to the same, and/or be connected to goggles through wired and/or wireless protocols. - The video
data decoding unit 410 reads video data (IN2) from a disc (such as a DVD, a Blu-ray disc, etc.), from a local storage, as transmitted from the image generating apparatus 100 of FIG. 1, or from any external storage device (such as a hard disk drive, a flash memory, etc.), and decodes the read video data. The metadata analyzing unit 420 decodes metadata (IN3) to extract information about frames of the read video data from the metadata, and analyzes the extracted information. By using the metadata, the metadata analyzing unit 420 controls a switching unit 433 included in the 3D image converting unit 430 in order to output a frame as a 2D image or a 3D image. The metadata analyzing unit 420 receives the metadata (IN3) from a disc, from a local storage, as transmitted from the image generating apparatus 100 of FIG. 1, or from any external storage device (such as a hard disk drive, a flash memory, etc.). The metadata need not be stored with the video data in all aspects of the invention. - The 3D
image converting unit 430 converts the video data from a 2D image received from the video data decoding unit 410 into a 3D image. In FIG. 4, the 3D image converting unit 430 estimates a motion of a current frame by using a previous frame in order to generate a 3D image for the current frame. - The
metadata analyzing unit 420 extracts, from the metadata, output moment information of a frame being output first and/or output moment information of a frame being output last among frames classified as a shot, and determines whether a current frame being currently decoded by the video data decoding unit 410 is classified as a new shot, based on the extracted output moment information. When the metadata analyzing unit 420 determines that the current frame is classified as a new shot, the metadata analyzing unit 420 controls the switching unit 433 in order to not convert the current frame into a 3D image, such that a motion estimating unit 434 does not estimate the motion of the current frame by using a previous frame stored in a previous frame storing unit 432. This is because motion information of a current frame is extracted by referring to a previous frame in order to convert video data from a 2D image into a 3D image. However, if the current frame and the previous frame are classified as different shots, the current frame and the previous frame do not have sufficient similarity therebetween, and thus a composition of the current frame cannot be predicted by using the previous frame. As shown, the switching unit 433 disconnects the previous frame storing unit 432 to prevent use of the previous frame, though aspects of the invention are not limited thereto. - When the video data is not to be converted into a 3D image (for example, when the video data is a warning sentence, a menu screen, an ending credit, etc.), the metadata includes the shot type information indicating that frames of the video data are to be output as a 2D image. The
metadata analyzing unit 420 determines whether the video data is to be output as a 2D image or a 3D image for each shot using the shot type information and controls the switching unit 433 depending on a result of the determination. Specifically, when the metadata analyzing unit 420 determines, based on the shot type information, that video data classified as a predetermined shot is not to be converted into a 3D image, the metadata analyzing unit 420 controls the switching unit 433 such that the 3D image converting unit 430 does not estimate the motion of the current frame by using the previous frame, by disconnecting the previous frame storing unit 432 from the motion estimating unit 434. When the metadata analyzing unit 420 determines, based on the shot type information, that video data classified as a predetermined shot is to be converted into a 3D image, the metadata analyzing unit 420 controls the switching unit 433 such that the 3D image converting unit 430 converts the current frame into a 3D image by using the previous frame, by connecting the previous frame storing unit 432 and the motion estimating unit 434. - When the video data is classified as a predetermined shot and is to be output as a 3D image, the 3D
image converting unit 430 converts the video data being a 2D image received from the video data decoding unit 410 into the 3D image. The 3D image converting unit 430 includes an image block unit 431, the previous frame storing unit 432, the motion estimating unit 434, a block synthesizing unit 435, a left-/right-view image determining unit 436, and the switching unit 433. The image block unit 431 divides a frame of video data, which is a 2D image, into blocks of a predetermined size. The previous frame storing unit 432 stores a predetermined number of previous frames preceding a current frame. Under the control of the metadata analyzing unit 420, the switching unit 433 enables or disables outputting of previous frames stored in the previous frame storing unit 432 to the motion estimating unit 434. - The
motion estimating unit 434 obtains a per-block motion vector regarding the amount and direction of motion using a block of a current frame and a block of a previous frame. The block synthesizing unit 435 synthesizes blocks selected by using the motion vectors obtained by the motion estimating unit 434 from among predetermined blocks of previous frames in order to generate a new frame. When the motion estimating unit 434 does not use a previous frame due to the control of the switching unit 433 by the metadata analyzing unit 420, the motion estimating unit 434 outputs the current frame received from the image block unit 431 to the block synthesizing unit 435. - The generated new frame or the current frame is input to the left-/right-view
image determining unit 436. The left-/right-view image determining unit 436 determines a left-view image and a right-view image by using the frame received from the block synthesizing unit 435 and a frame received from the video data decoding unit 410. When the metadata analyzing unit 420 controls the switching unit 433 to not convert video data into a 3D image, the left-/right-view image determining unit 436 generates the left-view image and the right-view image that are the same as each other by using the frame with a 2D image received from the block synthesizing unit 435 and the frame with a 2D image received from the video data decoding unit 410. The left-/right-view image determining unit 436 outputs the left-view image and the right-view image to the output unit 440, an external output device, and/or an external terminal (such as a computer, an external display device, a server, etc.). - The
image processing apparatus 400 further includes the output unit 440 to output the left-view image and the right-view image (OUT2) determined by the left-/right-view image determining unit 436 to the screen alternately at least every 1/120 second. As such, by using the shot information included in the metadata, the image processing apparatus 400 according to an embodiment of the present invention does not convert video data corresponding to a shot change point, or video data for which 3D image conversion is not required according to the determination based on the shot information provided in the metadata, thereby reducing unnecessary computation and the complexity of the apparatus 400. While not required, the output image (OUT2) can be received at a receiving unit through which a user sees the screen, such as goggles, through wired and/or wireless protocols. -
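The gating behavior of the switching unit 433 described above can be condensed into a single decision per frame: motion estimation against the previous frame is skipped at shot boundaries, when no previous frame exists, and for shots marked to stay 2D. This is a behavioral sketch, not the patent's implementation; the callback interfaces are assumptions:

```python
def convert_frame(cur, prev, shot_of, estimate_from_previous):
    """Decide how the 3D image converting unit handles one frame.

    `shot_of(frame)` returns (shot_id, want_3d), standing in for the shot
    and shot-type information the metadata analyzing unit extracts;
    `estimate_from_previous(cur, prev)` stands in for block-wise motion
    estimation and block synthesis. Returns a (left-view, right-view) pair.
    """
    shot_id, want_3d = shot_of(cur)
    if prev is None or not want_3d or shot_of(prev)[0] != shot_id:
        # new shot, no previous frame, or a shot marked 2D:
        # both views are the unmodified current frame (identical images)
        return cur, cur
    synthesized = estimate_from_previous(cur, prev)
    return cur, synthesized
```

Used per decoded frame, this reproduces the two cases of FIG. 4: identical left/right views for 2D shots and shot boundaries, and a motion-compensated second view otherwise.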
FIG. 5 is a block diagram of an image processing apparatus 500 according to another embodiment of the present invention. Referring to FIG. 5, the image processing apparatus 500 includes a video data decoding unit 510, a metadata analyzing unit 520, a 3D image converting unit 530, and an output unit 540. However, it is understood that the image processing apparatus 500 need not include the output unit 540 in all embodiments, and/or the output unit 540 may be provided separately from the image processing apparatus 500. Moreover, the image processing apparatus 500 may be a computer, a mobile device, a set-top box, a workstation, etc. The output unit 540 may be a cathode ray tube device, a liquid crystal display device, a plasma display device, an organic light emitting diode display device, etc., and/or be connected to the same, or connected to goggles through wired and/or wireless protocols. Moreover, while not required, each of the units 510 through 540 can be implemented as one or more processors or processing elements on one or more chips or integrated circuits. - When video data that is a 2D image and metadata associated with the video data are recorded in a disc (not shown) in a multiplexed state or separately from each other, upon loading of the disc recorded with the video data and the metadata into the
image processing apparatus 500, the videodata decoding unit 510 and themetadata analyzing unit 520 read the video data (IN4) and the metadata (IN5) from the loaded disc. The metadata may be recorded in a lead-in region, a user data region, and/or a lead-out region of the disc. However, it is understood that aspects of the present invention are not limited to receiving the video data and the metadata from a disc. For example, according to other aspects, theimage processing apparatus 500 may further include a communicating unit (not shown) to communicate with an external server or an external terminal (for example, through a communication network and/or any wired/wireless connection). Theimage processing apparatus 500 may download video data and/or metadata associated therewith from the external server or the external terminal and store the downloaded data in a local storage (not shown). Furthermore, theimage processing apparatus 500 may receive the video data and/or metadata from any external storage device different from the disc (for example, a flash memory). - The video
data decoding unit 510 reads the video data from the disc, the external storage device, the external terminal, or the local storage and decodes the read video data. The metadata analyzing unit 520 reads the metadata associated with the video data from the disc, the external storage device, the external terminal, or the local storage and analyzes the read metadata. When the video data is recorded in the disc, the metadata analyzing unit 520 extracts, from the metadata, a disc ID to identify the disc recorded with the video data and a title ID indicating a title including the video data among a plurality of titles in the disc, and determines which video data the metadata is associated with by using the extracted disc ID and title ID. - The
metadata analyzing unit 520 analyzes the metadata to extract information about frames of the video data classified as a predetermined shot. The metadata analyzing unit 520 determines whether a current frame is video data corresponding to a shot change point (i.e., is classified as a new shot), in order to control a depth map generating unit 531. The metadata analyzing unit 520 determines whether the frames classified as the predetermined shot are to be output as a 2D image or a 3D image by using the shot type information, and controls the depth map generating unit 531 according to a result of the determination. Furthermore, the metadata analyzing unit 520 extracts depth information from the metadata and outputs the depth information to the depth map generating unit 531. - The 3D
image converting unit 530 generates a 3D image for the video data. The 3D image converting unit 530 includes the depth map generating unit 531 and a stereo rendering unit 533. The depth map generating unit 531 generates a depth map for a frame by using the background depth information received from the metadata analyzing unit 520. The background depth information includes coordinate point values of a background included in a current frame, a depth value corresponding to the coordinate point values, and a panel position value that represents a depth value of the screen on which an image is output. The depth map generating unit 531 generates a depth map for the background of the current frame by using the background depth information and outputs the generated depth map to the stereo rendering unit 533. However, when the current frame is to be output as a 2D image, the depth map generating unit 531 outputs the current frame to the stereo rendering unit 533 without generating the depth map for the current frame. - The
stereo rendering unit 533 generates a left-view image and a right-view image by using the video data received from the video data decoding unit 510 and the depth map received from the depth map generating unit 531. Accordingly, the stereo rendering unit 533 generates a 3D-format image including both the generated left-view image and the generated right-view image. When the current frame is to be output as a 2D image, a frame received from the depth map generating unit 531 and a frame received from the video data decoding unit 510 are the same as each other, and thus the left-view image and the right-view image generated by the stereo rendering unit 533 are also the same as each other. The 3D format may be a top-and-down format, a side-by-side format, or an interlaced format. The stereo rendering unit 533 outputs the left-view image and the right-view image to the output unit 540, an external output device, and/or an external terminal (such as a computer, an external display device, a server, etc.). - In the present embodiment, the
image processing apparatus 500 further includes the output unit 540 that operates as an output device. In this case, the output unit 540 sequentially outputs the left-view image and the right-view image received from the stereo rendering unit 533 to the screen. A viewer perceives that an image is reproduced seamlessly when the image is output at a frame rate of at least 60 Hz as viewed from a single eye. Therefore, the output unit 540 outputs images to the screen at a frame rate of at least 120 Hz so that the viewer can perceive that a 3D image is reproduced seamlessly. Accordingly, the output unit 540 sequentially outputs the left-view image and the right-view image (OUT3) included in a frame to the screen at least every 1/120 second. The viewer can have his/her view selectively blocked using goggles that alternate which eye receives the image and/or using polarized light. -
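The conversion pipeline just described — a depth map built from sparse background depth information (coordinate point values, depth values, and a panel position value), followed by left-view/right-view synthesis — can be pictured with the rough sketch below. This is a minimal illustration under stated assumptions, not the patented implementation: the nearest-neighbor fill, the disparity formula, and the `scale` factor are all assumptions made for demonstration.

```python
import numpy as np

def make_depth_map(shape, coord_points, depth_values, panel_position):
    """Sketch of depth-map generation: each pixel takes the depth value of
    its nearest annotated background coordinate point (an assumed scheme);
    with no points, the frame sits flat at the screen (panel) depth."""
    h, w = shape
    if not coord_points:
        return np.full((h, w), float(panel_position))
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.asarray(coord_points)                 # [(y, x), ...]
    vals = np.asarray(depth_values, dtype=float)
    d2 = (ys[..., None] - pts[:, 0]) ** 2 + (xs[..., None] - pts[:, 1]) ** 2
    return vals[np.argmin(d2, axis=-1)]

def render_stereo(frame, depth, panel_position, scale=0.05):
    """Sketch of stereo rendering: shift pixels horizontally in proportion
    to how far their depth lies from the panel (screen) depth."""
    h, w = frame.shape[:2]
    disparity = ((depth - panel_position) * scale).astype(int)
    left = np.zeros_like(frame)
    right = np.zeros_like(frame)
    for y in range(h):
        for x in range(w):
            left[y, np.clip(x + disparity[y, x], 0, w - 1)] = frame[y, x]
            right[y, np.clip(x - disparity[y, x], 0, w - 1)] = frame[y, x]
    return left, right

def pack_side_by_side(left, right):
    """One of the 3D formats mentioned above: halve each view horizontally
    and pack them left|right into a single frame."""
    return np.hstack([left[:, ::2], right[:, ::2]])
```

Note that when the depth map equals the panel position everywhere (the 2D case), the disparity is zero and the synthesized left and right views are identical, mirroring the behavior described for 2D frames above.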
FIG. 6 is a flowchart illustrating an image processing method according to an embodiment of the present invention. Referring to FIG. 6, the image processing apparatus 500 obtains metadata associated with video data in operation 610. For example, when the video data and metadata are provided on a disc and the disc is loaded, the image processing apparatus 500 reads the metadata from the disc in operation 610. If the image processing apparatus 500 cannot obtain the metadata from the disc, the image processing apparatus 500 downloads the metadata associated with the video data from a server through a communication network in operation 620. In this manner, existing video (such as movies on DVD and Blu-ray discs or computer games) can become 3D by merely downloading the corresponding metadata. Alternatively, the disc could contain only the metadata, and when the metadata for a particular video is selected, the video is downloaded from the server. - The
image processing apparatus 500 extracts information about a unit of the video data from the metadata in operation 630. As previously described, the information about a unit may be information about a shot (i.e., shot information) in some aspects of the present invention. The shot information indicates whether a current frame is classified as the same shot as a previous frame, and may include shot type information indicating whether the current frame is to be output as a 2D image or a 3D image. The image processing apparatus 500 then outputs the video data as a 2D image or a 3D image according to the extracted information in operation 640. -
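Operations 610 through 630 amount to a small lookup routine: try the disc first, fall back to a server, then pull the per-shot entries out of the metadata. The sketch below is illustrative only; the field names (`first_frame`, `last_frame`, `output_as_3d`), the dictionary-shaped metadata, and the use of a disc ID and title ID as the download key (echoing the identification information of claim 9) are assumptions, not the format defined by the embodiment.

```python
from dataclasses import dataclass

@dataclass
class ShotInfo:
    first_frame: int     # output moment information of the shot's first frame
    last_frame: int      # output moment information of the shot's last frame
    output_as_3d: bool   # shot type information: output as 2D or 3D

def obtain_metadata(disc_metadata, server, disc_id, title_id):
    """Operations 610/620: use the metadata read from the disc if present;
    otherwise fetch it from a server keyed by disc ID and title ID
    (the dict lookup stands in for a network download)."""
    if disc_metadata is not None:
        return disc_metadata
    return server[(disc_id, title_id)]

def extract_shot_info(metadata):
    """Operation 630: extract the shot classification from the metadata."""
    return [ShotInfo(**entry) for entry in metadata["shots"]]
```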
FIG. 7 is a flowchart illustrating operation 640 of FIG. 6 in detail. Referring to FIG. 7, the image processing apparatus 500 determines whether a current frame is classified as a new shot in operation 710. When the image processing apparatus 500 determines that the current frame is classified as a new shot, the image processing apparatus 500 outputs the current frame as a 2D image in operation 720. - The
image processing apparatus 500 determines, by using the shot type information, whether the video data classified as the new shot is to be output as a 2D image or a 3D image in operation 730. When the shot type information regarding the new shot indicates that the video data classified as the new shot is to be output as a 3D image, the image processing apparatus 500 converts the frames classified as the new shot into a 3D image and outputs the converted 3D image in operation 740. Specifically, when the image processing apparatus 500 generates a 3D image by using composition information as in FIG. 5, the image processing apparatus 500 extracts background depth information to be applied to a current frame classified as a new shot from the metadata and generates a depth map for the current frame by using the background depth information. - When the shot type information regarding the new shot indicates that the video data classified as the new shot is to be output as a 2D image (operation 730), the
image processing apparatus 500 outputs the video data classified as the new shot as a 2D image in operation 750. The image processing apparatus 500 then determines whether reproduction of the video data has ended in operation 760. If not, the image processing apparatus 500 repeats operation 710. - In this way, according to aspects of the present invention, by using shot information included in metadata, video data can be output as a 2D image at a shot change point. Moreover, according to an embodiment of the present invention, whether to output video data as a 2D image or a 3D image is determined for each shot, and the video data is output according to a result of the determination, thereby reducing the amount of computation that would otherwise increase due to conversion of the entire video data into a 3D image.
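The per-shot decision loop of FIG. 7 (operations 710 through 760) reduces to: at each shot boundary, output the boundary frame as 2D; then, depending on the shot type information, output the remaining frames of the shot converted to 3D or left as 2D. The following is a schematic sketch under assumed helpers — `shot_starts`, `shot_is_3d`, and `convert_to_3d` are placeholders, not elements named in the embodiment.

```python
def process_video(frames, shot_starts, shot_is_3d, convert_to_3d):
    """Schematic of FIG. 7: frames is a sequence, shot_starts holds the
    indices where a new shot begins (background composition not predictable
    from the previous frame), and shot_is_3d maps each shot-start index to
    its shot type information (True = output the shot as 3D)."""
    out = []
    current_3d = False
    for i, frame in enumerate(frames):
        if i in shot_starts:                          # operation 710: new shot?
            out.append(("2D", frame))                 # operation 720: boundary frame as 2D
            current_3d = shot_is_3d[i]                # operation 730: read shot type
        elif current_3d:
            out.append(("3D", convert_to_3d(frame)))  # operation 740: convert to 3D
        else:
            out.append(("2D", frame))                 # operation 750: keep as 2D
    return out                                        # operation 760: loop until data ends
```

This makes the computational saving visible: frames in 2D-typed shots bypass the conversion step entirely.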
- While not restricted thereto, aspects of the present invention can also be embodied as computer-readable code on a computer-readable recording medium. The computer-readable recording medium is any data storage device that can store data that can be thereafter read by a computer system. Examples of the computer-readable recording medium include read-only memory (ROM), random-access memory (RAM), CD-ROMs, magnetic tapes, floppy disks, and optical data storage devices. The computer-readable recording medium can also be distributed over network-coupled computer systems so that the computer-readable code is stored and executed in a distributed fashion. Aspects of the present invention may also be realized as a data signal embodied in a carrier wave and comprising a program readable by a computer and transmittable over the Internet. Moreover, while not required in all aspects, one or more units of the
image generating apparatus 100 and the image processing apparatus 500 can include a processor or microprocessor executing a computer program stored in a computer-readable medium. - Although a few embodiments of the present invention have been shown and described, it would be appreciated by those skilled in the art that changes may be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the claims and their equivalents.
Claims (29)
1. An image processing method to output video data having two-dimensional (2D) images as the 2D images or three-dimensional (3D) images, the image processing method comprising:
extracting, by an image processing apparatus, information about the video data from metadata associated with the video data; and
outputting, by the image processing apparatus, the video data as selectable between the 2D image and the 3D image according to the extracted information about the video data,
wherein the information about the video data includes information to classify frames of the video data into predetermined units.
2. The image processing method as claimed in claim 1 , wherein the information to classify the frames of the video data into the predetermined units is shot information to classify, as a shot, a group of frames in which a background composition of a current frame is predictable by using a previous frame preceding the current frame in the group of frames.
3. The image processing method as claimed in claim 2 , wherein the shot information comprises output moment information of a frame being output first from among the group of frames classified as the shot and/or output moment information of a frame being output last from among the group of frames classified as the shot.
4. The image processing method as claimed in claim 2 , wherein:
the metadata comprises shot type information indicating whether the group of frames classified as the shot are to be output as the 2D image or the 3D image; and
the outputting of the video data comprises outputting the group of frames classified as the shot as the 2D image or the 3D image according to the shot type information.
5. The image processing method as claimed in claim 2 , wherein the outputting of the video data comprises:
according to the metadata, determining that a current frame is classified as a new shot as compared to a previous frame preceding the current frame when a background composition of the current frame is not predictable by using the previous frame;
when the current frame is classified as the new shot, outputting the current frame as the 2D image; and
converting other frames of a group of frames classified as the new shot into the 3D image and outputting the converted 3D image.
6. The image processing method as claimed in claim 2 , wherein the outputting of the video data comprises:
according to the metadata, determining that a current frame is classified as a new shot as compared to a previous frame preceding the current frame when a background composition of the current frame is not predictable by using the previous frame;
when the current frame is classified as the new shot, extracting background depth information to be applied to the current frame classified as the new shot from the metadata; and
when the current frame is classified as the new shot, generating a depth map for the current frame by using the background depth information.
7. The image processing method as claimed in claim 6 , wherein:
the background depth information comprises coordinate point values of a background of the current frame, depth values respectively corresponding to the coordinate point values, and a panel position value; and
the generating of the depth map for the current frame comprises generating the depth map for the background of the current frame by using the coordinate point values, the depth values, and the panel position value that represents a depth value of an output screen.
8. The image processing method as claimed in claim 1 , further comprising reading the metadata from a disc recorded with the video data or downloading the metadata from a server through a communication network.
9. The image processing method as claimed in claim 1 , wherein the metadata comprises identification information to identify the video data, and the identification information comprises a disc identifier (ID) to identify a disc recorded with the video data and a title ID to indicate a title including the video data from among a plurality of titles recorded in the disc identified by the disc ID.
10. An image generating method comprising:
receiving, by an image generating apparatus, video data as two-dimensional (2D) images; and
generating, by the image generating apparatus, metadata associated with the video data, the metadata comprising information to classify frames of the video data as predetermined units and used to determine whether each of the classified frames is to be converted to a three-dimensional (3D) image,
wherein the information to classify the frames of the video data as the predetermined units comprises shot information to classify a group of frames, as a shot, in which a background composition of a current frame is predictable by using a previous frame preceding the current frame in the group of frames.
11. The image generating method as claimed in claim 10 , wherein the shot information comprises output moment information of a frame being output first from among the group of frames classified as the shot, output moment information of a frame being output last from among the group of frames classified as the shot, and/or shot type information indicating whether the group of frames classified as the shot are to be output as the 2D image or the 3D image.
12. The image generating method as claimed in claim 10 , wherein:
the metadata further comprises background depth information for the group of frames classified as the predetermined shot; and
the background depth information comprises coordinate point values of a background of the group of frames classified as the predetermined shot, depth values corresponding to the coordinate point values, and a panel position value that represents a depth value of an output screen.
13. An image processing apparatus to output video data having two-dimensional (2D) images as the 2D images or three-dimensional (3D) images, the image processing apparatus comprising:
a metadata analyzing unit to determine whether the video data is to be output as the 2D image or the 3D image by using metadata associated with the video data;
a 3D image converting unit to convert the video data into the 3D image when the metadata analyzing unit determines that the video data is to be output as the 3D image; and
an output unit to output the video data as the 2D image or the 3D image according to the determination of the metadata analyzing unit,
wherein the metadata includes information to classify frames of the video data into predetermined units.
14. The image processing apparatus as claimed in claim 13 , wherein the information to classify the frames of the video data into the predetermined units comprises shot information to classify, as a shot, a group of frames in which a background composition of a current frame is predictable by using a previous frame preceding the current frame in the group of frames.
15. The image processing apparatus as claimed in claim 14 , wherein the shot information comprises output moment information of a frame being output first from among the group of frames classified as the shot and/or output moment information of a frame being output last from among the group of frames classified as the shot.
16. The image processing apparatus as claimed in claim 14 , wherein:
the metadata comprises shot type information indicating whether the group of frames classified as the shot are to be output as the 2D image or the 3D image; and
the metadata analyzing unit determines whether the group of frames classified as the shot are to be output as the 2D image or the 3D image according to the shot type information.
17. The image processing apparatus as claimed in claim 14 , wherein the metadata analyzing unit determines, according to the metadata, that a current frame is classified as a new shot as compared to a previous frame preceding the current frame when a background composition of the current frame is not predictable by using the previous frame, determines that the current frame is to be output as the 2D image when the current frame is classified as the new shot, and determines that the current frame is to be output as the 3D image when the current frame is not classified as the new shot.
18. The image processing apparatus as claimed in claim 14 , wherein:
the metadata analyzing unit determines, according to the metadata, that a current frame is classified as a new shot as compared to a previous frame preceding the current frame when a background composition of the current frame is not predictable by using the previous frame; and
when the current frame is classified as the new shot, the 3D image converting unit extracts background depth information to be applied to the current frame classified as the new shot from the metadata and generates a depth map for the current frame by using the background depth information.
19. The image processing apparatus as claimed in claim 18 , wherein:
the background depth information comprises coordinate point values of a background of the current frame, depth values respectively corresponding to the coordinate point values, and a panel position value that represents a depth value of an output screen; and
the 3D image converting unit generates the depth map for a background of the current frame by using the coordinate point values of the background of the current frame, the depth values respectively corresponding to the coordinate point values, and the panel position value.
20. The image processing apparatus as claimed in claim 13 , wherein the metadata is read from a disc recorded with the video data or downloaded from a server through a communication network.
21. The image processing apparatus as claimed in claim 13 , wherein the metadata comprises identification information to identify the video data, and the identification information comprises a disc identifier (ID) to identify a disc recorded with the video data and a title ID to indicate a title including the video data from among a plurality of titles recorded in the disc identified by the disc ID.
22. An image generating apparatus comprising:
a video data encoding unit to encode video data as two-dimensional (2D) images;
a metadata generating unit to generate metadata associated with the video data, the metadata comprising information to classify frames of the video data as predetermined units and used to determine whether each of the classified frames is to be converted to a three-dimensional (3D) image; and
a metadata encoding unit to encode the metadata,
wherein the information to classify the frames of the video data as the predetermined units comprises shot information to classify, as a shot, a group of frames in which a background composition of a current frame is predictable by using a previous frame preceding the current frame in the group of frames.
23. The image generating apparatus as claimed in claim 22 , wherein the shot information comprises output moment information of a frame being output first from among the group of frames classified as the shot, output moment information of a frame being output last from among the group of frames classified as the shot, and/or shot type information indicating whether the group of frames classified as the shot are to be output as the 2D image or the 3D image.
24. The image generating apparatus as claimed in claim 22 , wherein:
the metadata further comprises background depth information for the group of frames classified as the predetermined shot; and
the background depth information comprises coordinate point values of a background of the group of frames classified as the predetermined shot, depth values corresponding to the coordinate point values, and a panel position value that represents a depth value of an output screen.
25. A computer-readable information storage medium comprising:
video data recorded as two-dimensional (2D) images; and
metadata associated with the video data, the metadata comprising information used by an image processing apparatus to classify frames of the video data as predetermined units and used by the image processing apparatus to determine whether each of the classified frames is to be converted by the image processing apparatus to a three-dimensional (3D) image,
wherein the information to classify the frames of the video data as the predetermined units comprises shot information used by the image processing apparatus to classify, as a shot, a group of frames in which a background composition of a current frame is predictable by using a previous frame preceding the current frame in the group of frames.
26. The computer-readable information storage medium as claimed in claim 25 , wherein the shot information comprises output moment information of a frame being output first from among the group of frames classified as the shot, output moment information of a frame being output last from among the group of frames classified as the shot, and/or shot type information indicating whether the group of frames classified as the shot are to be output as the 2D image or the 3D image.
27. The computer-readable information storage medium as claimed in claim 25 , wherein:
the metadata further comprises background depth information for the group of frames classified as the predetermined shot; and
the background depth information comprises coordinate point values of a background of the group of frames classified as the predetermined shot, depth values corresponding to the coordinate point values, and a panel position value that represents to the image processing apparatus a depth value of an output screen.
28. A computer-readable information storage medium having recorded thereon a program to execute the image processing method of claim 1 and implemented by the image processing apparatus.
29. A computer-readable information storage medium having recorded thereon a program to execute the image generating method of claim 10 and implemented by the image generating apparatus.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/489,758 US20090317061A1 (en) | 2008-06-24 | 2009-06-23 | Image generating method and apparatus and image processing method and apparatus |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US7518408P | 2008-06-24 | 2008-06-24 | |
KR1020080091269A KR20100002032A (en) | 2008-06-24 | 2008-09-17 | Image generating method, image processing method, and apparatus thereof |
KR10-2008-0091269 | 2008-09-17 | ||
US12/489,758 US20090317061A1 (en) | 2008-06-24 | 2009-06-23 | Image generating method and apparatus and image processing method and apparatus |
Publications (1)
Publication Number | Publication Date |
---|---|
US20090317061A1 true US20090317061A1 (en) | 2009-12-24 |
Family
ID=41812276
Family Applications (6)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/479,978 Abandoned US20090315884A1 (en) | 2008-06-24 | 2009-06-08 | Method and apparatus for outputting and displaying image data |
US12/489,758 Abandoned US20090317061A1 (en) | 2008-06-24 | 2009-06-23 | Image generating method and apparatus and image processing method and apparatus |
US12/489,726 Abandoned US20090315979A1 (en) | 2008-06-24 | 2009-06-23 | Method and apparatus for processing 3d video image |
US12/490,589 Abandoned US20090315977A1 (en) | 2008-06-24 | 2009-06-24 | Method and apparatus for processing three dimensional video data |
US12/556,699 Expired - Fee Related US8488869B2 (en) | 2008-06-24 | 2009-09-10 | Image processing method and apparatus |
US12/564,201 Abandoned US20100103168A1 (en) | 2008-06-24 | 2009-09-22 | Methods and apparatuses for processing and displaying image |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/479,978 Abandoned US20090315884A1 (en) | 2008-06-24 | 2009-06-08 | Method and apparatus for outputting and displaying image data |
Family Applications After (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/489,726 Abandoned US20090315979A1 (en) | 2008-06-24 | 2009-06-23 | Method and apparatus for processing 3d video image |
US12/490,589 Abandoned US20090315977A1 (en) | 2008-06-24 | 2009-06-24 | Method and apparatus for processing three dimensional video data |
US12/556,699 Expired - Fee Related US8488869B2 (en) | 2008-06-24 | 2009-09-10 | Image processing method and apparatus |
US12/564,201 Abandoned US20100103168A1 (en) | 2008-06-24 | 2009-09-22 | Methods and apparatuses for processing and displaying image |
Country Status (7)
Country | Link |
---|---|
US (6) | US20090315884A1 (en) |
EP (4) | EP2292019A4 (en) |
JP (4) | JP2011525743A (en) |
KR (9) | KR20100002032A (en) |
CN (4) | CN102077600A (en) |
MY (1) | MY159672A (en) |
WO (4) | WO2009157668A2 (en) |
Cited By (63)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100177969A1 (en) * | 2009-01-13 | 2010-07-15 | Futurewei Technologies, Inc. | Method and System for Image Processing to Classify an Object in an Image |
US20110063410A1 (en) * | 2009-09-11 | 2011-03-17 | Disney Enterprises, Inc. | System and method for three-dimensional video capture workflow for dynamic rendering |
US20110242099A1 (en) * | 2010-03-30 | 2011-10-06 | Sony Corporation | Image processing apparatus, image processing method, and program |
US20110254917A1 (en) * | 2010-04-16 | 2011-10-20 | General Instrument Corporation | Method and apparatus for distribution of 3d television program materials |
US20120050481A1 (en) * | 2010-08-27 | 2012-03-01 | Xuemin Chen | Method and system for creating a 3d video from a monoscopic 2d video and corresponding depth information |
US20120075290A1 (en) * | 2010-09-29 | 2012-03-29 | Sony Corporation | Image processing apparatus, image processing method, and computer program |
US20120081514A1 (en) * | 2010-10-01 | 2012-04-05 | Minoru Hasegawa | Reproducing apparatus and reproducing method |
US20120092327A1 (en) * | 2010-10-14 | 2012-04-19 | Sony Corporation | Overlaying graphical assets onto viewing plane of 3d glasses per metadata accompanying 3d image |
US20120120204A1 (en) * | 2010-10-01 | 2012-05-17 | Chiyo Ohno | Receiver |
US20120120200A1 (en) * | 2009-07-27 | 2012-05-17 | Koninklijke Philips Electronics N.V. | Combining 3d video and auxiliary data |
US20120176371A1 (en) * | 2009-08-31 | 2012-07-12 | Takafumi Morifuji | Stereoscopic image display system, disparity conversion device, disparity conversion method, and program |
US20120188335A1 (en) * | 2011-01-26 | 2012-07-26 | Samsung Electronics Co., Ltd. | Apparatus and method for processing 3d video |
US20130021445A1 (en) * | 2010-04-12 | 2013-01-24 | Alexandre Cossette-Pacheco | Camera Projection Meshes |
CN103262555A (en) * | 2010-12-16 | 2013-08-21 | Jvc建伍株式会社 | Image processing device |
US20130232398A1 (en) * | 2012-03-01 | 2013-09-05 | Sony Pictures Technologies Inc. | Asset management during production of media |
EP2942949A4 (en) * | 2013-02-20 | 2016-06-01 | Mooovr Inc | System for providing complex-dimensional content service using complex 2d-3d content file, method for providing said service, and complex-dimensional content file therefor |
US9712759B2 (en) | 2008-05-20 | 2017-07-18 | Fotonation Cayman Limited | Systems and methods for generating depth maps using a camera arrays incorporating monochrome and color cameras |
US9749568B2 (en) | 2012-11-13 | 2017-08-29 | Fotonation Cayman Limited | Systems and methods for array camera focal plane control |
US9754422B2 (en) | 2012-02-21 | 2017-09-05 | Fotonation Cayman Limited | Systems and method for performing depth based image editing |
US9774831B2 (en) | 2013-02-24 | 2017-09-26 | Fotonation Cayman Limited | Thin form factor computational array cameras and modular array cameras |
US9794476B2 (en) | 2011-09-19 | 2017-10-17 | Fotonation Cayman Limited | Systems and methods for controlling aliasing in images captured by an array camera for use in super resolution processing using pixel apertures |
US9807382B2 (en) | 2012-06-28 | 2017-10-31 | Fotonation Cayman Limited | Systems and methods for detecting defective camera arrays and optic arrays |
US9811753B2 (en) | 2011-09-28 | 2017-11-07 | Fotonation Cayman Limited | Systems and methods for encoding light field image files |
US9813616B2 (en) | 2012-08-23 | 2017-11-07 | Fotonation Cayman Limited | Feature based high resolution motion estimation from low resolution images captured using an array source |
US9858673B2 (en) | 2012-08-21 | 2018-01-02 | Fotonation Cayman Limited | Systems and methods for estimating depth and visibility from a reference viewpoint for pixels in a set of images captured from different viewpoints |
US9888194B2 (en) | 2013-03-13 | 2018-02-06 | Fotonation Cayman Limited | Array camera architecture implementing quantum film image sensors |
US9898856B2 (en) | 2013-09-27 | 2018-02-20 | Fotonation Cayman Limited | Systems and methods for depth-assisted perspective distortion correction |
US9917998B2 (en) | 2013-03-08 | 2018-03-13 | Fotonation Cayman Limited | Systems and methods for measuring scene information while capturing images using array cameras |
US9986224B2 (en) | 2013-03-10 | 2018-05-29 | Fotonation Cayman Limited | System and methods for calibration of an array camera |
US10009538B2 (en) | 2013-02-21 | 2018-06-26 | Fotonation Cayman Limited | Systems and methods for generating compressed light field representation data using captured light fields, array geometry, and parallax information |
US10089740B2 (en) | 2014-03-07 | 2018-10-02 | Fotonation Limited | System and methods for depth regularization and semiautomatic interactive matting using RGB-D images |
US10091405B2 (en) | 2013-03-14 | 2018-10-02 | Fotonation Cayman Limited | Systems and methods for reducing motion blur in images or video in ultra low light with array cameras |
US10122993B2 (en) | 2013-03-15 | 2018-11-06 | Fotonation Limited | Autofocus system for a conventional camera that uses depth information from an array camera |
US10119808B2 (en) | 2013-11-18 | 2018-11-06 | Fotonation Limited | Systems and methods for estimating depth from projected texture using camera arrays |
US10127682B2 (en) | 2013-03-13 | 2018-11-13 | Fotonation Limited | System and methods for calibration of an array camera |
US10142560B2 (en) | 2008-05-20 | 2018-11-27 | Fotonation Limited | Capturing and processing of images including occlusions focused on an image sensor by a lens stack array |
US10182216B2 (en) | 2013-03-15 | 2019-01-15 | Fotonation Limited | Extended color processing on pelican array cameras |
US10218889B2 (en) | 2011-05-11 | 2019-02-26 | Fotonation Limited | Systems and methods for transmitting and receiving array camera image data |
US10231033B1 (en) * | 2014-09-30 | 2019-03-12 | Apple Inc. | Synchronizing out-of-band content with a media stream |
US10250871B2 (en) | 2014-09-29 | 2019-04-02 | Fotonation Limited | Systems and methods for dynamic calibration of array cameras |
US10261219B2 (en) | 2012-06-30 | 2019-04-16 | Fotonation Limited | Systems and methods for manufacturing camera modules using active alignment of lens stack arrays and sensors |
US10306120B2 (en) | 2009-11-20 | 2019-05-28 | Fotonation Limited | Capturing and processing of images captured by camera arrays incorporating cameras with telephoto and conventional lenses to generate depth maps |
US10366472B2 (en) | 2010-12-14 | 2019-07-30 | Fotonation Limited | Systems and methods for synthesizing high resolution images using images captured by an array of independently controllable imagers |
US10412314B2 (en) | 2013-03-14 | 2019-09-10 | Fotonation Limited | Systems and methods for photometric normalization in array cameras |
US10455168B2 (en) | 2010-05-12 | 2019-10-22 | Fotonation Limited | Imager array interfaces |
US10455218B2 (en) | 2013-03-15 | 2019-10-22 | Fotonation Limited | Systems and methods for estimating depth using stereo array cameras |
US10482618B2 (en) | 2017-08-21 | 2019-11-19 | Fotonation Limited | Systems and methods for hybrid depth regularization |
US10542208B2 (en) | 2013-03-15 | 2020-01-21 | Fotonation Limited | Systems and methods for synthesizing high resolution images using image deconvolution based on motion and depth information |
US10545569B2 (en) | 2014-08-06 | 2020-01-28 | Apple Inc. | Low power mode |
US10708492B2 (en) | 2013-11-26 | 2020-07-07 | Fotonation Limited | Array camera configurations incorporating constituent array cameras and constituent cameras |
US10708391B1 (en) | 2014-09-30 | 2020-07-07 | Apple Inc. | Delivery of apps in a media stream |
US10817307B1 (en) | 2017-12-20 | 2020-10-27 | Apple Inc. | API behavior modification based on power source health |
US11088567B2 (en) | 2014-08-26 | 2021-08-10 | Apple Inc. | Brownout avoidance |
US11270110B2 (en) | 2019-09-17 | 2022-03-08 | Boston Polarimetrics, Inc. | Systems and methods for surface modeling using polarization cues |
US11290658B1 (en) | 2021-04-15 | 2022-03-29 | Boston Polarimetrics, Inc. | Systems and methods for camera exposure control |
US11302012B2 (en) | 2019-11-30 | 2022-04-12 | Boston Polarimetrics, Inc. | Systems and methods for transparent object segmentation using polarization cues |
US11363133B1 (en) | 2017-12-20 | 2022-06-14 | Apple Inc. | Battery health-based power management |
US11525906B2 (en) | 2019-10-07 | 2022-12-13 | Intrinsic Innovation Llc | Systems and methods for augmentation of sensor systems and imaging systems with polarization |
US11580667B2 (en) | 2020-01-29 | 2023-02-14 | Intrinsic Innovation Llc | Systems and methods for characterizing object pose detection and measurement systems |
US11689813B2 (en) | 2021-07-01 | 2023-06-27 | Intrinsic Innovation Llc | Systems and methods for high dynamic range imaging using crossed polarizers |
US11792538B2 (en) | 2008-05-20 | 2023-10-17 | Adeia Imaging Llc | Capturing and processing of images including occlusions focused on an image sensor by a lens stack array |
US11797863B2 (en) | 2020-01-30 | 2023-10-24 | Intrinsic Innovation Llc | Systems and methods for synthesizing data for training statistical models on different imaging modalities including polarized images |
US11954886B2 (en) | 2021-04-15 | 2024-04-09 | Intrinsic Innovation Llc | Systems and methods for six-degree of freedom pose estimation of deformable objects |
Families Citing this family (143)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8520979B2 (en) | 2008-08-19 | 2013-08-27 | Digimarc Corporation | Methods and systems for content processing |
US20100045779A1 (en) * | 2008-08-20 | 2010-02-25 | Samsung Electronics Co., Ltd. | Three-dimensional video apparatus and method of providing on screen display applied thereto |
JP2010088092A (en) * | 2008-09-02 | 2010-04-15 | Panasonic Corp | Three-dimensional video transmission system, video display device and video output device |
JP2010062695A (en) * | 2008-09-02 | 2010-03-18 | Sony Corp | Image processing apparatus, image processing method, and program |
KR101258106B1 (en) * | 2008-09-07 | 2013-04-25 | Dolby Laboratories Licensing Corporation | Conversion of interleaved data sets, including chroma correction and/or correction of checkerboard interleaved formatted 3d images
JP5859309B2 (en) * | 2008-11-24 | 2016-02-10 | Koninklijke Philips N.V. | Combination of 3D video and auxiliary data
CN104065950B (en) * | 2008-12-02 | 2016-06-15 | LG Electronics Inc. | 3D caption display method and apparatus, and method and apparatus for transmitting 3D captions
CN102257825B (en) * | 2008-12-19 | 2016-11-16 | Koninklijke Philips Electronics N.V. | Method and apparatus for overlaying 3D graphics on 3D video
KR20100112940A (en) * | 2009-04-10 | 2010-10-20 | LG Electronics Inc. | A method for processing data and a receiving system
TW201119353A (en) | 2009-06-24 | 2011-06-01 | Dolby Lab Licensing Corp | Perceptual depth placement for 3D objects |
EP2446636A1 (en) * | 2009-06-24 | 2012-05-02 | Dolby Laboratories Licensing Corporation | Method for embedding subtitles and/or graphic overlays in a 3d or multi-view video data |
US9479766B2 (en) * | 2009-07-10 | 2016-10-25 | Dolby Laboratories Licensing Corporation | Modifying images for a 3-dimensional display mode |
JP2011029849A (en) * | 2009-07-23 | 2011-02-10 | Sony Corp | Receiving device, communication system, method of combining caption with stereoscopic image, program, and data structure |
KR101056281B1 (en) | 2009-08-03 | 2011-08-11 | Samsung Mobile Display Co., Ltd. | Organic electroluminescent display and driving method thereof
KR20110013693A (en) * | 2009-08-03 | 2011-02-10 | Samsung Mobile Display Co., Ltd. | Organic light emitting display and driving method thereof
US20110063298A1 (en) * | 2009-09-15 | 2011-03-17 | Samir Hulyalkar | Method and system for rendering 3d graphics based on 3d display capabilities |
US8988495B2 (en) * | 2009-11-03 | 2015-03-24 | LG Electronics Inc. | Image display apparatus, method for controlling the image display apparatus, and image display system
JP2011109398A (en) * | 2009-11-17 | 2011-06-02 | Sony Corp | Image transmission method, image receiving method, image transmission device, image receiving device, and image transmission system |
JP5502436B2 (en) * | 2009-11-27 | 2014-05-28 | Panasonic Corporation | Video signal processing device
WO2011072016A1 (en) * | 2009-12-08 | 2011-06-16 | Broadcom Corporation | Method and system for handling multiple 3-d video formats |
TWI491243B (en) * | 2009-12-21 | 2015-07-01 | Chunghwa Picture Tubes Ltd | Image processing method |
JP2011139261A (en) * | 2009-12-28 | 2011-07-14 | Sony Corp | Image processing device, image processing method, and program |
WO2011087470A1 (en) * | 2010-01-13 | 2011-07-21 | Thomson Licensing | System and method for combining 3d text with 3d content |
WO2011086653A1 (en) * | 2010-01-14 | 2011-07-21 | Panasonic Corporation | Video output device and video display system
WO2011091309A1 (en) * | 2010-01-21 | 2011-07-28 | General Instrument Corporation | Stereoscopic video graphics overlay |
EP2534844A2 (en) * | 2010-02-09 | 2012-12-19 | Koninklijke Philips Electronics N.V. | 3d video format detection |
US9025933B2 (en) * | 2010-02-12 | 2015-05-05 | Sony Corporation | Information processing device, information processing method, playback device, playback method, program and recording medium |
JP2011166666A (en) * | 2010-02-15 | 2011-08-25 | Sony Corp | Image processor, image processing method, and program |
KR101445777B1 (en) * | 2010-02-19 | 2014-11-04 | Samsung Electronics Co., Ltd. | Reproducing apparatus and control method thereof
WO2011102818A1 (en) * | 2010-02-19 | 2011-08-25 | Thomson Licensing | Stereo logo insertion |
MX2012009888A (en) * | 2010-02-24 | 2012-09-12 | Thomson Licensing | Subtitling for stereoscopic images. |
KR20110098420A (en) * | 2010-02-26 | 2011-09-01 | Samsung Electronics Co., Ltd. | Display device and driving method thereof
US20110216083A1 (en) * | 2010-03-03 | 2011-09-08 | Vizio, Inc. | System, method and apparatus for controlling brightness of a device |
MX2012010268A (en) * | 2010-03-05 | 2012-10-05 | Gen Instrument Corp | Method and apparatus for converting two-dimensional video content for insertion into three-dimensional video content. |
US9426441B2 (en) * | 2010-03-08 | 2016-08-23 | Dolby Laboratories Licensing Corporation | Methods for carrying and transmitting 3D z-norm attributes in digital TV closed captioning |
US8830300B2 (en) * | 2010-03-11 | 2014-09-09 | Dolby Laboratories Licensing Corporation | Multiscalar stereo video format conversion |
US8730301B2 (en) * | 2010-03-12 | 2014-05-20 | Sony Corporation | Service linkage to caption disparity data transport |
JP2011199388A (en) * | 2010-03-17 | 2011-10-06 | Sony Corp | Reproducing device, reproduction control method, and program |
JP2011217361A (en) * | 2010-03-18 | 2011-10-27 | Panasonic Corp | Device and method of reproducing stereoscopic image and integrated circuit |
KR20110107151A (en) * | 2010-03-24 | 2011-09-30 | Samsung Electronics Co., Ltd. | Method and apparatus for processing 3d image in mobile terminal
WO2011118215A1 (en) * | 2010-03-24 | 2011-09-29 | Panasonic Corporation | Video processing device
CN102939748B (en) | 2010-04-14 | 2015-12-16 | Samsung Electronics Co., Ltd. | Method and apparatus for generating a digital broadcast bitstream carrying subtitles, and method and apparatus for receiving a digital broadcast bitstream carrying subtitles
US20110255003A1 (en) * | 2010-04-16 | 2011-10-20 | The Directv Group, Inc. | Method and apparatus for presenting on-screen graphics in a frame-compatible 3d format |
KR101697184B1 (en) | 2010-04-20 | 2017-01-17 | Samsung Electronics Co., Ltd. | Apparatus and method for generating mesh, and apparatus and method for processing image
US9414042B2 (en) * | 2010-05-05 | 2016-08-09 | Google Technology Holdings LLC | Program guide graphics and video in window for 3DTV |
KR20120119927A (en) * | 2010-05-11 | 2012-11-01 | Samsung Electronics Co., Ltd. | 3-dimension glasses and system for wireless power transmission
KR101082234B1 (en) * | 2010-05-13 | 2011-11-09 | Samsung Mobile Display Co., Ltd. | Organic light emitting display device and driving method thereof
JP2011249895A (en) * | 2010-05-24 | 2011-12-08 | Panasonic Corp | Signal processing system and signal processing apparatus |
US20110292038A1 (en) * | 2010-05-27 | 2011-12-01 | Sony Computer Entertainment America, LLC | 3d video conversion |
KR101699875B1 (en) * | 2010-06-03 | 2017-01-25 | LG Display Co., Ltd. | Apparatus and method for three-dimension liquid crystal display device
US9030536B2 (en) | 2010-06-04 | 2015-05-12 | At&T Intellectual Property I, Lp | Apparatus and method for presenting media content |
JP5682149B2 (en) * | 2010-06-10 | 2015-03-11 | Sony Corporation | Stereo image data transmitting apparatus, stereo image data transmitting method, stereo image data receiving apparatus, and stereo image data receiving method
US8402502B2 (en) | 2010-06-16 | 2013-03-19 | At&T Intellectual Property I, L.P. | Method and apparatus for presenting media content |
US9053562B1 (en) | 2010-06-24 | 2015-06-09 | Gregory S. Rabin | Two dimensional to three dimensional moving image converter |
US8593574B2 (en) | 2010-06-30 | 2013-11-26 | At&T Intellectual Property I, L.P. | Apparatus and method for providing dimensional media content based on detected display capability |
US9787974B2 (en) | 2010-06-30 | 2017-10-10 | At&T Intellectual Property I, L.P. | Method and apparatus for delivering media content |
US8640182B2 (en) | 2010-06-30 | 2014-01-28 | At&T Intellectual Property I, L.P. | Method for detecting a viewing apparatus |
US8918831B2 (en) | 2010-07-06 | 2014-12-23 | At&T Intellectual Property I, Lp | Method and apparatus for managing a presentation of media content |
KR101645404B1 (en) | 2010-07-06 | 2016-08-04 | Samsung Display Co., Ltd. | Organic Light Emitting Display
US9049426B2 (en) | 2010-07-07 | 2015-06-02 | At&T Intellectual Property I, Lp | Apparatus and method for distributing three dimensional media content |
JP5609336B2 (en) * | 2010-07-07 | 2014-10-22 | Sony Corporation | Image data transmitting apparatus, image data transmitting method, image data receiving apparatus, image data receiving method, and image data transmitting / receiving system
KR101279660B1 (en) * | 2010-07-07 | 2013-06-27 | LG Display Co., Ltd. | 3d image display device and driving method thereof
US8848038B2 (en) * | 2010-07-09 | 2014-09-30 | Lg Electronics Inc. | Method and device for converting 3D images |
US9232274B2 (en) | 2010-07-20 | 2016-01-05 | At&T Intellectual Property I, L.P. | Apparatus for adapting a presentation of media content to a requesting device |
US9032470B2 (en) | 2010-07-20 | 2015-05-12 | At&T Intellectual Property I, Lp | Apparatus for adapting a presentation of media content according to a position of a viewing apparatus |
US9560406B2 (en) | 2010-07-20 | 2017-01-31 | At&T Intellectual Property I, L.P. | Method and apparatus for adapting a presentation of media content |
IT1401367B1 (en) * | 2010-07-28 | 2013-07-18 | Sisvel Technology Srl | Method for combining reference images with a three-dimensional content
US8994716B2 (en) | 2010-08-02 | 2015-03-31 | At&T Intellectual Property I, Lp | Apparatus and method for providing media content |
JPWO2012017687A1 (en) * | 2010-08-05 | 2013-10-03 | Panasonic Corporation | Video playback device
KR101674688B1 (en) * | 2010-08-12 | 2016-11-09 | LG Electronics Inc. | A method for displaying a stereoscopic image and stereoscopic image playing device
JP2012044625A (en) * | 2010-08-23 | 2012-03-01 | Sony Corp | Stereoscopic image data transmission device, stereoscopic image data transmission method, stereoscopic image data reception device and stereoscopic image data reception method |
US8438502B2 (en) | 2010-08-25 | 2013-05-07 | At&T Intellectual Property I, L.P. | Apparatus for controlling three-dimensional images |
KR101218815B1 (en) * | 2010-08-26 | 2013-01-21 | TSmart Co., Ltd. | 3D user interface processing method and set-top box using the same
JP5058316B2 (en) * | 2010-09-03 | 2012-10-24 | Toshiba Corporation | Electronic device, image processing method, and image processing program
EP2426931A1 (en) * | 2010-09-06 | 2012-03-07 | Advanced Digital Broadcast S.A. | A method and a system for determining a video frame type |
CN103262544A (en) * | 2010-09-10 | 2013-08-21 | Hisense Electric Co., Ltd. | Display method and equipment for 3D TV interface
WO2012044272A1 (en) * | 2010-09-29 | 2012-04-05 | Thomson Licensing | Automatically switching between three dimensional and two dimensional contents for display |
US8947511B2 (en) | 2010-10-01 | 2015-02-03 | At&T Intellectual Property I, L.P. | Apparatus and method for presenting three-dimensional media content |
TWI420151B (en) * | 2010-10-07 | 2013-12-21 | Innolux Corp | Display method |
KR101232086B1 (en) * | 2010-10-08 | 2013-02-08 | LG Display Co., Ltd. | Liquid crystal display and local dimming control method thereof
JP5550520B2 (en) * | 2010-10-20 | 2014-07-16 | Hitachi Consumer Electronics Co., Ltd. | Playback apparatus and playback method
KR20120047055A (en) * | 2010-11-03 | 2012-05-11 | Samsung Electronics Co., Ltd. | Display apparatus and method for providing graphic image
CN102469319A (en) * | 2010-11-10 | 2012-05-23 | Konka Group Co., Ltd. | Three-dimensional menu generation method and three-dimensional display device
JP5789960B2 (en) * | 2010-11-18 | 2015-10-07 | Seiko Epson Corporation | Display device, display device control method, and program
JP5786315B2 (en) * | 2010-11-24 | 2015-09-30 | Seiko Epson Corporation | Display device, display device control method, and program
CN101980545B (en) * | 2010-11-29 | 2012-08-01 | Shenzhen Jiuzhou Electric Co., Ltd. | Method for automatically detecting 3DTV video program format
CN101984671B (en) * | 2010-11-29 | 2013-04-17 | Shenzhen Jiuzhou Electric Co., Ltd. | Method for synthesizing video images and interface graphics in a 3DTV receiving system
JP5611807B2 (en) * | 2010-12-27 | 2014-10-22 | NEC Personal Computers, Ltd. | Video display device
US8600151B2 (en) | 2011-01-03 | 2013-12-03 | Apple Inc. | Producing stereoscopic image |
CN105554551A (en) * | 2011-03-02 | 2016-05-04 | Huawei Technologies Co., Ltd. | Method and device for acquiring three-dimensional (3D) format description information
CN102157012B (en) * | 2011-03-23 | 2012-11-28 | Shenzhen Super Perfect Optics Ltd. | Method for three-dimensional scene rendering, and graphics processing device, equipment and system
WO2012145191A1 (en) | 2011-04-15 | 2012-10-26 | Dolby Laboratories Licensing Corporation | Systems and methods for rendering 3d images independent of display size and viewing distance |
KR101801141B1 (en) * | 2011-04-19 | 2017-11-24 | LG Electronics Inc. | Apparatus for displaying image and method for operating the same
KR20120119173A (en) * | 2011-04-20 | 2012-10-30 | Samsung Electronics Co., Ltd. | 3d image processing apparatus and method for adjusting three-dimensional effect thereof
JP2012231254A (en) * | 2011-04-25 | 2012-11-22 | Toshiba Corp | Stereoscopic image generating apparatus and stereoscopic image generating method |
US9030522B2 (en) | 2011-06-24 | 2015-05-12 | At&T Intellectual Property I, Lp | Apparatus and method for providing media content |
US8947497B2 (en) | 2011-06-24 | 2015-02-03 | At&T Intellectual Property I, Lp | Apparatus and method for managing telepresence sessions |
US9445046B2 (en) | 2011-06-24 | 2016-09-13 | At&T Intellectual Property I, L.P. | Apparatus and method for presenting media content with telepresence |
US9602766B2 (en) | 2011-06-24 | 2017-03-21 | At&T Intellectual Property I, L.P. | Apparatus and method for presenting three dimensional objects with telepresence |
CN102231829B (en) * | 2011-06-27 | 2014-12-17 | Shenzhen Super Perfect Optics Ltd. | Display format identification method and device for video files, and video player
US9294752B2 (en) | 2011-07-13 | 2016-03-22 | Google Technology Holdings LLC | Dual mode user interface system and method for 3D video |
US8587635B2 (en) | 2011-07-15 | 2013-11-19 | At&T Intellectual Property I, L.P. | Apparatus and method for providing media services with telepresence |
EP2629531A4 (en) * | 2011-08-18 | 2015-01-21 | Beijing Goland Tech Co Ltd | Method for converting 2d into 3d based on image motion information |
CN103002297A (en) * | 2011-09-16 | 2013-03-27 | Novatek Microelectronics Corp. | Method and device for generating dynamic depth values
US8952996B2 (en) * | 2011-09-27 | 2015-02-10 | Delta Electronics, Inc. | Image display system |
US8813109B2 (en) | 2011-10-21 | 2014-08-19 | The Nielsen Company (Us), Llc | Methods and apparatus to identify exposure to 3D media presentations |
US8687470B2 (en) | 2011-10-24 | 2014-04-01 | Lsi Corporation | Optical disk playback device with three-dimensional playback functionality |
JP5289538B2 (en) * | 2011-11-11 | 2013-09-11 | Toshiba Corporation | Electronic device, display control method and program
CN102413350B (en) * | 2011-11-30 | 2014-04-16 | Sichuan Changhong Electric Co., Ltd. | Method for processing Blu-ray 3D (three-dimensional) video
FR2983673A1 (en) * | 2011-12-02 | 2013-06-07 | Binocle | Correction method for alternate projection of stereoscopic images
US8713590B2 (en) | 2012-02-21 | 2014-04-29 | The Nielsen Company (Us), Llc | Methods and apparatus to identify exposure to 3D media presentations |
US8479226B1 (en) * | 2012-02-21 | 2013-07-02 | The Nielsen Company (Us), Llc | Methods and apparatus to identify exposure to 3D media presentations |
US9315997B2 (en) | 2012-04-10 | 2016-04-19 | Dirtt Environmental Solutions, Ltd | Tamper evident wall cladding system |
US20150109411A1 (en) * | 2012-04-26 | 2015-04-23 | Electronics And Telecommunications Research Institute | Image playback apparatus for 3dtv and method performed by the apparatus |
KR20140039649A (en) | 2012-09-24 | 2014-04-02 | Samsung Electronics Co., Ltd. | Multi-view image generating method and multi-view image display apparatus
KR20140049834A (en) * | 2012-10-18 | 2014-04-28 | Samsung Electronics Co., Ltd. | Broadcast receiving apparatus and control method thereof, and user terminal device and screen providing method thereof
US20150296198A1 (en) * | 2012-11-27 | 2015-10-15 | Intellectual Discovery Co., Ltd. | Method for encoding and decoding image using depth information, and device and image system using same |
US9992021B1 (en) | 2013-03-14 | 2018-06-05 | GoTenna, Inc. | System and method for private and point-to-point communication between computing devices |
CN104079941B (en) * | 2013-03-27 | 2017-08-25 | ZTE Corporation | Depth information decoding method and device, and video processing playback equipment
CN104469338B (en) * | 2013-09-25 | 2016-08-17 | Lenovo (Beijing) Co., Ltd. | Control method and device
US10491916B2 (en) * | 2013-10-01 | 2019-11-26 | Advanced Micro Devices, Inc. | Exploiting camera depth information for video encoding |
CN103543953B (en) * | 2013-11-08 | 2017-01-04 | Shenzhen Hampoo Electronic Technology Development Co., Ltd. | Method for playing 3D film sources without 3D identification, and touch device
JP2015119464A (en) * | 2013-11-12 | 2015-06-25 | Seiko Epson Corporation | Display device and control method of the same
CN104143308B (en) | 2014-07-24 | 2016-09-07 | BOE Technology Group Co., Ltd. | Display method and device for three-dimensional images
CN105095895B (en) * | 2015-04-23 | 2018-09-25 | GRG Banking Equipment Co., Ltd. | Self-correcting recognition method for valuable-document identification devices
CN105376546A (en) * | 2015-11-09 | 2016-03-02 | ThunderSoft Co., Ltd. | 2D-to-3D method, device and mobile terminal
CN105472374A (en) * | 2015-11-19 | 2016-04-06 | Guangzhou Huaduo Network Technology Co., Ltd. | 3D live video realization method, apparatus, and system
US20170150138A1 (en) * | 2015-11-25 | 2017-05-25 | Atheer, Inc. | Method and apparatus for selective mono/stereo visual display |
US20170150137A1 (en) * | 2015-11-25 | 2017-05-25 | Atheer, Inc. | Method and apparatus for selective mono/stereo visual display |
CN105872519B (en) * | 2016-04-13 | 2018-03-27 | Wanyun Digital Media Co., Ltd. | Horizontal storage method for 2D-plus-depth 3D images based on RGB compression
US10433025B2 (en) | 2016-05-10 | 2019-10-01 | Jaunt Inc. | Virtual reality resource scheduling of process in a cloud-based virtual reality processing system |
CN106101681A (en) * | 2016-06-21 | 2016-11-09 | Hisense Electric Co., Ltd. | 3D image display processing method, signal input device and television terminal
CN106982367A (en) * | 2017-03-31 | 2017-07-25 | Lenovo (Beijing) Co., Ltd. | Video transmission method and device thereof
US10038500B1 (en) * | 2017-05-11 | 2018-07-31 | Qualcomm Incorporated | Visible light communication |
US10735707B2 (en) * | 2017-08-15 | 2020-08-04 | International Business Machines Corporation | Generating three-dimensional imagery |
CN107589989A (en) | 2017-09-14 | 2018-01-16 | MStar Semiconductor, Inc. | Android-based display device and image display method thereof
EP3644604A1 (en) * | 2018-10-23 | 2020-04-29 | Koninklijke Philips N.V. | Image generating apparatus and method therefor |
CN109257585B (en) * | 2018-10-25 | 2021-04-06 | BOE Technology Group Co., Ltd. | Brightness correction device and method, display device, display system and method
CN109274949A (en) * | 2018-10-30 | 2019-01-25 | BOE Technology Group Co., Ltd. | Video image processing method and device, and display device
CN112188181B (en) * | 2019-07-02 | 2023-07-04 | Coretronic Corporation | Image display device, stereoscopic image processing circuit and synchronization signal correction method thereof
KR102241615B1 (en) * | 2020-01-15 | 2021-04-19 | Korea Advanced Institute of Science and Technology | Method for identifying video titles using metadata in video webpage source code, and apparatuses performing the same
CN112004162B (en) * | 2020-09-08 | 2022-06-21 | Ningbo Thredim Optoelectronics Co., Ltd. | Online 3D content playing system and method
US11770513B1 (en) * | 2022-07-13 | 2023-09-26 | Rovi Guides, Inc. | Systems and methods for reducing a number of focal planes used to display three-dimensional objects |
Citations (40)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4523226A (en) * | 1982-01-27 | 1985-06-11 | Stereographics Corporation | Stereoscopic television system |
US5058992A (en) * | 1988-09-07 | 1991-10-22 | Toppan Printing Co., Ltd. | Method for producing a display with a diffraction grating pattern and a display produced by the method |
US5132812A (en) * | 1989-10-16 | 1992-07-21 | Toppan Printing Co., Ltd. | Method of manufacturing display having diffraction grating patterns |
US5262879A (en) * | 1988-07-18 | 1993-11-16 | Dimensional Arts. Inc. | Holographic image conversion method for making a controlled holographic grating |
US5291317A (en) * | 1990-07-12 | 1994-03-01 | Applied Holographics Corporation | Holographic diffraction grating patterns and methods for creating the same |
US5808664A (en) * | 1994-07-14 | 1998-09-15 | Sanyo Electric Co., Ltd. | Method of converting two-dimensional images into three-dimensional images |
US5986781A (en) * | 1996-10-28 | 1999-11-16 | Pacific Holographics, Inc. | Apparatus and method for generating diffractive element using liquid crystal display |
US20030095177A1 (en) * | 2001-11-21 | 2003-05-22 | Kug-Jin Yun | 3D stereoscopic/multiview video processing system and its method |
US20030128273A1 (en) * | 1998-12-10 | 2003-07-10 | Taichi Matsui | Video processing apparatus, control method therefor, and storage medium |
US20040008893A1 (en) * | 2002-07-10 | 2004-01-15 | Nec Corporation | Stereoscopic image encoding and decoding device |
US20040066846A1 (en) * | 2002-10-07 | 2004-04-08 | Kugjin Yun | Data processing system for stereoscopic 3-dimensional video based on MPEG-4 and method thereof |
US20040145655A1 (en) * | 2002-12-02 | 2004-07-29 | Seijiro Tomita | Stereoscopic video image display apparatus and stereoscopic video signal processing circuit |
US20040201888A1 (en) * | 2003-04-08 | 2004-10-14 | Shoji Hagita | Image pickup device and stereoscopic image generation device |
US20050030301A1 (en) * | 2001-12-14 | 2005-02-10 | Ocuity Limited | Control of optical switching apparatus |
US20050046700A1 (en) * | 2003-08-25 | 2005-03-03 | Ive Bracke | Device and method for performing multiple view imaging by means of a plurality of video processing devices |
US20050053276A1 (en) * | 2003-07-15 | 2005-03-10 | STMicroelectronics S.r.l. | Method of obtaining a depth map from a digital image
US20050147166A1 (en) * | 2003-12-12 | 2005-07-07 | Shojiro Shibata | Decoding device, electronic apparatus, computer, decoding method, program, and recording medium |
US6968568B1 (en) * | 1999-12-20 | 2005-11-22 | International Business Machines Corporation | Methods and apparatus of disseminating broadcast information to a handheld device |
US20050259147A1 (en) * | 2002-07-16 | 2005-11-24 | Nam Jeho | Apparatus and method for adapting 2d and 3d stereoscopic video signal |
US20050259959A1 (en) * | 2004-05-19 | 2005-11-24 | Kabushiki Kaisha Toshiba | Media data play apparatus and system |
US20060117071A1 (en) * | 2004-11-29 | 2006-06-01 | Samsung Electronics Co., Ltd. | Recording apparatus including a plurality of data blocks having different sizes, file managing method using the recording apparatus, and printing apparatus including the recording apparatus |
US20060288081A1 (en) * | 2005-05-26 | 2006-12-21 | Samsung Electronics Co., Ltd. | Information storage medium including application for obtaining metadata and apparatus and method of obtaining metadata |
US20070081587A1 (en) * | 2005-09-27 | 2007-04-12 | Raveendran Vijayalakshmi R | Content driven transcoder that orchestrates multimedia transcoding using content information |
US20070120972A1 (en) * | 2005-11-28 | 2007-05-31 | Samsung Electronics Co., Ltd. | Apparatus and method for processing 3D video signal |
US20070189760A1 (en) * | 2006-02-14 | 2007-08-16 | Lg Electronics Inc. | Display device for storing various sets of configuration data and method for controlling the same |
US20080018731A1 (en) * | 2004-03-08 | 2008-01-24 | Kazunari Era | Stereoscopic Parameter Embedding Apparatus and Stereoscopic Image Reproducer
US20080198218A1 (en) * | 2006-11-03 | 2008-08-21 | Quanta Computer Inc. | Stereoscopic image format transformation method applied to display system |
US20090116732A1 (en) * | 2006-06-23 | 2009-05-07 | Samuel Zhou | Methods and systems for converting 2d motion pictures for stereoscopic 3d exhibition |
US7613344B2 (en) * | 2003-12-08 | 2009-11-03 | Electronics And Telecommunications Research Institute | System and method for encoding and decoding an image using bitstream map and recording medium thereof |
US20090315981A1 (en) * | 2008-06-24 | 2009-12-24 | Samsung Electronics Co., Ltd. | Image processing method and apparatus |
US7720857B2 (en) * | 2003-08-29 | 2010-05-18 | Sap Ag | Method and system for providing an invisible attractor in a predetermined sector, which attracts a subset of entities depending on an entity type |
US20100165077A1 (en) * | 2005-10-19 | 2010-07-01 | Peng Yin | Multi-View Video Coding Using Scalable Video Coding |
US20100182403A1 (en) * | 2006-09-04 | 2010-07-22 | Enhanced Chip Technology Inc. | File format for encoded stereoscopic image/video data |
US20100217785A1 (en) * | 2007-10-10 | 2010-08-26 | Electronics And Telecommunications Research Institute | Metadata structure for storing and playing stereoscopic data, and method for storing stereoscopic content file using this metadata |
US7826709B2 (en) * | 2002-04-12 | 2010-11-02 | Mitsubishi Denki Kabushiki Kaisha | Metadata editing apparatus, metadata reproduction apparatus, metadata delivery apparatus, metadata search apparatus, metadata re-generation condition setting apparatus, metadata delivery method and hint information description method |
US7893908B2 (en) * | 2006-05-11 | 2011-02-22 | Nec Display Solutions, Ltd. | Liquid crystal display device and liquid crystal panel drive method |
US7953315B2 (en) * | 2006-05-22 | 2011-05-31 | Broadcom Corporation | Adaptive video processing circuitry and player using sub-frame metadata |
US7986283B2 (en) * | 2007-01-02 | 2011-07-26 | Samsung Mobile Display Co., Ltd. | Multi-dimensional image selectable display device |
US8054329B2 (en) * | 2005-07-08 | 2011-11-08 | Samsung Electronics Co., Ltd. | High resolution 2D-3D switchable autostereoscopic display apparatus |
US8077117B2 (en) * | 2007-04-17 | 2011-12-13 | Samsung Mobile Display Co., Ltd. | Electronic display device and method thereof |
Family Cites Families (83)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4667228A (en) * | 1983-10-14 | 1987-05-19 | Canon Kabushiki Kaisha | Image signal processing apparatus |
JPS63116593A (en) * | 1986-11-04 | 1988-05-20 | Matsushita Electric Ind Co Ltd | Stereoscopic picture reproducing device |
JP3081675B2 (en) * | 1991-07-24 | 2000-08-28 | Olympus Optical Co., Ltd. | Image recording device and image reproducing device
US5740274A (en) * | 1991-09-12 | 1998-04-14 | Fuji Photo Film Co., Ltd. | Method for recognizing object images and learning method for neural networks |
US6011581A (en) * | 1992-11-16 | 2000-01-04 | Reveo, Inc. | Intelligent method and system for producing and displaying stereoscopically-multiplexed images of three-dimensional objects for use in realistic stereoscopic viewing thereof in interactive virtual reality display environments |
US6084978A (en) * | 1993-12-16 | 2000-07-04 | Eastman Kodak Company | Hierarchical storage and display of digital images used in constructing three-dimensional image hard copy |
KR100358021B1 (en) * | 1994-02-01 | 2003-01-24 | Sanyo Electric Co., Ltd. | Method of converting 2D image into 3D image and stereoscopic image display system
US5739844A (en) * | 1994-02-04 | 1998-04-14 | Sanyo Electric Co. Ltd. | Method of converting two-dimensional image into three-dimensional image |
US5684890A (en) * | 1994-02-28 | 1997-11-04 | Nec Corporation | Three-dimensional reference image segmenting method and apparatus |
US6104828A (en) * | 1994-03-24 | 2000-08-15 | Kabushiki Kaisha Topcon | Ophthalmologic image processor |
KR100374463B1 (en) * | 1994-09-22 | 2003-05-09 | Sanyo Electric Co., Ltd. | Method of converting 2D images into 3D images
US6985168B2 (en) * | 1994-11-14 | 2006-01-10 | Reveo, Inc. | Intelligent method and system for producing and displaying stereoscopically-multiplexed images of three-dimensional objects for use in realistic stereoscopic viewing thereof in interactive virtual reality display environments |
JPH09116931A (en) * | 1995-10-18 | 1997-05-02 | Sanyo Electric Co Ltd | Method of identifying left and right video images in a time-division stereoscopic video signal
US5917940A (en) * | 1996-01-23 | 1999-06-29 | Nec Corporation | Three dimensional reference image segmenting method and device and object discrimination system |
JPH09322199A (en) * | 1996-05-29 | 1997-12-12 | Olympus Optical Co Ltd | Stereoscopic video display device |
JPH10224822A (en) * | 1997-01-31 | 1998-08-21 | Sony Corp | Video display method and display device |
JPH10313417A (en) * | 1997-03-12 | 1998-11-24 | Seiko Epson Corp | Digital gamma correction circuit, liquid crystal display device using the same and electronic device |
DE19806547C2 (en) * | 1997-04-30 | 2001-01-25 | Hewlett Packard Co | System and method for generating stereoscopic display signals from a single computer graphics pipeline |
JPH11113028A (en) * | 1997-09-30 | 1999-04-23 | Toshiba Corp | Three-dimension video image display device |
ID27878A (en) * | 1997-12-05 | 2001-05-03 | Dynamic Digital Depth Res Pty | Improved image conversion and encoding techniques
US6850631B1 (en) * | 1998-02-20 | 2005-02-01 | Oki Electric Industry Co., Ltd. | Photographing device, iris input device and iris image input method |
JP4149037B2 (en) * | 1998-06-04 | 2008-09-10 | Olympus Corporation | Video system
JP2000298246A (en) * | 1999-02-12 | 2000-10-24 | Canon Inc | Device and method for display, and storage medium |
JP2000275575A (en) * | 1999-03-24 | 2000-10-06 | Sharp Corp | Stereoscopic video display device |
KR100334722B1 (en) * | 1999-06-05 | 2002-05-04 | Kang Ho-Seok | Method and apparatus for generating stereoscopic images using MPEG data
JP2001012946A (en) * | 1999-06-30 | 2001-01-19 | Toshiba Corp | Dynamic image processor and processing method |
US6839663B1 (en) * | 1999-09-30 | 2005-01-04 | Texas Tech University | Haptic rendering of volumetric soft-bodies objects |
CA2394352C (en) * | 1999-12-14 | 2008-07-15 | Scientific-Atlanta, Inc. | System and method for adaptive decoding of a video signal with coordinated resource allocation |
US20020009137A1 (en) * | 2000-02-01 | 2002-01-24 | Nelson John E. | Three-dimensional video broadcasting system |
US7215809B2 (en) * | 2000-04-04 | 2007-05-08 | Sony Corporation | Three-dimensional image producing method and apparatus therefor |
JP2001320693A (en) * | 2000-05-12 | 2001-11-16 | Sony Corp | Service providing device and method, reception terminal and method, service providing system |
WO2001097531A2 (en) * | 2000-06-12 | 2001-12-20 | Vrex, Inc. | Electronic stereoscopic media delivery system |
US6762755B2 (en) * | 2000-10-16 | 2004-07-13 | Pixel Science, Inc. | Method and apparatus for creating and displaying interactive three dimensional computer images |
JP3667620B2 (en) * | 2000-10-16 | 2005-07-06 | I-O Data Device, Inc. | Stereo image capturing adapter, stereo image capturing camera, and stereo image processing apparatus
GB0100563D0 (en) * | 2001-01-09 | 2001-02-21 | Pace Micro Tech Plc | Dynamic adjustment of on-screen displays to cope with different widescreen signalling types |
US6678323B2 (en) * | 2001-01-24 | 2004-01-13 | Her Majesty The Queen In Right Of Canada, As Represented By The Minister Of Industry Through Communications Research Centre | Bandwidth reduction for stereoscopic imagery and video signals |
KR20020096203A (en) * | 2001-06-18 | 2002-12-31 | Digital Kukyoung Co., Ltd. | Method for enlarging or reducing stereoscopic images
JP2003157292A (en) * | 2001-11-20 | 2003-05-30 | Nec Corp | System and method for managing layout of product |
US20040218269A1 (en) * | 2002-01-14 | 2004-11-04 | Divelbiss Adam W. | General purpose stereoscopic 3D format conversion system and method |
US7319720B2 (en) * | 2002-01-28 | 2008-01-15 | Microsoft Corporation | Stereoscopic video |
JP2003284099A (en) * | 2002-03-22 | 2003-10-03 | Olympus Optical Co Ltd | Video information signal recording medium and video display apparatus |
US6771274B2 (en) * | 2002-03-27 | 2004-08-03 | Sony Corporation | Graphics and video integration with alpha and video blending |
CA2380105A1 (en) * | 2002-04-09 | 2003-10-09 | Nicholas Routhier | Process and system for encoding and playback of stereoscopic video sequences |
JP4652389B2 (en) * | 2002-04-12 | 2011-03-16 | 三菱電機株式会社 | Metadata processing method |
EP1501316A4 (en) * | 2002-04-25 | 2009-01-21 | Sharp Kk | Multimedia information generation method and multimedia information reproduction device |
JP4183499B2 (en) * | 2002-12-16 | 2008-11-19 | 三洋電機株式会社 | Video file processing method and video processing method |
JP2004246066A (en) * | 2003-02-13 | 2004-09-02 | Fujitsu Ltd | Virtual environment creating method |
JP2004274125A (en) * | 2003-03-05 | 2004-09-30 | Sony Corp | Image processing apparatus and method |
JP4677175B2 (en) * | 2003-03-24 | 2011-04-27 | シャープ株式会社 | Image processing apparatus, image pickup system, image display system, image pickup display system, image processing program, and computer-readable recording medium recording image processing program |
KR100556826B1 (en) * | 2003-04-17 | 2006-03-10 | 한국전자통신연구원 | System and Method of Internet Broadcasting for MPEG4 based Stereoscopic Video |
EP1617684A4 (en) * | 2003-04-17 | 2009-06-03 | Sharp Kk | 3-dimensional image creation device, 3-dimensional image reproduction device, 3-dimensional image processing device, 3-dimensional image processing program, and recording medium containing the program |
JP2004357156A (en) * | 2003-05-30 | 2004-12-16 | Sharp Corp | Video reception apparatus and video playback apparatus |
JP2005026800A (en) * | 2003-06-30 | 2005-01-27 | Konica Minolta Photo Imaging Inc | Image processing method, imaging apparatus, image processing apparatus, and image recording apparatus |
KR100544677B1 (en) * | 2003-12-26 | 2006-01-23 | 한국전자통신연구원 | Apparatus and method for the 3D object tracking using multi-view and depth cameras |
KR100543219B1 (en) * | 2004-05-24 | 2006-01-20 | 한국과학기술연구원 | Method for generating haptic vector field and 3d-height map in 2d-image |
JP4227076B2 (en) * | 2004-05-24 | 2009-02-18 | 株式会社東芝 | Display device for displaying stereoscopic image and display method for displaying stereoscopic image |
KR100708838B1 (en) * | 2004-06-30 | 2007-04-17 | 삼성에스디아이 주식회사 | Stereoscopic display device and driving method thereof |
JP2006041811A (en) * | 2004-07-26 | 2006-02-09 | Kddi Corp | Free visual point picture streaming method |
KR20040077596A (en) * | 2004-07-28 | 2004-09-04 | 손귀연 | Stereoscopic Image Display Device Based on Flat Panel Display |
WO2006028151A1 (en) * | 2004-09-08 | 2006-03-16 | Nippon Telegraph And Telephone Corporation | 3d displaying method, device and program |
KR100656575B1 (en) | 2004-12-31 | 2006-12-11 | 광운대학교 산학협력단 | Three-dimensional display device |
TWI261099B (en) * | 2005-02-17 | 2006-09-01 | Au Optronics Corp | Backlight modules |
KR100828358B1 (en) * | 2005-06-14 | 2008-05-08 | 삼성전자주식회사 | Method and apparatus for converting display mode of video, and computer readable medium thereof |
US7404645B2 (en) * | 2005-06-20 | 2008-07-29 | Digital Display Innovations, Llc | Image and light source modulation for a digital display system |
CA2553473A1 (en) * | 2005-07-26 | 2007-01-26 | Wa James Tam | Generating a depth map from a two-dimensional source image for stereoscopic and multiview imaging |
JP4717728B2 (en) * | 2005-08-29 | 2011-07-06 | キヤノン株式会社 | Stereo display device and control method thereof |
KR100780701B1 (en) | 2006-03-28 | 2007-11-30 | (주)오픈브이알 | Apparatus automatically creating three dimension image and method therefore |
KR20070098364A (en) * | 2006-03-31 | 2007-10-05 | (주)엔브이엘소프트 | Apparatus and method for coding and saving a 3d moving image |
KR101137347B1 (en) * | 2006-05-11 | 2012-04-19 | 엘지전자 주식회사 | apparatus for mobile telecommunication and method for displaying an image using the apparatus |
US20070294737A1 (en) * | 2006-06-16 | 2007-12-20 | Sbc Knowledge Ventures, L.P. | Internet Protocol Television (IPTV) stream management within a home viewing network |
KR100761022B1 (en) * | 2006-08-14 | 2007-09-21 | 광주과학기술원 | Haptic rendering method based on depth image, device therefor, and haptic broadcasting system using them |
EP1901474B1 (en) | 2006-09-13 | 2011-11-30 | Stmicroelectronics Sa | System for synchronizing modules in an integrated circuit in mesochronous clock domains |
EP2074832A2 (en) * | 2006-09-28 | 2009-07-01 | Koninklijke Philips Electronics N.V. | 3D menu display |
EP2105032A2 (en) * | 2006-10-11 | 2009-09-30 | Koninklijke Philips Electronics N.V. | Creating three dimensional graphics data |
JP4755565B2 (en) * | 2006-10-17 | 2011-08-24 | シャープ株式会社 | Stereoscopic image processing device |
KR101362941B1 (en) * | 2006-11-01 | 2014-02-17 | 한국전자통신연구원 | Method and Apparatus for decoding metadata used for playing stereoscopic contents |
JP5008677B2 (en) * | 2006-11-29 | 2012-08-22 | パナソニック株式会社 | Video / audio device network and signal reproduction method |
US8213711B2 (en) * | 2007-04-03 | 2012-07-03 | Her Majesty The Queen In Right Of Canada As Represented By The Minister Of Industry, Through The Communications Research Centre Canada | Method and graphical user interface for modifying depth maps |
CA2627999C (en) * | 2007-04-03 | 2011-11-15 | Her Majesty The Queen In Right Of Canada, As Represented By The Minister Of Industry Through The Communications Research Centre Canada | Generation of a depth map from a monoscopic color image for rendering stereoscopic still and video images |
JP4564512B2 (en) | 2007-04-16 | 2010-10-20 | 富士通株式会社 | Display device, display program, and display method |
WO2008140190A1 (en) * | 2007-05-14 | 2008-11-20 | Samsung Electronics Co., Ltd. | Method and apparatus for encoding and decoding multi-view image |
JP4462288B2 (en) * | 2007-05-16 | 2010-05-12 | 株式会社日立製作所 | Video display device and three-dimensional video display device using the same |
US8482654B2 (en) * | 2008-10-24 | 2013-07-09 | Reald Inc. | Stereoscopic image format with depth information |
2008
- 2008-09-17 KR KR1020080091269A patent/KR20100002032A/en not_active Application Discontinuation
- 2008-09-17 KR KR1020080091268A patent/KR101539935B1/en active IP Right Grant
- 2008-09-19 KR KR1020080092417A patent/KR20100002033A/en not_active Application Discontinuation
- 2008-09-23 KR KR1020080093371A patent/KR20100002035A/en not_active Application Discontinuation
- 2008-09-24 KR KR1020080093866A patent/KR20100002036A/en not_active Application Discontinuation
- 2008-09-24 KR KR1020080093867A patent/KR20100002037A/en not_active Application Discontinuation
- 2008-09-26 KR KR1020080094896A patent/KR20100002038A/en not_active Application Discontinuation
- 2008-10-27 KR KR1020080105485A patent/KR20100002048A/en not_active Application Discontinuation
- 2008-10-28 KR KR1020080105928A patent/KR20100002049A/en not_active Application Discontinuation
2009
- 2009-06-08 US US12/479,978 patent/US20090315884A1/en not_active Abandoned
- 2009-06-17 JP JP2011514491A patent/JP2011525743A/en active Pending
- 2009-06-17 WO PCT/KR2009/003235 patent/WO2009157668A2/en active Application Filing
- 2009-06-17 EP EP09770340.9A patent/EP2292019A4/en active Pending
- 2009-06-17 CN CN2009801242749A patent/CN102077600A/en active Pending
- 2009-06-23 US US12/489,758 patent/US20090317061A1/en not_active Abandoned
- 2009-06-23 US US12/489,726 patent/US20090315979A1/en not_active Abandoned
- 2009-06-24 EP EP09770373.0A patent/EP2289247A4/en not_active Withdrawn
- 2009-06-24 JP JP2011514503A patent/JP5547725B2/en not_active Expired - Fee Related
- 2009-06-24 JP JP2011514502A patent/JP2011525745A/en active Pending
- 2009-06-24 WO PCT/KR2009/003406 patent/WO2009157714A2/en active Application Filing
- 2009-06-24 CN CN200980123638.1A patent/CN102067614B/en not_active Expired - Fee Related
- 2009-06-24 MY MYPI2010005488A patent/MY159672A/en unknown
- 2009-06-24 EP EP09770386.2A patent/EP2289248A4/en not_active Withdrawn
- 2009-06-24 CN CN200980123462.XA patent/CN102067613B/en not_active Expired - Fee Related
- 2009-06-24 US US12/490,589 patent/US20090315977A1/en not_active Abandoned
- 2009-06-24 WO PCT/KR2009/003383 patent/WO2009157701A2/en active Application Filing
- 2009-06-24 JP JP2011514504A patent/JP2011525746A/en active Pending
- 2009-06-24 WO PCT/KR2009/003399 patent/WO2009157708A2/en active Application Filing
- 2009-06-24 EP EP09770380.5A patent/EP2279625A4/en not_active Withdrawn
- 2009-06-24 CN CN200980123639.6A patent/CN102067615B/en not_active Expired - Fee Related
- 2009-09-10 US US12/556,699 patent/US8488869B2/en not_active Expired - Fee Related
- 2009-09-22 US US12/564,201 patent/US20100103168A1/en not_active Abandoned
Patent Citations (42)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4523226A (en) * | 1982-01-27 | 1985-06-11 | Stereographics Corporation | Stereoscopic television system |
US5262879A (en) * | 1988-07-18 | 1993-11-16 | Dimensional Arts, Inc. | Holographic image conversion method for making a controlled holographic grating |
US5058992A (en) * | 1988-09-07 | 1991-10-22 | Toppan Printing Co., Ltd. | Method for producing a display with a diffraction grating pattern and a display produced by the method |
US5132812A (en) * | 1989-10-16 | 1992-07-21 | Toppan Printing Co., Ltd. | Method of manufacturing display having diffraction grating patterns |
US5291317A (en) * | 1990-07-12 | 1994-03-01 | Applied Holographics Corporation | Holographic diffraction grating patterns and methods for creating the same |
US5808664A (en) * | 1994-07-14 | 1998-09-15 | Sanyo Electric Co., Ltd. | Method of converting two-dimensional images into three-dimensional images |
US5986781A (en) * | 1996-10-28 | 1999-11-16 | Pacific Holographics, Inc. | Apparatus and method for generating diffractive element using liquid crystal display |
US20030128273A1 (en) * | 1998-12-10 | 2003-07-10 | Taichi Matsui | Video processing apparatus, control method therefor, and storage medium |
US6968568B1 (en) * | 1999-12-20 | 2005-11-22 | International Business Machines Corporation | Methods and apparatus of disseminating broadcast information to a handheld device |
US20030095177A1 (en) * | 2001-11-21 | 2003-05-22 | Kug-Jin Yun | 3D stereoscopic/multiview video processing system and its method |
US8111758B2 (en) * | 2001-11-21 | 2012-02-07 | Electronics And Telecommunications Research Institute | 3D stereoscopic/multiview video processing system and its method |
US20050030301A1 (en) * | 2001-12-14 | 2005-02-10 | Ocuity Limited | Control of optical switching apparatus |
US7826709B2 (en) * | 2002-04-12 | 2010-11-02 | Mitsubishi Denki Kabushiki Kaisha | Metadata editing apparatus, metadata reproduction apparatus, metadata delivery apparatus, metadata search apparatus, metadata re-generation condition setting apparatus, metadata delivery method and hint information description method |
US20040008893A1 (en) * | 2002-07-10 | 2004-01-15 | Nec Corporation | Stereoscopic image encoding and decoding device |
US20050259147A1 (en) * | 2002-07-16 | 2005-11-24 | Nam Jeho | Apparatus and method for adapting 2d and 3d stereoscopic video signal |
US20040066846A1 (en) * | 2002-10-07 | 2004-04-08 | Kugjin Yun | Data processing system for stereoscopic 3-dimensional video based on MPEG-4 and method thereof |
US20040145655A1 (en) * | 2002-12-02 | 2004-07-29 | Seijiro Tomita | Stereoscopic video image display apparatus and stereoscopic video signal processing circuit |
US20040201888A1 (en) * | 2003-04-08 | 2004-10-14 | Shoji Hagita | Image pickup device and stereoscopic image generation device |
US20050053276A1 (en) * | 2003-07-15 | 2005-03-10 | Stmicroelectronics S.R.L. | Method of obtaining a depth map from a digital image |
US20050046700A1 (en) * | 2003-08-25 | 2005-03-03 | Ive Bracke | Device and method for performing multiple view imaging by means of a plurality of video processing devices |
US7720857B2 (en) * | 2003-08-29 | 2010-05-18 | Sap Ag | Method and system for providing an invisible attractor in a predetermined sector, which attracts a subset of entities depending on an entity type |
US7613344B2 (en) * | 2003-12-08 | 2009-11-03 | Electronics And Telecommunications Research Institute | System and method for encoding and decoding an image using bitstream map and recording medium thereof |
US20050147166A1 (en) * | 2003-12-12 | 2005-07-07 | Shojiro Shibata | Decoding device, electronic apparatus, computer, decoding method, program, and recording medium |
US20080018731A1 (en) * | 2004-03-08 | 2008-01-24 | Kazunari Era | Stereoscopic Parameter Embedding Apparatus and Stereoscopic Image Reproducer |
US20050259959A1 (en) * | 2004-05-19 | 2005-11-24 | Kabushiki Kaisha Toshiba | Media data play apparatus and system |
US20060117071A1 (en) * | 2004-11-29 | 2006-06-01 | Samsung Electronics Co., Ltd. | Recording apparatus including a plurality of data blocks having different sizes, file managing method using the recording apparatus, and printing apparatus including the recording apparatus |
US20060288081A1 (en) * | 2005-05-26 | 2006-12-21 | Samsung Electronics Co., Ltd. | Information storage medium including application for obtaining metadata and apparatus and method of obtaining metadata |
US8054329B2 (en) * | 2005-07-08 | 2011-11-08 | Samsung Electronics Co., Ltd. | High resolution 2D-3D switchable autostereoscopic display apparatus |
US20070081587A1 (en) * | 2005-09-27 | 2007-04-12 | Raveendran Vijayalakshmi R | Content driven transcoder that orchestrates multimedia transcoding using content information |
US20100165077A1 (en) * | 2005-10-19 | 2010-07-01 | Peng Yin | Multi-View Video Coding Using Scalable Video Coding |
US20070120972A1 (en) * | 2005-11-28 | 2007-05-31 | Samsung Electronics Co., Ltd. | Apparatus and method for processing 3D video signal |
US20070189760A1 (en) * | 2006-02-14 | 2007-08-16 | Lg Electronics Inc. | Display device for storing various sets of configuration data and method for controlling the same |
US7840132B2 (en) * | 2006-02-14 | 2010-11-23 | Lg Electronics Inc. | Display device for storing various sets of configuration data and method for controlling the same |
US7893908B2 (en) * | 2006-05-11 | 2011-02-22 | Nec Display Solutions, Ltd. | Liquid crystal display device and liquid crystal panel drive method |
US7953315B2 (en) * | 2006-05-22 | 2011-05-31 | Broadcom Corporation | Adaptive video processing circuitry and player using sub-frame metadata |
US20090116732A1 (en) * | 2006-06-23 | 2009-05-07 | Samuel Zhou | Methods and systems for converting 2d motion pictures for stereoscopic 3d exhibition |
US20100182403A1 (en) * | 2006-09-04 | 2010-07-22 | Enhanced Chip Technology Inc. | File format for encoded stereoscopic image/video data |
US20080198218A1 (en) * | 2006-11-03 | 2008-08-21 | Quanta Computer Inc. | Stereoscopic image format transformation method applied to display system |
US7986283B2 (en) * | 2007-01-02 | 2011-07-26 | Samsung Mobile Display Co., Ltd. | Multi-dimensional image selectable display device |
US8077117B2 (en) * | 2007-04-17 | 2011-12-13 | Samsung Mobile Display Co., Ltd. | Electronic display device and method thereof |
US20100217785A1 (en) * | 2007-10-10 | 2010-08-26 | Electronics And Telecommunications Research Institute | Metadata structure for storing and playing stereoscopic data, and method for storing stereoscopic content file using this metadata |
US20090315981A1 (en) * | 2008-06-24 | 2009-12-24 | Samsung Electronics Co., Ltd. | Image processing method and apparatus |
Cited By (125)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11792538B2 (en) | 2008-05-20 | 2023-10-17 | Adeia Imaging Llc | Capturing and processing of images including occlusions focused on an image sensor by a lens stack array |
US10027901B2 (en) | 2008-05-20 | 2018-07-17 | Fotonation Cayman Limited | Systems and methods for generating depth maps using a camera arrays incorporating monochrome and color cameras |
US11412158B2 (en) | 2008-05-20 | 2022-08-09 | Fotonation Limited | Capturing and processing of images including occlusions focused on an image sensor by a lens stack array |
US9712759B2 (en) | 2008-05-20 | 2017-07-18 | Fotonation Cayman Limited | Systems and methods for generating depth maps using a camera arrays incorporating monochrome and color cameras |
US10142560B2 (en) | 2008-05-20 | 2018-11-27 | Fotonation Limited | Capturing and processing of images including occlusions focused on an image sensor by a lens stack array |
US9269154B2 (en) * | 2009-01-13 | 2016-02-23 | Futurewei Technologies, Inc. | Method and system for image processing to classify an object in an image |
US10096118B2 (en) | 2009-01-13 | 2018-10-09 | Futurewei Technologies, Inc. | Method and system for image processing to classify an object in an image |
US20100177969A1 (en) * | 2009-01-13 | 2010-07-15 | Futurewei Technologies, Inc. | Method and System for Image Processing to Classify an Object in an Image |
US20120120200A1 (en) * | 2009-07-27 | 2012-05-17 | Koninklijke Philips Electronics N.V. | Combining 3d video and auxiliary data |
US10021377B2 (en) * | 2009-07-27 | 2018-07-10 | Koninklijke Philips N.V. | Combining 3D video and auxiliary data that is provided when not received |
US9832445B2 (en) | 2009-08-31 | 2017-11-28 | Sony Corporation | Stereoscopic image display system, disparity conversion device, disparity conversion method, and program |
US20120176371A1 (en) * | 2009-08-31 | 2012-07-12 | Takafumi Morifuji | Stereoscopic image display system, disparity conversion device, disparity conversion method, and program |
US9094659B2 (en) * | 2009-08-31 | 2015-07-28 | Sony Corporation | Stereoscopic image display system, disparity conversion device, disparity conversion method, and program |
US8614737B2 (en) * | 2009-09-11 | 2013-12-24 | Disney Enterprises, Inc. | System and method for three-dimensional video capture workflow for dynamic rendering |
US20110063410A1 (en) * | 2009-09-11 | 2011-03-17 | Disney Enterprises, Inc. | System and method for three-dimensional video capture workflow for dynamic rendering |
US10306120B2 (en) | 2009-11-20 | 2019-05-28 | Fotonation Limited | Capturing and processing of images captured by camera arrays incorporating cameras with telephoto and conventional lenses to generate depth maps |
US20110242099A1 (en) * | 2010-03-30 | 2011-10-06 | Sony Corporation | Image processing apparatus, image processing method, and program |
US20130021445A1 (en) * | 2010-04-12 | 2013-01-24 | Alexandre Cossette-Pacheco | Camera Projection Meshes |
US11558596B2 (en) | 2010-04-16 | 2023-01-17 | Google Technology Holdings LLC | Method and apparatus for distribution of 3D television program materials |
US10893253B2 (en) | 2010-04-16 | 2021-01-12 | Google Technology Holdings LLC | Method and apparatus for distribution of 3D television program materials |
US10368050B2 (en) | 2010-04-16 | 2019-07-30 | Google Technology Holdings LLC | Method and apparatus for distribution of 3D television program materials |
US9237366B2 (en) * | 2010-04-16 | 2016-01-12 | Google Technology Holdings LLC | Method and apparatus for distribution of 3D television program materials |
US20110254917A1 (en) * | 2010-04-16 | 2011-10-20 | General Instrument Corporation | Method and apparatus for distribution of 3d television program materials |
US10455168B2 (en) | 2010-05-12 | 2019-10-22 | Fotonation Limited | Imager array interfaces |
US8994792B2 (en) * | 2010-08-27 | 2015-03-31 | Broadcom Corporation | Method and system for creating a 3D video from a monoscopic 2D video and corresponding depth information |
US20120050481A1 (en) * | 2010-08-27 | 2012-03-01 | Xuemin Chen | Method and system for creating a 3d video from a monoscopic 2d video and corresponding depth information |
US20120075290A1 (en) * | 2010-09-29 | 2012-03-29 | Sony Corporation | Image processing apparatus, image processing method, and computer program |
US9741152B2 (en) * | 2010-09-29 | 2017-08-22 | Sony Corporation | Image processing apparatus, image processing method, and computer program |
TWI477141B (en) * | 2010-09-29 | 2015-03-11 | Sony Corp | Image processing apparatus, image processing method, and computer program |
CN102438164A (en) * | 2010-09-29 | 2012-05-02 | 索尼公司 | Image processing apparatus, image processing method, and computer program |
US20150281670A1 (en) * | 2010-10-01 | 2015-10-01 | Hitachi Maxell, Ltd. | Reproducing Apparatus And Reproducing Method |
CN105208367A (en) * | 2010-10-01 | 2015-12-30 | 日立麦克赛尔株式会社 | Reproducing apparatus and reproducing method |
US20120120204A1 (en) * | 2010-10-01 | 2012-05-17 | Chiyo Ohno | Receiver |
US20120081514A1 (en) * | 2010-10-01 | 2012-04-05 | Minoru Hasegawa | Reproducing apparatus and reproducing method |
US8941724B2 (en) * | 2010-10-01 | 2015-01-27 | Hitachi Maxell Ltd. | Receiver |
CN102572462A (en) * | 2010-10-01 | 2012-07-11 | 日立民用电子株式会社 | Reproducing apparatus and reproducing method |
US20120092327A1 (en) * | 2010-10-14 | 2012-04-19 | Sony Corporation | Overlaying graphical assets onto viewing plane of 3d glasses per metadata accompanying 3d image |
US11423513B2 (en) | 2010-12-14 | 2022-08-23 | Fotonation Limited | Systems and methods for synthesizing high resolution images using images captured by an array of independently controllable imagers |
US10366472B2 (en) | 2010-12-14 | 2019-07-30 | Fotonation Limited | Systems and methods for synthesizing high resolution images using images captured by an array of independently controllable imagers |
US11875475B2 (en) | 2010-12-14 | 2024-01-16 | Adeia Imaging Llc | Systems and methods for synthesizing high resolution images using images captured by an array of independently controllable imagers |
EP2654306A1 (en) * | 2010-12-16 | 2013-10-23 | JVC KENWOOD Corporation | Image processing device |
CN103262555A (en) * | 2010-12-16 | 2013-08-21 | Jvc建伍株式会社 | Image processing device |
EP2654306A4 (en) * | 2010-12-16 | 2014-06-25 | Jvc Kenwood Corp | Image processing device |
US20120188335A1 (en) * | 2011-01-26 | 2012-07-26 | Samsung Electronics Co., Ltd. | Apparatus and method for processing 3d video |
US9723291B2 (en) * | 2011-01-26 | 2017-08-01 | Samsung Electronics Co., Ltd | Apparatus and method for generating 3D video data |
US10742861B2 (en) | 2011-05-11 | 2020-08-11 | Fotonation Limited | Systems and methods for transmitting and receiving array camera image data |
US10218889B2 (en) | 2011-05-11 | 2019-02-26 | Fotonation Limited | Systems and methods for transmitting and receiving array camera image data |
US9794476B2 (en) | 2011-09-19 | 2017-10-17 | Fotonation Cayman Limited | Systems and methods for controlling aliasing in images captured by an array camera for use in super resolution processing using pixel apertures |
US10375302B2 (en) | 2011-09-19 | 2019-08-06 | Fotonation Limited | Systems and methods for controlling aliasing in images captured by an array camera for use in super resolution processing using pixel apertures |
US10275676B2 (en) | 2011-09-28 | 2019-04-30 | Fotonation Limited | Systems and methods for encoding image files containing depth maps stored as metadata |
US10984276B2 (en) | 2011-09-28 | 2021-04-20 | Fotonation Limited | Systems and methods for encoding image files containing depth maps stored as metadata |
US11729365B2 (en) | 2011-09-28 | 2023-08-15 | Adeia Imaging LLC | Systems and methods for encoding image files containing depth maps stored as metadata |
US10430682B2 (en) | 2011-09-28 | 2019-10-01 | Fotonation Limited | Systems and methods for decoding image files containing depth maps stored as metadata |
US9811753B2 (en) | 2011-09-28 | 2017-11-07 | Fotonation Cayman Limited | Systems and methods for encoding light field image files |
US9864921B2 (en) | 2011-09-28 | 2018-01-09 | Fotonation Cayman Limited | Systems and methods for encoding image files containing depth maps stored as metadata |
US10019816B2 (en) | 2011-09-28 | 2018-07-10 | Fotonation Cayman Limited | Systems and methods for decoding image files containing depth maps stored as metadata |
US20180197035A1 (en) | 2011-09-28 | 2018-07-12 | Fotonation Cayman Limited | Systems and Methods for Encoding Image Files Containing Depth Maps Stored as Metadata |
US9754422B2 (en) | 2012-02-21 | 2017-09-05 | Fotonation Cayman Limited | Systems and method for performing depth based image editing |
US10311649B2 (en) | 2012-02-21 | 2019-06-04 | Fotonation Limited | Systems and method for performing depth based image editing |
US10445398B2 (en) * | 2012-03-01 | 2019-10-15 | Sony Corporation | Asset management during production of media |
US20130232398A1 (en) * | 2012-03-01 | 2013-09-05 | Sony Pictures Technologies Inc. | Asset management during production of media |
US10334241B2 (en) | 2012-06-28 | 2019-06-25 | Fotonation Limited | Systems and methods for detecting defective camera arrays and optic arrays |
US9807382B2 (en) | 2012-06-28 | 2017-10-31 | Fotonation Cayman Limited | Systems and methods for detecting defective camera arrays and optic arrays |
US11022725B2 (en) | 2012-06-30 | 2021-06-01 | Fotonation Limited | Systems and methods for manufacturing camera modules using active alignment of lens stack arrays and sensors |
US10261219B2 (en) | 2012-06-30 | 2019-04-16 | Fotonation Limited | Systems and methods for manufacturing camera modules using active alignment of lens stack arrays and sensors |
US9858673B2 (en) | 2012-08-21 | 2018-01-02 | Fotonation Cayman Limited | Systems and methods for estimating depth and visibility from a reference viewpoint for pixels in a set of images captured from different viewpoints |
US10380752B2 (en) | 2012-08-21 | 2019-08-13 | Fotonation Limited | Systems and methods for estimating depth and visibility from a reference viewpoint for pixels in a set of images captured from different viewpoints |
US10462362B2 (en) | 2012-08-23 | 2019-10-29 | Fotonation Limited | Feature based high resolution motion estimation from low resolution images captured using an array source |
US9813616B2 (en) | 2012-08-23 | 2017-11-07 | Fotonation Cayman Limited | Feature based high resolution motion estimation from low resolution images captured using an array source |
US9749568B2 (en) | 2012-11-13 | 2017-08-29 | Fotonation Cayman Limited | Systems and methods for array camera focal plane control |
EP2942949A4 (en) * | 2013-02-20 | 2016-06-01 | Mooovr Inc | System for providing complex-dimensional content service using complex 2d-3d content file, method for providing said service, and complex-dimensional content file therefor |
US9491447B2 (en) * | 2013-02-20 | 2016-11-08 | Mooovr Inc. | System for providing complex-dimensional content service using complex 2D-3D content file, method for providing said service, and complex-dimensional content file therefor |
US10009538B2 (en) | 2013-02-21 | 2018-06-26 | Fotonation Cayman Limited | Systems and methods for generating compressed light field representation data using captured light fields, array geometry, and parallax information |
US9774831B2 (en) | 2013-02-24 | 2017-09-26 | Fotonation Cayman Limited | Thin form factor computational array cameras and modular array cameras |
US9917998B2 (en) | 2013-03-08 | 2018-03-13 | Fotonation Cayman Limited | Systems and methods for measuring scene information while capturing images using array cameras |
US10958892B2 (en) | 2013-03-10 | 2021-03-23 | Fotonation Limited | System and methods for calibration of an array camera |
US11570423B2 (en) | 2013-03-10 | 2023-01-31 | Adeia Imaging Llc | System and methods for calibration of an array camera |
US9986224B2 (en) | 2013-03-10 | 2018-05-29 | Fotonation Cayman Limited | System and methods for calibration of an array camera |
US11272161B2 (en) | 2013-03-10 | 2022-03-08 | Fotonation Limited | System and methods for calibration of an array camera |
US10225543B2 (en) | 2013-03-10 | 2019-03-05 | Fotonation Limited | System and methods for calibration of an array camera |
US9888194B2 (en) | 2013-03-13 | 2018-02-06 | Fotonation Cayman Limited | Array camera architecture implementing quantum film image sensors |
US10127682B2 (en) | 2013-03-13 | 2018-11-13 | Fotonation Limited | System and methods for calibration of an array camera |
US10412314B2 (en) | 2013-03-14 | 2019-09-10 | Fotonation Limited | Systems and methods for photometric normalization in array cameras |
US10547772B2 (en) | 2013-03-14 | 2020-01-28 | Fotonation Limited | Systems and methods for reducing motion blur in images or video in ultra low light with array cameras |
US10091405B2 (en) | 2013-03-14 | 2018-10-02 | Fotonation Cayman Limited | Systems and methods for reducing motion blur in images or video in ultra low light with array cameras |
US10674138B2 (en) | 2013-03-15 | 2020-06-02 | Fotonation Limited | Autofocus system for a conventional camera that uses depth information from an array camera |
US10182216B2 (en) | 2013-03-15 | 2019-01-15 | Fotonation Limited | Extended color processing on pelican array cameras |
US10455218B2 (en) | 2013-03-15 | 2019-10-22 | Fotonation Limited | Systems and methods for estimating depth using stereo array cameras |
US10638099B2 (en) | 2013-03-15 | 2020-04-28 | Fotonation Limited | Extended color processing on pelican array cameras |
US10122993B2 (en) | 2013-03-15 | 2018-11-06 | Fotonation Limited | Autofocus system for a conventional camera that uses depth information from an array camera |
US10542208B2 (en) | 2013-03-15 | 2020-01-21 | Fotonation Limited | Systems and methods for synthesizing high resolution images using image deconvolution based on motion and depth information |
US9898856B2 (en) | 2013-09-27 | 2018-02-20 | Fotonation Cayman Limited | Systems and methods for depth-assisted perspective distortion correction |
US10540806B2 (en) | 2013-09-27 | 2020-01-21 | Fotonation Limited | Systems and methods for depth-assisted perspective distortion correction |
US10767981B2 (en) | 2013-11-18 | 2020-09-08 | Fotonation Limited | Systems and methods for estimating depth from projected texture using camera arrays |
US11486698B2 (en) | 2013-11-18 | 2022-11-01 | Fotonation Limited | Systems and methods for estimating depth from projected texture using camera arrays |
US10119808B2 (en) | 2013-11-18 | 2018-11-06 | Fotonation Limited | Systems and methods for estimating depth from projected texture using camera arrays |
US10708492B2 (en) | 2013-11-26 | 2020-07-07 | Fotonation Limited | Array camera configurations incorporating constituent array cameras and constituent cameras |
US10089740B2 (en) | 2014-03-07 | 2018-10-02 | Fotonation Limited | System and methods for depth regularization and semiautomatic interactive matting using RGB-D images |
US10574905B2 (en) | 2014-03-07 | 2020-02-25 | Fotonation Limited | System and methods for depth regularization and semiautomatic interactive matting using RGB-D images |
US10983588B2 (en) | 2014-08-06 | 2021-04-20 | Apple Inc. | Low power mode |
US10545569B2 (en) | 2014-08-06 | 2020-01-28 | Apple Inc. | Low power mode |
US11088567B2 (en) | 2014-08-26 | 2021-08-10 | Apple Inc. | Brownout avoidance |
US10250871B2 (en) | 2014-09-29 | 2019-04-02 | Fotonation Limited | Systems and methods for dynamic calibration of array cameras |
US11546576B2 (en) | 2014-09-29 | 2023-01-03 | Adeia Imaging Llc | Systems and methods for dynamic calibration of array cameras |
US11722753B2 (en) | 2014-09-30 | 2023-08-08 | Apple Inc. | Synchronizing out-of-band content with a media stream |
US10708391B1 (en) | 2014-09-30 | 2020-07-07 | Apple Inc. | Delivery of apps in a media stream |
US11190856B2 (en) | 2014-09-30 | 2021-11-30 | Apple Inc. | Synchronizing content and metadata |
US10231033B1 (en) * | 2014-09-30 | 2019-03-12 | Apple Inc. | Synchronizing out-of-band content with a media stream |
US11562498B2 (en) | 2017-08-21 | 2023-01-24 | Adeia Imaging LLC | Systems and methods for hybrid depth regularization |
US10482618B2 (en) | 2017-08-21 | 2019-11-19 | Fotonation Limited | Systems and methods for hybrid depth regularization |
US10818026B2 (en) | 2017-08-21 | 2020-10-27 | Fotonation Limited | Systems and methods for hybrid depth regularization |
US10817307B1 (en) | 2017-12-20 | 2020-10-27 | Apple Inc. | API behavior modification based on power source health |
US11363133B1 (en) | 2017-12-20 | 2022-06-14 | Apple Inc. | Battery health-based power management |
US11270110B2 (en) | 2019-09-17 | 2022-03-08 | Boston Polarimetrics, Inc. | Systems and methods for surface modeling using polarization cues |
US11699273B2 (en) | 2019-09-17 | 2023-07-11 | Intrinsic Innovation Llc | Systems and methods for surface modeling using polarization cues |
US11525906B2 (en) | 2019-10-07 | 2022-12-13 | Intrinsic Innovation Llc | Systems and methods for augmentation of sensor systems and imaging systems with polarization |
US11842495B2 (en) | 2019-11-30 | 2023-12-12 | Intrinsic Innovation Llc | Systems and methods for transparent object segmentation using polarization cues |
US11302012B2 (en) | 2019-11-30 | 2022-04-12 | Boston Polarimetrics, Inc. | Systems and methods for transparent object segmentation using polarization cues |
US11580667B2 (en) | 2020-01-29 | 2023-02-14 | Intrinsic Innovation Llc | Systems and methods for characterizing object pose detection and measurement systems |
US11797863B2 (en) | 2020-01-30 | 2023-10-24 | Intrinsic Innovation Llc | Systems and methods for synthesizing data for training statistical models on different imaging modalities including polarized images |
US11683594B2 (en) | 2021-04-15 | 2023-06-20 | Intrinsic Innovation Llc | Systems and methods for camera exposure control |
US11290658B1 (en) | 2021-04-15 | 2022-03-29 | Boston Polarimetrics, Inc. | Systems and methods for camera exposure control |
US11954886B2 (en) | 2021-04-15 | 2024-04-09 | Intrinsic Innovation Llc | Systems and methods for six-degree of freedom pose estimation of deformable objects |
US11953700B2 (en) | 2021-05-27 | 2024-04-09 | Intrinsic Innovation Llc | Multi-aperture polarization optical systems using beam splitters |
US11689813B2 (en) | 2021-07-01 | 2023-06-27 | Intrinsic Innovation Llc | Systems and methods for high dynamic range imaging using crossed polarizers |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20090317061A1 (en) | Image generating method and apparatus and image processing method and apparatus | |
EP2483750B1 (en) | Selecting viewpoints for generating additional views in 3d video | |
JP5241500B2 (en) | Multi-view video encoding and decoding apparatus and method using camera parameters, and recording medium on which a program for performing the method is recorded | |
JP4755565B2 (en) | Stereoscopic image processing device | |
US8780173B2 (en) | Method and apparatus for reducing fatigue resulting from viewing three-dimensional image display, and method and apparatus for generating data stream of low visual fatigue three-dimensional image | |
US20090315980A1 (en) | Image processing method and apparatus | |
US8922629B2 (en) | Image processing apparatus, image processing method, and program | |
KR101362941B1 (en) | Method and Apparatus for decoding metadata used for playing stereoscopic contents | |
US20090315981A1 (en) | Image processing method and apparatus | |
US20130063576A1 (en) | Stereoscopic intensity adjustment device, stereoscopic intensity adjustment method, program, integrated circuit and recording medium | |
RU2462771C2 (en) | Device and method to generate and display media files | |
JP2009044722A (en) | Pseudo-3D-image generating device, image-encoding device, image-encoding method, image transmission method, image-decoding device and image-decoding method | |
US20090199100A1 (en) | Apparatus and method for generating and displaying media files | |
US8824857B2 (en) | Method and apparatus for the varied speed reproduction of video images | |
EP2949121B1 (en) | Method of encoding a video data signal for use with a multi-view stereoscopic display device | |
US20130070052A1 (en) | Video processing device, system, video processing method, and video processing program capable of changing depth of stereoscopic video images | |
US10037335B1 (en) | Detection of 3-D videos | |
US20090317062A1 (en) | Image processing method and apparatus | |
AU2013216395A1 (en) | Encoding device and encoding method, and decoding device and decoding method | |
CN112204960A (en) | Method of transmitting three-dimensional 360-degree video data, display apparatus using the same, and video storage apparatus using the same | |
EP2688303A1 (en) | Recording device, recording method, playback device, playback method, program, and recording/playback device | |
US9210406B2 (en) | Apparatus and method for converting 2D content into 3D content |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: SAMSUNG ELECTRONICS CO., LTD., KOREA, REPUBLIC OF Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:JUNG, KIL-SOO;CHUNG, HYUN-KWON;LEE, DAE-JONG;REEL/FRAME:022876/0090 Effective date: 20090622 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |