WO1998047097A1 - Method for performing stereo matching to recover depths, colors and opacities of surface elements


Info

Publication number
WO1998047097A1
Authority
WO
WIPO (PCT)
Prior art keywords
color
estimates
values
cells
space
Application number
PCT/US1998/007297
Other languages
French (fr)
Inventor
Richard Stephen Szeliski
Polina Golland
Original Assignee
Microsoft Corporation
Application filed by Microsoft Corporation
Publication of WO1998047097A1

Classifications

    • G06T 7/593 - Image analysis; depth or shape recovery from multiple images; from stereo images
    • G06V 10/10 - Image or video recognition or understanding; image acquisition
    • G06T 2207/10012 - Indexing scheme for image analysis; stereo images
    • G06V 2201/12 - Indexing scheme; acquisition of 3D measurements of objects
    • H04N 13/111 - Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
    • H04N 13/117 - Virtual viewpoint locations selected by the viewers or determined by viewer tracking
    • H04N 13/15 - Processing image signals for colour aspects of image signals
    • H04N 13/243 - Image signal generators using stereoscopic image cameras using three or more 2D image sensors
    • H04N 13/286, 13/289 - Image signal generators having separate monoscopic and stereoscopic modes; switching between monoscopic and stereoscopic modes
    • H04N 13/296 - Synchronisation and control of image signal generators
    • H04N 13/324 - Image reproducers; colour aspects
    • H04N 13/366 - Image reproducers using viewer tracking
    • H04N 13/398 - Synchronisation and control of image reproducers
    • H04N 2013/0081 - Stereoscopic image analysis; depth or disparity estimation from stereoscopic image signals

Definitions

  • This stereo matching method addresses many of the problems with conventional stereo methods outlined above. For example, it provides an improved method for recovering colors and disparities in partially occluded regions. It also deals with pixels containing mixtures of foreground and background colors more effectively. This method can also provide more accurate color and opacity estimates, which can be used to extract foreground objects, and mix live and synthetic imagery with fewer visible errors.
  • Fig. 1 is a general block diagram of a computer system that serves as an operating environment for our stereo matching method.
  • Fig. 2 is a general flow diagram illustrating a stereo method for estimating disparities, colors and opacities.
  • Figs. 3A-C are diagrams illustrating an example of the orientation and position of input cameras relative to a general disparity space.
  • Fig. 4 is a more detailed flow diagram illustrating methods for estimating a disparity surface.
  • Fig. 5 is a general flow diagram illustrating a method for refining estimates of a disparity surface.
  • Fig. 6 is a flow diagram illustrating a method for re-projecting values in disparity space to the input cameras.
  • Fig. 7 is a flow diagram illustrating a method for improving disparity estimates using a visibility map.
  • Fig. 8 is a flow diagram illustrating a method for re-projecting values from disparity space to the input cameras to compute error.
  • Fig. 9 is a flow diagram illustrating a method for refining color and transparency estimates.
  • the invention provides a method for simultaneously recovering disparities, colors and opacities of visible surface elements from two or more input images.
  • Fig. 1 is a general block diagram of a computer system that serves as an operating environment for an implementation of the invention.
  • the computer system 20 includes as its basic elements a computer 22, one or more input devices 24 and one or more output device 26 including a display device.
  • Computer 22 has a conventional system architecture including a central processing unit (CPU) 28 and a memory system 30, which communicate through a bus structure 32.
  • CPU 28 includes an arithmetic logic unit (ALU) 33 for performing computations, registers 34 for temporary storage of data and instructions and a control unit 36 for controlling the operation of computer system 20 in response to instructions from a computer program such as an application or an operating system.
  • ALU arithmetic logic unit
  • the computer can be implemented using any of a variety of known architectures and processors including an x86 microprocessor from Intel and others, such as Cyrix, AMD, and Nexgen, and the PowerPC from IBM and Motorola.
  • Input device 24 and output device 26 are typically peripheral devices connected by bus structure 32 to computer 22.
  • Input device 24 may be a keyboard, pointing device, pen, joystick, head tracking device or other device for providing input data to the computer.
  • a computer system for implementing our stereo matching methods receives input images from input cameras connected to the computer via a digitizer that converts pictures into a digital format.
  • the output device 26 represents a display device for displaying images on a display screen as well as a display controller for controlling the display device.
  • the output device may also include a printer, sound device or other device for providing output data from the computer.
  • Memory system 30 generally includes high speed main memory 38 implemented using conventional memory media such as random access memory (RAM) and read only memory (ROM) semiconductor devices, and secondary storage 40 implemented in media such as floppy disks, hard disks, tape, CD ROM, etc. or other devices that use optical, magnetic or other recording material.
  • Main memory 38 stores programs such as a computer's operating system and currently running application programs.
  • the operating system is the set of software which controls the computer system's operation and the allocation of resources.
  • the application programs are the set of software that performs a task desired by the user, making use of computer resources made available through the operating system.
  • portions of main memory 38 may also be used as a frame buffer for storing digital image data displayed on a display device connected to the computer 22.
  • the operating system commonly provides a number of functions such as process/thread synchronization, memory management, file management through a file system, etc.
  • Fig. 1 is a block diagram illustrating the basic elements of a computer system; the figure is not intended to illustrate a specific architecture for a computer system 20. For example, no particular bus structure is shown because various bus structures known in the field of computer design may be used to interconnect the elements of the computer system in a number of ways, as desired.
  • CPU 28 may be comprised of a discrete ALU 33, registers 34 and control unit 36 or may be a single device in which one or more of these parts of the CPU are integrated together, such as in a microprocessor. Moreover, the number and arrangement of the elements of the computer system may be varied from what is shown and described in ways known in the computer industry.
  • our stereo method can be implemented using digital logic circuitry.
  • steps in our method could be implemented in special purpose digital logic circuitry to compute results more quickly and efficiently.
  • the first step 100 of the method is to formulate a representation of the 3D working volume of interest.
  • This representation is called a generalized disparity space, a projective sampling of 3D space represented by an array of (x,y,d) cells.
  • the x and y axes represent a two-dimensional array of samples in each disparity plane, d.
  • Fig. 3A is an example of a working volume 110 showing a 3D foreground object 112 in front of a flat background object 114.
  • Figs. 3B-C are top and side views of the working volume, showing the position of the foreground object 112 relative to the background object.
  • To produce the input images several cameras 116-122 are positioned around the object. The cameras each capture an input image depicting the object against the flat background. The cameras transfer this input image to a computer via a camera/computer interface. Depending on the camera, this interface may convert the image into a format compatible with the computer and the stereo method running in the computer.
  • the images can be monochrome or color. In both cases, the digitized form of each input image consists of a two dimensional array of pixels. For monochrome images, each pixel represents a gray-scale value, while for color images, each pixel represents a color triplet such as Red, Green, and Blue values.
  • Figs. 3 A-C show an example of a generalized disparity space, superimposed onto the working volume.
  • the first step is to choose a virtual camera position and orientation.
  • the second step is to choose the orientation and spacing of the disparity planes.
  • the virtual camera 130 provides a frame of reference for the working volume and defines how each of the images map into the cells of the generalized disparity space.
  • the generalized disparity space can be viewed as a series of disparity planes, each forming an (x,y) coordinate system, and each projecting into the virtual camera 130.
  • the generalized disparity space is comprised of regularly spaced (x,y,d) cells, shown as the intersections of the 3D grid.
  • the (x,y,d) axes are orthogonal and evenly sampled.
  • the spacing of the cells is more clearly illustrated in the top and side views (Figs. 3B-C).
  • the virtual camera 130 is considered to be spaced at an infinite distance away from the working volume such that the rays emanating from the virtual camera and all (x,y) columns in the d dimension are parallel.
  • This drawing shows only one example of a generalized disparity space. It is important to emphasize that this space can be any projective sampling of 3D space in the working volume of interest.
  • the virtual camera can be located at the same position (coincident) with any of the input cameras, or at some other location. As another alternative, one can also choose a skewed camera model.
  • as for the d planes, the relationship between d and depth can be projective. For example, one could choose d to be inversely proportional to depth, which is the usual meaning of disparity.
  • the virtual camera's position and the disparity plane spacing and orientation can be represented in a single 4 x 4 matrix M_0, which represents a mapping from world coordinates X = (X, Y, Z, 1) to general disparity coordinates x_0 = (x, y, d, 1), x_0 ≅ M_0 X.
  • similarly, let x_k = (u, v, 1) be the screen coordinates of the k-th input camera, related to world coordinates by that camera's matrix M_k, x_k ≅ M_k X.
  • the quantities on each side of the expression are "equal" in the sense that they are equal up to a scale.
  • the stereo matching method shown in Fig. 2 re-samples the input images by mapping the pixels in each of the input images to the cells in the generalized disparity space. This part of the method is illustrated as step 140 in Fig. 2.
  • mapping of k input images to a given cell produces k values for each cell.
  • For color images, for example, there are k color triplets for each cell. These color values can be thought of as the color distributions at a given location in the working volume. The color distributions with the smallest variance are more likely to represent matching pixels in the input images, and thus are more likely to be located at a visible surface element on the object surface depicted in the input images.
  • using the mappings between world coordinates and screen coordinates, we can define a mapping between a pixel in an input image and an (x,y,d) cell as x_k ≅ M_k M_0^-1 x_0 = H_k x_0 + t_k d (1), where x_0 = (x, y, 1) is the 2D disparity space coordinate without the d component, H_k is the homography relating the virtual camera's image to input image k at d = 0, and t_k is the image of the virtual camera's center of projection in image k (i.e., the epipole).
  • This mapping can be implemented so that it first rectifies an input image and then re-projects it into a new disparity plane d, where the focus of expansion is the rectified epipole H_k^-1 t_k and the new homography represents a simple shift and scale. It has been shown that the first two terms of t_k depend on the horizontal and vertical displacements between the virtual camera 130 and the k-th camera, whereas the third element is proportional to the displacement in depth (perpendicular to the d plane). Thus, if all the cameras are coplanar (regardless of their vergence), and if the d planes are parallel to the common plane, then the re-mappings of the rectified images to a new disparity correspond to pure shifts.
  • the current implementation of this method uses bilinear interpolation of the pixel colors and opacities. More precisely, for each location (x, y, d, k), the value of x_k = (u, v, 1) is computed using equation (1).
  • x_k generally has non-integer coordinates, so the 4 color values surrounding the x_k location are extracted and blended using the bilinear formula (1 - α)(1 - β) c(i, j) + (1 - α)(β) c(i, j+1) + (α)(1 - β) c(i+1, j) + (α)(β) c(i+1, j+1) (5), where i and j are the largest integers less than or equal to u and v, and α and β are the fractional parts of u and v.
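  • The following sketch illustrates this per-plane resampling with NumPy, assuming the 3 x 3 homography H_k and epipole t_k of camera k are already known; the function name and array layout are illustrative and not taken from the patent.

```python
import numpy as np

def warp_image_to_disparity_plane(image, H_k, t_k, d, out_shape):
    """Resample one input image onto the (x, y) grid of a single disparity
    plane d, using the mapping x_k ~ H_k x_0 + t_k d (equation (1)) and
    bilinear interpolation (equation (5)).

    image     : (rows, cols, channels) float array, the k-th input image
    H_k       : 3x3 homography relating the virtual camera to camera k at d = 0
    t_k       : length-3 epipole (image of the virtual camera's center of projection)
    d         : scalar disparity of this plane
    out_shape : (height, width) of the disparity-space sampling grid
    """
    h, w = out_shape
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))           # x = column, y = row
    x0 = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T

    # Per-plane homography H_k + t_k [0, 0, d], reading t_k [0, 0, d] as the
    # outer product of the epipole with the row vector [0, 0, d].
    M = H_k + np.outer(np.asarray(t_k, dtype=float), np.array([0.0, 0.0, d]))
    uvw = M @ x0
    u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]

    # Bilinear blend of the four surrounding pixels (clamped at the borders).
    i = np.clip(np.floor(u).astype(int), 0, image.shape[1] - 2)
    j = np.clip(np.floor(v).astype(int), 0, image.shape[0] - 2)
    a = np.clip(u - i, 0.0, 1.0)[:, None]    # fractional part of u
    b = np.clip(v - j, 0.0, 1.0)[:, None]    # fractional part of v
    c = ((1 - a) * (1 - b) * image[j, i] +
         (1 - a) * b * image[j + 1, i] +
         a * (1 - b) * image[j, i + 1] +
         a * b * image[j + 1, i + 1])
    return c.reshape(h, w, -1)
```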
  • opacities are selected based on an initial estimate of the disparity surface and then refined.
  • the bilinear interpolation of opacities refers to both the initial estimates and subsequent refined values.
  • the next step 142 is to compute statistics on the k samples of each cell in an attempt to find the cell in each (x,y) column that is most likely to lie on a visible surface element. Stated generally, this step computes the probabilities that cells represent visible surface elements based on the distribution of the pixel values at each cell.
  • the method looks at the statistics (e.g., the mean and variance) computed for the k color samples at each (x,y,d) cell in the (x,y) column to find a cell that is most likely to reside on a visible surface element.
  • the method includes computing the mean color and variance for the k samples.
  • One way to select the "winning" cell or winning disparity is to select the cell with the lowest variance. Certainly other statistical analyses can be used to select a winning disparity value.
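  • As a rough illustration of this statistical step, the sketch below computes the per-cell mean and variance over the k warped color samples using NumPy; the array layout is an assumption made for the example, not a format prescribed by the patent.

```python
import numpy as np

def cell_statistics(samples):
    """Mean and variance of the k color samples gathered at each cell.

    samples : (k, d_levels, height, width, channels) array holding the colors
              c(x, y, d, k) obtained by warping the k input images into every
              disparity plane.
    Returns (mean, variance); the variance is summed over the color channels
    so each (x, y, d) cell gets a single scatter value.
    """
    mean = samples.mean(axis=0)                          # (d, h, w, channels)
    var = ((samples - mean) ** 2).mean(axis=0).sum(-1)   # (d, h, w)
    return mean, var

# A low-variance cell is one whose k samples agree, i.e. a likely match, so
# var.argmin(axis=0) picks a candidate winning disparity for each (x, y) column.
```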
  • One way to improve the likelihood that the method does choose the correct disparity at each (x,y) column is to perform additional processing.
  • the method looks at other evidence such as the statistics at neighboring cells to increase the likelihood of finding the correct disparity.
  • the color values of the k input images at the winning disparity should have zero variance.
  • image noise, fractional disparity shifts, and photometric variations make it unlikely that a cell will have zero variance.
  • the variance will also be arbitrarily high in occluded regions, such as the area where the foreground object occludes the background object in Figs. 3A-C. In this portion of the working volume, occluded pixels will have an impact on the selection of a disparity level, leading to gross errors.
  • One way to disambiguate matches is to aggregate evidence.
  • Conventional techniques for aggregating evidence can use either two-dimensional support regions at a fixed disparity (favoring front-to-parallel surfaces), or three-dimensional support regions in (x,y,d) space (allowing slanted surfaces).
  • Two-dimensional evidence aggregation has been done using both fixed square windows (traditional) and windows with adaptive sizes.
  • Three-dimensional support functions include a limited disparity gradient, Prazdny's coherence principle (which can be implemented using two diffusion processes), and iterative (non-linear) evidence aggregation.
  • Selecting disparities, colors and opacities
  • the next step 146 in the method is to select disparities, colors and opacities based on the statistical analysis from step 142 and/or aggregating evidence of step 144. It is significant to note that the method selects opacity values, in addition to disparities and colors.
  • the defined criterion measures how likely it is that a cell is located at a visible surface element.
  • This criterion is a test to determine whether the variance (the scatter in color values) is below a threshold. Another part of this criterion can be the extent to which the disparity of the cell is more likely to represent a visible surface element relative to other cells in the column.
  • the initial estimates do not have to be binary opacities.
  • the initial estimate can be set to 1 for cells that satisfy the defined criterion and, for cells that do not, to a value between 0 and 1 based on how close the cell comes to satisfying the criterion.
  • the initial estimates on color values are also based on the statistics.
  • the method can include selecting the mean color computed from the k pixels for each cell as the initial color estimate for a cell.
  • the method can create a new (x,y,d) volume based on the statistics computed for the k pixels at each cell and the aggregating evidence. To set up the volume for a refinement stage, one implementation of the method sets the colors to the mean value and sets the opacity to one for cells meeting the criteria, and 0 otherwise.
  • one alternative is to stop processing and use these elements as the output of the stereo method.
  • a preferred approach is to use the disparities, colors and opacities as initial estimates and then refine these estimates as generally reflected in step 148.
  • the initial opacity values can be used in a refinement process that simultaneously estimates disparities, colors, and opacities which best match the input images while conforming to some prior expectations on smoothness.
  • An alternative refining process is to take the binary opacities and pass them through a low-pass filter to smooth the discontinuities between opaque and transparent portions. Another possibility is to recover the opacity information by looking at the magnitude of the intensity gradient, assuming that the stereo method can sufficiently isolate regions which belong to different disparity levels.
  • Fig. 4 is a flow diagram illustrating this technique.
  • the slanted blocks correspond to a representation of data in the computer, whereas the rectangular blocks (operator blocks) correspond to an operation on the data.
  • the technique for estimating an initial disparity surface begins by taking each k-th input image c_k(u,v) 160 as input and performing a warp on it to transform the pixels in the input image to general disparity space.
  • the warp operator 162 samples the pixels in the k input images to populate the 4D (x,y,d,k) space. This operation produces color triplets c(x,y,d,k) 164 in the 4D (x,y,d,k) space.
  • the next step 166 in this method is to compute the mean and variance for the k color triplets at each cell in generalized disparity space.
  • This particular implementation of the method uses the variance 170 as input to a stage for aggregating evidence (172).
  • the method then aggregates evidence using a variation of the technique described in D. Scharstein and R. Szeliski. Stereo matching with non-linear diffusion. In Computer Vision and Pattern Recognition (CVPR'96), pages 343-350, San Francisco, California, June 1996.
  • this technique for aggregating evidence involves looking at evidence computed for neighboring pixels to improve the confidence of selecting a d value in each (x,y) column that lies on a visible surface element.
  • One possible implementation diffuses the evidence among neighboring cells, as in the sketch below.
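  • The following is a deliberately simplified, linear version of such a diffusion step, shown for illustration only; the cited work uses a non-linear diffusion, and the iteration count and blending weight here are arbitrary choices.

```python
import numpy as np

def aggregate_evidence(var, iterations=5, lam=0.25):
    """Diffuse per-cell evidence (here, the color variance) within each
    disparity plane so that neighboring cells support one another.

    var : (d_levels, height, width) array of per-cell variances.
    Each iteration blends a cell's value with the average of its four
    neighbors in the same d plane (edges wrap around for brevity).
    """
    e = var.astype(float).copy()
    for _ in range(iterations):
        neighbors = (np.roll(e, 1, axis=1) + np.roll(e, -1, axis=1) +
                     np.roll(e, 1, axis=2) + np.roll(e, -1, axis=2)) / 4.0
        e = (1.0 - lam) * e + lam * neighbors
    return e
```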
  • the method selects binary opacities in each (x,y) column based on criteria indicating whether or not a given cell is likely to correspond to a visible surface element.
  • the objective is to find a clear winner in each column by using a demanding test that a cell must satisfy in order to be assigned an initial opacity of one (totally opaque).
  • the criteria in this implementation include both a threshold on variance and a requirement that the disparity is a clear winner with respect to other disparities in the same (x,y) column.
  • the threshold can be made proportional to the local variation within an n x n window (e.g., 3 x 3).
  • the initial estimates include: c(x,y,d), the mean colors for each cell (178); d(x,y), the winning disparity for each column (180); and a(x,y,d), binary opacities in the 3D general disparity space (182).
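  • A minimal sketch of this selection step follows; the variance threshold and the "clear winner" margin are illustrative parameters rather than values taken from the patent, and the per-cell mean colors (from the statistics sketch above) serve as the initial color estimates.

```python
import numpy as np

def select_initial_estimates(evidence, var_threshold, margin=1.5):
    """Pick a winning disparity per (x, y) column and assign binary opacities.

    evidence      : (d_levels, h, w) aggregated variance; lower values mean the
                    k color samples agree better at that cell
    var_threshold : cutoff the winning cell's evidence must fall below
    margin        : factor by which the winner must beat the runner-up

    Returns (winning_d, opacity): the winning disparity for each column and a
    (d_levels, h, w) volume of binary opacities, 1 for cells judged to lie on
    a visible surface element and 0 elsewhere.
    """
    winning_d = evidence.argmin(axis=0)                         # (h, w)
    sorted_e = np.sort(evidence, axis=0)
    best, second = sorted_e[0], sorted_e[1]
    clear = (best < var_threshold) & (best * margin < second)   # demanding test

    opacity = np.zeros_like(evidence, dtype=float)
    rows, cols = np.nonzero(clear)
    opacity[winning_d[clear], rows, cols] = 1.0
    return winning_d, opacity
```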
  • Fig. 5 is a flow diagram generally depicting a method for refining the initial estimates of the disparities, colors and opacities.
  • the first step 200 summarizes the approach for arriving at the initial estimates as described in detail above.
  • the re-projection makes use of the mapping x_k ≅ M_k M_0^-1 x_0 between a camera's screen coordinates and a coordinate (the location of a cell) in general disparity space.
  • Re-projecting the initial estimates generally includes a transformation (or warp) and a compositing operation.
  • a disparity plane can be warped into a given input camera's frame and then composited with other warped data using the estimated opacities to compute accumulated color values in the camera's frame. This re-projection step is generally reflected in block 202 of Fig. 5.
  • the re-projected values can then be compared with the pixels in the original input images 206 to compute the difference between re-projected pixels and the original pixels at corresponding pixel locations in the input images.
  • the error computed in this step can then be used to adjust the estimates as shown generally in step 208.
  • the specific criteria used to adjust the estimates can vary. Another criteria for adjusting the estimates is to make adjustments to the color and opacity values to improve continuity of the color and opacity values in screen space.
  • the stereo matching method can either stop with the current estimates or repeat the steps of re-projecting current estimates, computing the error, and then adjusting estimates to reduce the error.
  • This iterative process is one form of refining the estimates and simultaneously computing disparities, opacities and colors.
  • Fig. 6 is a flow diagram illustrating a specific implementation of the re-projection stage
  • the input to the re- projection stage includes both color estimates, c(x,y,d) (220), and opacity estimates, a(x,y,d) (222), in general disparity space (the (x,y,d) volume).
  • the re-projection stage views the (x,y,d) volume as a set of potentially transparent acetates stacked at different d levels.
  • c_k(u,v,d) = W_h(c(x,y,d); H_k + t_k [0,0,d]) and a_k(u,v,d) = W_h(a(x,y,d); H_k + t_k [0,0,d]), where c is the current color estimate [R G B] and a is the current opacity estimate at a given (x,y,d) cell, c_k and a_k are the resampled layer d in camera k's coordinate system, and W_h is the resampling operator derived from this homography.
  • the warping function is linear in the colors and opacities being resampled, i.e., the resampled colors and opacities can be expressed as a linear function of the current color and opacity estimates through sparse matrix multiplication.
  • the warp portion of the re-projection stage is represented as block 224.
  • the resampled layer is represented by c_k and a_k (226, 228).
  • the re-projection stage composites the resampled layers in back-to-front order, namely, from the minimum d layer (maximum depth) to the maximum d layer (minimum depth) relative to the camera, where the maximum d layer is closest to the camera.
  • the result of the compositing operation 230 is a re-projected image layer, including opacity (232).
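  • The sketch below shows one way to implement this back-to-front compositing with the standard over operation, assuming the colors are not premultiplied by opacity; the array layout and function name are illustrative.

```python
import numpy as np

def composite_layers(colors, alphas):
    """Composite the resampled disparity layers of one camera in back-to-front
    order ("over" operation).

    colors : (d_levels, h, w, channels) resampled colors c_k(u, v, d), with
             index 0 holding the minimum d (farthest) layer
    alphas : (d_levels, h, w) resampled opacities a_k(u, v, d)

    Returns the re-projected image and its accumulated opacity.
    """
    out_c = np.zeros(colors.shape[1:])
    out_a = np.zeros(alphas.shape[1:])
    for c, a in zip(colors, alphas):           # back (min d) to front (max d)
        a3 = a[..., None]
        out_c = a3 * c + (1.0 - a3) * out_c    # closer layer covers what lies behind
        out_a = a + (1.0 - a) * out_a
    return out_c, out_a
```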
  • One way to refine the disparity estimates is to prevent visible surface pixels from voting for potential disparities in the regions they occlude. To accomplish this, we build a (x, y, d, k) visibility map, which indicates whether a given camera k can see a voxel (cell in disparity space) at location (x, y, d).
  • the process of computing the visibility includes finding the topmost opaque pixel for each (u,v) column in the resampled layers of each input camera.
  • the visibility values can be derived directly from the resampled opacity values, as follows.
  • a visibility map can be computed by taking each layer of resampled opacity in front-to-back order and computing the visibility as V_k(u,v,d) = Π_{d' > d} (1 - a_k(u,v,d')) (9), with the initial visibilities set to 1, V_k(u,v,d_max) = 1.
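  • A small sketch of this visibility computation is given below, assuming the resampled opacity layers are stacked with index 0 as the farthest (minimum d) layer; the array layout is an assumption made for the example.

```python
import numpy as np

def visibility_map(alphas):
    """Visibility V_k(u, v, d) of each resampled cell from one camera.

    alphas : (d_levels, h, w) resampled opacities a_k(u, v, d), index 0 = min d
             (farthest), last index = max d (closest).

    V_k(u, v, d) is the product over all nearer layers d' > d of
    (1 - a_k(u, v, d')); the closest layer is fully visible (V = 1).
    """
    d_levels = alphas.shape[0]
    vis = np.ones_like(alphas, dtype=float)
    for d in range(d_levels - 2, -1, -1):      # walk from front to back
        vis[d] = vis[d + 1] * (1.0 - alphas[d + 1])
    return vis
```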
  • using these visibilities, the compositing operation can equivalently be expressed as a sum over the d layers of each layer's color weighted by its opacity and its visibility.
  • Fig. 7 is a flow diagram illustrating how the visibility map can be used to refine the initial disparity estimates.
  • the first step is to take the resampled opacity 240 and compute the visibility (242) to construct a visibility map V_k(u,v,d) (244) for each input camera.
  • the visibility data for each cell can be used to make more accurate estimates of colors and disparities.
  • the visibility data determines the weight that a color sample will have in computing the color distribution at an (x,y,d) cell. For example, a color sample with a higher visibility value will contribute more than a color sample with a low visibility value.
  • the visibility values provide additional information when computing the statistics for the color samples at a cell. Without the visibility data, pixel samples from occluding surfaces can cause gross errors. With visibility, these pixel samples have little or no contribution because the visibility value is very low or zero for this cell. This use of visibility tends to decrease the variance, even for mixed pixels, because the visibility controls the extent to which color from occluding and occluded surfaces contribute to the color distribution at a cell. Partially occluded surface elements will receive color contribution from input pixels that are not already assigned to nearer surfaces. Since the variance is lower, the mean colors are more accurate, and it is easier to identify a disparity in each column that most likely represents a visible surface element.
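  • The sketch below illustrates such visibility-weighted statistics: each camera's color sample is weighted by the visibility of the cell from that camera, so samples from cameras that cannot see the cell contribute little or nothing. The array shapes and the small epsilon are assumptions made for the example.

```python
import numpy as np

def weighted_cell_statistics(samples, visibility, eps=1e-6):
    """Visibility-weighted mean and variance of the color samples at each cell.

    samples    : (k, d, h, w, channels) colors gathered from the k cameras
    visibility : (k, d, h, w) visibility of each cell from each camera,
                 mapped back into general disparity space
    """
    w_vis = visibility[..., None]                        # broadcast over channels
    total = w_vis.sum(axis=0) + eps                      # avoid division by zero
    mean = (w_vis * samples).sum(axis=0) / total
    var = (w_vis * (samples - mean) ** 2).sum(axis=0) / total
    return mean, var.sum(-1)
```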
  • Fig. 8 is a flow diagram illustrating a method for computing error between the re-projected color samples and the input color samples at corresponding pixel locations. This diagram is quite similar to Fig. 6, and therefore does not require much elaboration.
  • the task of computing error begins with the current color (260) and opacity estimates (262) in disparity space. These estimates are first warped (264) into the (u,v,d) space of the respective input cameras to produce resampled d layers with color and opacity (266, 268). Next, a compositing operator (over, 270) combines the resampled layers into a resampled image layer, including color (272) and opacity (274).
  • the resampled color values 272 are then compared with the input color values 276 at corresponding pixel locations (u,v) to compute error values at the pixel locations (u,v). This step is represented as the color difference block 278 in Fig. 8, which produces error values 280 for each (u,v) location for each of the input cameras, k.
  • Fig. 9 illustrates an implementation of adjusting color and opacity elements in the refining stage using a gradient descent approach.
  • the refining stage computes the first element of the cost function from the error values e_k(u,v) (290), the accumulated color and opacity in each d layer (292), and the visibility in each d layer, V_k(u,v,d) (294).
  • Fig. 8 illustrates how to compute these error values in more detail.
  • the refining stage computes the accumulated color and opacity values 292 for each d layer by multiplying the color and opacity values for each layer by the corresponding visibility for that layer.
  • Fig. 7 and the accompanying description above provide more detail on how to compute the visibility for each layer.
  • the refining stage computes the gradient and Hessian using the error values, the accumulated colors and opacities, and the visibilities for each resampled layer, k. More specifically, the refining stage first computes the gradient and the diagonal of the Hessian for the cost C_1 with respect to the resampled colors in (u,v,d) space.
  • the derivative of C_1 can be computed by expressing the resampled colors and opacities in terms of the current color and opacity estimates, using the fact that the warping function is linear in those estimates (as noted above).
  • the gradient and Hessian of C_1 in (u,v,d) space are illustrated as data representations 300 and 302 in Fig. 9.
  • after the refining stage computes the derivatives with respect to the warped predicted (resampled estimate) color values, it transforms these values into disparity space. This can be computed by using the transpose of the linear mapping induced by the backward warp used in step 224 of Fig. 6. For certain cases the result is the same as warping the gradient and Hessian using the forward warp W_f. For many other cases (moderate scaling or shear), the forward warp is still a good approximation. As such, the Warp operator 304 can be represented as a forward warp applied to the gradient and Hessian.
  • the Warp operator transforms the gradient and Hessian of C_1 in (u,v,d) space to general disparity space.
  • the gradient and Hessian in general disparity space, g_1 and h_1, are illustrated by data representations 306 and 308 in Fig. 9.
  • the Hessian of the third cost function is constant, h_3(x,y,d) = [0 0 0 1].
  • the next step is to combine the gradients for each of the cost functions as shown in step 326.
  • the expressions for the combined gradient and Hessian 328, 330 are as follows:
  • g(x,y,d) = Σ_{k=1..K} g_1(x,y,d,k) + λ_2 g_2(x,y,d) + λ_3 g_3(x,y,d), and similarly h(x,y,d) = Σ_{k=1..K} h_1(x,y,d,k) + λ_2 h_2(x,y,d) + λ_3 h_3(x,y,d).
  • a gradient step can then be performed as follows: c(x,y,d) ← c(x,y,d) + ε_1 g(x,y,d) / (h(x,y,d) + ε_2). (23)
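  • A minimal sketch of this damped gradient step is shown below; the step size, the damping constant, and the final clamping of colors and opacities are illustrative choices rather than values specified in the patent.

```python
import numpy as np

def gradient_step(estimates, grad, hess, step=1.0, eps2=1e-3):
    """Damped gradient update of the color and opacity estimates (equation (23)).

    estimates : (d, h, w, 4) current [R, G, B, a] values in disparity space
    grad      : (d, h, w, 4) combined gradient g(x, y, d)
    hess      : (d, h, w, 4) combined (diagonal) Hessian h(x, y, d)

    Dividing by the diagonal Hessian plus a small constant scales the step for
    each element and keeps it finite where the curvature is near zero.
    """
    new = estimates + step * grad / (hess + eps2)
    new[..., 3] = np.clip(new[..., 3], 0.0, 1.0)   # keep opacities in [0, 1]
    new[..., :3] = np.maximum(new[..., :3], 0.0)   # keep colors non-negative
    return new
```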
  • This step adjusts the estimated color and opacity values to produce adjusted color and opacity values.
  • the adjustment values are then combined with the previous estimates of color and opacity to compute the adjusted color and opacity estimates.
  • the adjusted color and opacities can then be used as input to the re-projection stage, which computes estimated images from the adjusted color and opacity values.
  • computing the error between the re-projected images and input images can be repeated a fixed number of times or until some defined constraint is satisfied such as reducing the error below a threshold or achieving some predefined level of continuity in the colors and/or opacities.
  • the initial estimates of opacity do not have to be binary opacities, but instead, can be selected in a range from fully opaque to fully transparent based on, for example, the statistics (e.g., variances) or confidence values produced by aggregating evidence. Even assuming that the initial estimates are binary, these estimates can be refined using a variety of techniques such as passing the binary estimates through a low pass filter or using an iterative approach to reduce errors between re-projected images and the input images.

Abstract

A stereo matching method simultaneously recovers disparities, colors and opacities from input images to reconstruct 3-dimensional surfaces depicted in the input images. The method includes formulating a general disparity space by selecting a projective sampling of a 3D working volume (112), and then mapping the input images into cells in the general disparity space. After computing statistics on the color samples at each cell, the method computes visibility and then uses this visibility information to make more accurate estimates by giving samples not visible from a given camera less or no weight. The method uses the initial estimates as input to a refining process which tries to match re-projected image layers to the input images.

Description

METHOD FOR PERFORMING STEREO MATCHING TO RECOVER DEPTHS, COLORS AND OPACITIES OF SURFACE ELEMENTS
FIELD OF THE INVENTION
The invention relates generally to an image processing field called computer vision and more specifically relates to stereo matching within this field.
BACKGROUND OF THE INVENTION
Stereo matching refers generally to a method for processing two or more images in an attempt to recover information about the objects portrayed in the images. Since each image is only two dimensional, it does not convey the depth of the objects portrayed in the image relative to the camera position. However, it is possible to recover this depth information by processing two or more images of the same object taken from cameras located at different positions around the object. There are two primary elements to extracting depth information: 1) finding picture elements (pixels) in each image that correspond to the same surface element on an object depicted in each image; and 2) using triangulation to compute the distance between the surface element and one of the cameras. Knowing the camera position and the corresponding picture elements, one can trace a ray from each camera through corresponding picture elements to find the intersection point of the rays, which gives the location of a surface element in three-dimensional (3D) space. After computing this intersection point, one can then compute the distance or "depth" of the surface element relative to one of the cameras.
The difficult part of this method is finding matching picture elements in two or more input images. In the field of computer vision, this problem is referred to as stereo matching or stereo correspondence. Finding matching picture elements or "pixels" is difficult because many pixels in each image have the same color.
In the past, researchers have studied the stereo matching problem in an attempt to recover depth maps and shape models for robotics and object recognition applications. Stereo matching is relevant to these applications because it can be used to compute the distances or "depths" of visible surface elements relative to a camera from two or more input images. These depth values are analogous to the depths of surface elements on a 3D object (sometimes referred to as the z coordinate in an (x,y,z) coordinate system) in the field of computer graphics. Depth or "z" buffers are a common part of 3D graphics rendering systems used to determine which surface elements of 3D objects are visible while rendering a 3D scene into a two-dimensional image.
The term "disparity" is often used in the computer vision field and represents the change in position of a surface element on an object when viewed through different cameras positioned around the object. Since disparity is mathematically related to depth from the camera, it can be used interchangeably with depth. In other words, once one has determined disparity, it is trivial to convert it into a depth value.
A typical stereo matching algorithm will attempt to compute the disparities for visible surface elements. These disparities can be converted into depths to compute a depth map, an array of depth values representing the depth of visible surface elements depicted in an image.
Recently, depth maps recovered from stereo images have been painted with texture maps extracted from the input images to create realistic 3D scenes and environments for virtual reality and virtual studio applications. A "texture map" is another term commonly used in computer graphics referring to a method for mapping an image to the surface of 3D objects. This type of stereo matching application can be used to compute a 3D virtual environment from a video sequence. In a game, for example, this technology could be used to create the effect of "walking through" a virtual environment and viewing objects depicted in a video sequence from different viewing perspectives using a technique called view interpolation. View Interpolation refers to a method for taking one image and simulating what it would look like from a different viewpoint. In another application called z-keying, this technology can be used to extract depth layers of video objects and then insert graphical objects between the depth layers. For example, z-keying can be used to insert computer-generated animation in a live video sequence.
Unfortunately, the quality and resolution of most stereo algorithms is insufficient for these types of applications. Even isolated errors in the depth map become readily visible when synthetic graphical objects are inserted between extracted foreground and background video objects.
One of the most common types of errors occurs in stereo algorithms when they attempt to compute depth values at the boundary where a foreground object occludes a background object (the occlusion boundary). Some stereo algorithms tend to "fatten" depth layers near these boundaries, which causes errors in the depth map. Stereo algorithms based on variable window sizes or iterative evidence aggregation can in many cases reduce these types of errors. (T. Kanade and M. Okutomi. A stereo matching algorithm with an adaptive window: Theory and experiment. IEEE Trans. Patt. Anal. Machine Intell., 16(9):920-932, September 1994) (D. Scharstein and R. Szeliski. Stereo matching with non-linear diffusion. In Computer Vision and Pattern Recognition (CVPR '96), pages 343-350, San Francisco, California, June 1996). Another problem is that stereo algorithms typically only estimate disparity values to the nearest pixel, which is often not sufficiently accurate for tasks such as view interpolation.
While pixel level accuracy is sufficient for some stereo applications, it is not sufficient for challenging applications such as z-keying. Pixels lying near occlusion boundaries will typically be "mixed" in the sense that they contain a blend of colors contributed by the foreground and background surfaces. When mixed pixels are composited with other images or graphical objects, objectionable "halos" or "color bleeding" may be visible in the final image.
The computer graphics and special effects industries have faced similar problems extracting foreground objects in video using blue screen techniques. The term blue screen generally refers to a method for extracting an image representing a foreground object from the rest of an image. A common application of this technique is to extract the foreground image and then superimpose it onto another image to create special effects. For example, a video sequence of a spaceship can be shot against a blue background so that the spaceship's image can be extracted from the blue background and superimposed onto another image (e.g., an image depicting a space scene). The key to this approach is that the background or "blue screen" is comprised of a known, uniform color, and therefore, can be easily distinguished from the foreground image.
Despite the fact that the background color is known, blue screen techniques still suffer from the same problem of mixed pixels at the occlusion boundary of the foreground object (e.g., the perimeter of the spaceship in the previous example). To address the problems of mixed pixels in blue screen techniques, researchers in these fields have developed techniques for modeling mixed pixels as combinations of foreground and background colors. However, it is insufficient to merely label pixels as foreground and background because this approach does not represent a pixel's true color and opacity. The term "opacity" (sometimes referred to as "transparency" or "translucency") refers to the extent to which an occluded background pixel is visible through the occluding foreground pixel at the same pixel location. An image comprises a finite number of pixels arranged in a rectangular array. Each pixel, therefore, covers an area in two-dimensional screen coordinates. It is possible for sub-pixel regions of pixels at occlusion boundaries to map to surface elements at different depths (e.g., a foreground object and a background object). It is also possible for a pixel to represent a translucent surface such as a window that reflects some light and also allows light reflected from a background object to pass through it. In order for a pixel to represent the foreground and background colors accurately, it should represent the proper proportion of foreground and background colors in its final color values. The opacity value can be used to represent the extent to which a pixel is composed of colors from foreground and background surface elements.
As alluded to above, one way to approximate opacity is merely to assume some predefined blending factor for computing colors of mixed pixels. While this type of blending foreground and background colors can make errors at the occlusion boundaries less visible for some applications, it does not remove the errors and is insufficient for demanding applications such as z-keying.
Moreover, in the context of stereo matching, the background colors are usually not known. A stereo matching method has to attempt to distinguish background and foreground colors before "mixed" pixels can be computed.
SUMMARY OF THE INVENTION
The invention provides an improved stereo method that simultaneously recovers disparity, color, and opacity from two or more input images. In general, the objective is to use the input images to reconstruct 3D surfaces of the objects depicted in the input images. We sometimes refer to these surfaces as a collection of visible (or partially visible) surface elements. Our stereo method is designed to recover the disparities, colors and opacities of the visible surface elements. While we often refer to "colors," it is important to note that the invention applies to gray-scale images as well as color images using any of a variety of known color spaces.
The first stage of the method is to formulate a general disparity space. This generally involves selecting a projective sampling of a 3D working volume. The working volume is a space containing the objects depicted in the input images. In one particular implementation, this stage includes selecting the position and orientation of a virtual camera, and selecting the spacing and orientation of disparity planes. The end result is an (x,y,d) general disparity space, where d represents disparity (also referred to as a disparity layer or plane). The general disparity space serves as a common reference for each of the input cameras that generated the input images. The next stage is to make some initial estimates of colors and opacities in the general disparity space. This stage includes transforming the input images from their screen coordinates to the cells in the general disparity space. This stage can be implemented by sampling the input images to collect k colors corresponding to the k input images for each of the cells in the general disparity space. Within this stage, there are a number of ways to arrive at the initial estimates. In general, this stage computes statistics on the k colors for each cell, and then uses these statistics to arrive at initial color estimates. This stage also estimates disparities and opacities by using the statistics to pick cells that are most likely to reside on a visible surface element. For better results, this stage can also use evidence aggregated from neighboring samples to improve the statistical analysis (aggregating evidence).
One specific implementation computes statistics on k colors at each cell, including the mean and variance. This implementation then estimates a cell's color as the mean color. It selects a winning disparity for each (x,y) column in disparity space that has a variance below a threshold and is clearly more likely to be located at a visible surface element relative to other cells in the column. Finally, it estimates initial opacities by assigning binary opacities to cells, where cells that are clearly more likely to be on visible surface elements are initially set to totally opaque and other cells are set to totally transparent.
The next stage is to refine the initial estimates. One way to accomplish this is to compute visibility values and then use these visibility values to make better estimates of colors, opacities and disparities. Visibility is the extent to which an element in 3D space is visible from a given input camera location. One implementation of the method computes visibility values by projecting opacity estimates from general disparity space to the (u,v,d) space of each input camera, and then determining visibility at the (u,v,d) coordinates from the opacity values. The (u,v,d) space of an input camera is a 3D disparity space from the perspective of the kth input camera, where u and v are pixel coordinates in the input image and d is the disparity. The disparity d is untransformed when mapped backward or forward between general disparity space and the disparity space of an input camera.
The visibility information can then be associated with color values of the input images, mapped into the general disparity space. When associated with color samples collected at each cell, the visibility information can be used to compute weighted statistics, where a color sample has less weight if the location of the sample is not visible from the camera it comes from.
Another way to refine the estimates, which can be used in conjunction with visibility, is to use the initial estimates as input to an iterative refining process. In each iteration, the estimates can be projected back into the input cameras to compute re-projected images. The objective in this particular approach is to compute the error between the re-projected images and the input images, and use this error to adjust the current estimates. In addition to error, other costs or constraints can be used to determine how to adjust the current color and opacity estimates and make them more accurate. In particular, the process of re-projecting the estimates can include: 1) transforming disparity planes to the (u,v,d) coordinate space of the input cameras; and 2) compositing the layers to compute the re-projected image. The disparity planes are comprised of a rectangular array (x,y) of the current color and opacity estimates for the cells in a disparity plane, d. The iterative refining process can be implemented as a cost minimization problem, using the error and possibly other constraints as cost functions to determine how to adjust the current estimates.
This stereo matching method addresses many of the problems with conventional stereo methods outlined above. For example, it provides an improved method for recovering colors and disparities in partially occluded regions. It also deals with pixels containing mixtures of foreground and background colors more effectively. This method can also provide more accurate color and opacity estimates, which can be used to extract foreground objects, and mix live and synthetic imagery with fewer visible errors.
Further advantages and features will become apparent with reference to the following detailed description and accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1 is a general block diagram of a computer system that serves as an operating environment for our stereo matching method.
Fig. 2 is a general flow diagram illustrating a stereo method for estimating disparities, colors and opacities.
Figs. 3A-C are diagrams illustrating an example of the orientation and position of input cameras relative to a general disparity space.
Fig. 4 is a more detailed flow diagram illustrating methods for estimating a disparity surface.
Fig. 5 is a general flow diagram illustrating a method for refining estimates of a disparity surface.
Fig. 6 is a flow diagram illustrating a method for re-projecting values in disparity space to the input cameras.
Fig. 7 is a flow diagram illustrating a method for improving disparity estimates using a visibility map.
Fig. 8 is a flow diagram illustrating a method for re-projecting values from disparity space to the input cameras to compute error.
Fig. 9 is a flow diagram illustrating a method for refining color and transparency estimates.
DETAILED DESCRIPTION
The invention provides a method for simultaneously recovering disparities, colors and opacities of visible surface elements from two or more input images. Before describing this method, we begin by describing an operating environment for software implementations of the method. We then describe the implementation details of the method.
Computer Overview
Fig. 1 is a general block diagram of a computer system that serves as an operating environment for an implementation of the invention. The computer system 20 includes as its basic elements a computer 22, one or more input devices 24 and one or more output device 26 including a display device.
Computer 22 has a conventional system architecture including a central processing unit (CPU) 28 and a memory system 30, which communicate through a bus structure 32. CPU 28 includes an arithmetic logic unit (ALU) 33 for performing computations, registers 34 for temporary storage of data and instructions and a control unit 36 for controlling the operation of computer system 20 in response to instructions from a computer program such as an application or an operating system. The computer can be implemented using any of a variety of known architectures and processors including an x86 microprocessor from Intel and others, such as Cyrix, AMD, and Nexgen, and the PowerPC from IBM and Motorola.
Input device 24 and output device 26 are typically peripheral devices connected by bus structure 32 to computer 22. Input device 24 may be a keyboard, pointing device, pen, joystick, head tracking device or other device for providing input data to the computer. A computer system for implementing our stereo matching methods receives input images from input cameras connected to the computer via a digitizer that converts pictures into a digital format.
The output device 26 represents a display device for displaying images on a display screen as well as a display controller for controlling the display device. In addition to the display device, the output device may also include a printer, sound device or other device for providing output data from the computer.
Some peripherals such as modems and network adapters are both input and output devices, and therefore, incorporate both elements 24 and 26 in Fig. 1. Memory system 30 generally includes high speed main memory 38 implemented using conventional memory medium such as random access memory (RAM) and read only memory (ROM) semiconductor devices, and secondary storage 40 implemented in mediums such as floppy disks, hard disks, tape, CD ROM, etc. or other devices that use optical, magnetic or other recording material. Main memory 38 stores programs such as a computer's operating system and currently running application programs. The operating system is the set of software which controls the computer system's operation and the allocation of resources. The application programs are the set of software that performs a task desired by the user, making use of computer resources made available through the operating system. In addition to storing executable software and data, portions of main memory 38 may also be used as a frame buffer for storing digital image data displayed on a display device connected to the computer 22.
The operating system commonly provides a number of functions such as process/thread synchronization, memory management, file management through a file system, etc.
Below we describe software implementations of a stereo matching method in some detail. This software can be implemented in a variety of programming languages, which when compiled, comprises a series of machine-executable instructions stored on a storage medium readable by a computer ("computer readable medium"). The computer readable medium can be any of the conventional memory devices described above in connection with main memory and secondary storage. It should be understood that Fig. 1 is a block diagram illustrating the basic elements of a computer system; the figure is not intended to illustrate a specific architecture for a computer system 20. For example, no particular bus structure is shown because various bus structures known in the field of computer design may be used to interconnect the elements of the computer system in a number of ways, as desired. CPU 28 may be comprised of a discrete ALU 33, registers 34 and control unit 36 or may be a single device in which one or more of these parts of the CPU are integrated together, such as in a microprocessor. Moreover, the number and arrangement of the elements of the computer system may be varied from what is shown and described in ways known in the computer industry.
As an alternative to using a general purpose computer, our stereo method, or parts of it, can be implemented using digital logic circuitry. For example, steps in our method could be implemented in special purpose digital logic circuitry to compute results more quickly and efficiently.
Implementation of the Stereo Method
Having described the operating environment, we now focus on the implementation of our stereo method. We begin with an overview of the method, as illustrated in Fig. 2. Below, we introduce each of the steps shown in Fig. 2 and then describe each step in more detail.
Formulating a Generalized Disparity Space
In general, the first step 100 of the method is to formulate a representation of the 3D working volume of interest. This representation is called a generalized disparity space, a projective sampling of 3D space represented by an array of (x,y,d) cells. The x and y axes represent a two-dimensional array of samples in each disparity plane, d.
Before describing this general disparity space in more detail, it is helpful to consider an example of a working volume.
Fig. 3A is an example of a working volume 110 showing a 3D foreground object 112 in front of a flat background object 114. Figs. 3B-C are top and side views of the working volume, showing the position of the foreground object 112 relative to the background object. To produce the input images, several cameras 116-122 are positioned around the object. The cameras each capture an input image depicting the object against the flat background. The cameras transfer this input image to a computer via a camera/computer interface. Depending on the camera, this interface may convert the image into a format compatible with the computer and the stereo method running in the computer. The images can be monochrome or color. In both cases, the digitized form of each input image consists of a two dimensional array of pixels. For monochrome images, each pixel represents a gray-scale value, while for color images, each pixel represents a color triplet such as Red, Green, and Blue values.
Figs. 3A-C show an example of a generalized disparity space, superimposed onto the working volume. To formulate a disparity space, the first step is to choose a virtual camera position and orientation. The second step is to choose the orientation and spacing of the disparity planes. The virtual camera 130 provides a frame of reference for the working volume and defines how each of the images maps into the cells of the generalized disparity space. As shown, the generalized disparity space can be viewed as a series of disparity planes, each forming an (x,y) coordinate system, and each projecting into the virtual camera 130.
In this particular example, the generalized disparity space is comprised of regularly spaced (x,y,d) cells, shown as the intersections of the 3D grid. The (x,y,d) axes are orthogonal and evenly sampled. The spacing of the cells is more clearly illustrated in the top and side views (Figs. 3B-C). The virtual camera 130 is considered to be spaced at an infinite distance away from the working volume such that the rays emanating from the virtual camera and all (x,y) columns in the d dimension are parallel.
This drawing shows only one example of a generalized disparity space. It is important to emphasize that this space can be any projective sampling of 3D space in the working volume of interest. The virtual camera can be located at the same position (coincident) with any of the input cameras, or at some other location. As another alternative, one can also choose a skewed camera model.
Having chosen a virtual camera position, one can also choose the orientation and spacing of the constant d planes ("disparity planes"). The relationship between d and depth can be projective. For example, one could choose d to be inversely proportional to depth, which is the usual meaning of disparity.
The virtual camera's position and the disparity plane spacing and orientation can be represented in a single 4 x 4 matrix M_0, which represents a mapping from world coordinates X = (X, Y, Z, 1) to general disparity coordinates x_0 = (x, y, d, 1), x_0 = M_0 X. The inverse of this matrix maps coordinates in general disparity space to world coordinates, X = M_0^{-1} x_0. Let x_k = (u, v, 1) be the screen coordinates of the kth input camera. The camera matrix M_k is the mapping from world coordinates to the kth camera's screen coordinates, x_k = M_k X. In equations where the variables are expressed in homogeneous coordinates, the quantities on each side of the expression are "equal" in the sense that they are equal up to a scale.
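As a concrete illustration of these mappings, the short Python/NumPy sketch below builds an example M_0 for the virtual camera and an example 3 x 4 camera matrix M_k, then maps an (x,y,d) cell through world coordinates to the kth camera's screen coordinates. The particular matrix entries (focal length, image center, disparity-plane spacing) are assumptions chosen only for this example; they are not values prescribed by the method.

```python
import numpy as np

# Illustrative M0: maps world (X, Y, Z, 1) to general disparity coords (x, y, d, 1).
# Here x, y sample the X, Y axes directly and d is an assumed affine function of Z.
M0 = np.array([[1.0, 0.0, 0.0,  0.0],
               [0.0, 1.0, 0.0,  0.0],
               [0.0, 0.0, -0.5, 8.0],   # d decreases with depth Z (assumed spacing)
               [0.0, 0.0, 0.0,  1.0]])
M0_inv = np.linalg.inv(M0)

# Illustrative camera matrix Mk (3 x 4): world (X, Y, Z, 1) -> screen (u, v, 1) up to scale.
f = 500.0                                # assumed focal length in pixels
Mk = np.array([[f,   0.0, 320.0, 100.0],  # assumed intrinsics and translation
               [0.0, f,   240.0,   0.0],
               [0.0, 0.0,   1.0,  10.0]])

def disparity_cell_to_screen(x, y, d):
    """Map an (x, y, d) cell to the k-th camera's (u, v) screen coordinates."""
    x0 = np.array([x, y, d, 1.0])        # homogeneous disparity-space coordinate
    X = M0_inv @ x0                      # world coordinate X = M0^{-1} x0
    xk = Mk @ X                          # screen coordinate xk = Mk X (up to scale)
    return xk[:2] / xk[2]                # divide out the homogeneous scale

print(disparity_cell_to_screen(10.0, 20.0, 4.0))
```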
Transforming the Input Images into Generalized Disparity Space
In order to compute the k values for each cell, the stereo matching method shown in Fig. 2 re-samples the input images by mapping the pixels in each of the input images to the cells in the generalized disparity space. This part of the method is illustrated as step 140 in Fig. 2.
If one assumes the k input cameras are being sampled along a fictitious k dimension, general disparity space can be extended to a 4D space, (x,y,d,k) with k being the fourth dimension. The mapping of k input images to a given cell produces k values for each cell. For color images for example, there are k color triplets for each cell. These color values can be thought of as the color distributions at a given location in the working volume. The color distributions with the smallest variance are more likely to represent matching pixels in the input images, and thus, are more likely to be located at a visible surface element on the object surface depicted in the input images.
Using the expressions for mapping from screen coordinates to world coordinates, and from general disparity coordinates to world coordinates, we can define a mapping between a pixel in an input image and an (x,y,d) cell as:
x_k = M_k X = M_k M_0^{-1} x_0 = H_k x_0 + t_k d = [H_k + t_k [0,0,d]] x_0 (1)
where x_0 = (x, y, 1) is the 2D disparity space coordinate without the d component, H_k is the homography mapping relating the rectified and non-rectified versions of the input image k (i.e., the homography mapping for d = 0), and t_k is the image of the virtual camera's center of projection in image k (i.e., the epipole).
This mapping can be implemented so that it first rectifies an input image and then re-projects it into a new disparity plane d using:
x_k = H_k x_0' = H_k x_0 + t_k d (2)
where x_0' is the new coordinate corresponding to x_0 at d = 0. From this, x_0' = x_0 + t̂_k d = (I + t̂_k [0,0,d]) x_0 = Ĥ_k x_0 (3)
where t̂_k = H_k^{-1} t_k is the focus of expansion, and the new homography Ĥ_k represents a simple shift and scale. It has been shown that the first two terms of t_k depend on the horizontal and vertical displacements between the virtual camera 130 and the kth camera, whereas the third element is proportional to the displacement in depth (perpendicular to the d plane). Thus, if all the cameras are coplanar (regardless of their vergence), and if the d planes are parallel to the common plane, then the re-mappings of the rectified images to a new disparity correspond to pure shifts.
These expressions defining the mapping from an input image to a cell in disparity space can be used to populate each cell in disparity space with k color triplets. The computation of the colors at a given cell contributed by an input image k can be expressed as:
c(x,y,d,k) = W_f(c_k(u,v); H_k + t_k[0,0,d]) (4)
where c(x,y,d,k) is the pixel mapped into the generalized disparity space for input image k, c_k(u,v) is the kth input image, and W_f is the forward warping operator. Note that the color intensity values can be replaced with gray-scale values. The current implementation of this method uses bilinear interpolation of the pixel colors and opacities. More precisely, for each location (x, y, d, k), the value of x_k = (u, v, 1) is computed using equation (1). Since x_k is generally a floating point number, the 4 color values surrounding the x_k location are extracted and blended using the bilinear formula:
(1 - α)(1 - β)c(i,j) + (1 - α)β c(i,j+1) + α(1 - β)c(i+1,j) + αβ c(i+1,j+1) (5)
where i and j are the largest integers less than or equal to u and v, and α and β are the fractional parts of u and v. As explained further below, opacities are selected based on an initial estimate of the disparity surface and then refined. The bilinear interpolation of opacities refers to both the initial estimates and subsequent refined values.
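The sketch below shows how one disparity plane d could be populated from an input image using the per-plane homography H_k + t_k[0,0,d] of equation (1) and the bilinear blend of equation (5). The homography, epipole, and image used here are made-up example values, and the forward warp is written as a simple per-cell loop rather than an optimized resampler.

```python
import numpy as np

def bilinear_sample(image, r, c):
    """Bilinear blend of the four pixels around floating-point (row r, col c),
    using the same weighting as equation (5): the integer parts index the
    pixels and the fractional parts are the blend weights."""
    h, w = image.shape[:2]
    r = min(max(r, 0.0), h - 1.001)
    c = min(max(c, 0.0), w - 1.001)
    i, j = int(r), int(c)
    a, b = r - i, c - j
    return ((1 - a) * (1 - b) * image[i, j] + (1 - a) * b * image[i, j + 1]
            + a * (1 - b) * image[i + 1, j] + a * b * image[i + 1, j + 1])

def warp_to_disparity_plane(image_k, Hk, tk, d, width, height):
    """Populate one d plane of c(x, y, d, k) by mapping each (x, y) cell into
    image k with x_k = (H_k + t_k [0,0,d]) x_0 and sampling bilinearly."""
    Hd = Hk + np.outer(tk, np.array([0.0, 0.0, d]))     # per-plane homography
    plane = np.zeros((height, width, image_k.shape[2]))
    for y in range(height):
        for x in range(width):
            u, v, s = Hd @ np.array([x, y, 1.0])        # homogeneous image coords
            plane[y, x] = bilinear_sample(image_k, v / s, u / s)
    return plane

# Made-up example: identity homography, small epipole, random color image.
rng = np.random.default_rng(0)
img = rng.random((48, 64, 3))
Hk = np.eye(3)
tk = np.array([0.25, 0.0, 0.0])        # assumed epipole (pure horizontal shift)
layer = warp_to_disparity_plane(img, Hk, tk, d=2.0, width=64, height=48)
print(layer.shape)
```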
Computing Statistics Used to Estimate Colors, Disparities and Opacities
As shown in Fig. 2, the next step 142 is to compute statistics on the k samples of each cell in an attempt to find the cell in each (x,y) column that is most likely to lie on a visible surface element. Stated generally, this step computes the probabilities that cells represent visible surface elements based on the distribution of the pixel values at each cell. In the context of color images, the method looks at the statistics (e.g., the mean and variance) computed for the k color samples at each (x,y,d) cell in the (x,y) column to find a cell that is most likely to reside on a visible surface element. In one implementation, the method includes computing the mean color and variance for the k samples. One way to select the "winning" cell or winning disparity (i.e., the disparity value for the cell that is most likely to reside on a visible surface element based on the statistics) is to select the cell with the lowest variance. Certainly other statistical analyses can be used to select a winning disparity value.
Aggregating Evidence
One way to improve the likelihood that the method does choose the correct disparity at each (x,y) column is to perform additional processing. The method looks at other evidence such as the statistics at neighboring cells to increase the likelihood of finding the correct disparity. In theory, the color values of the k input images at the winning disparity should have zero variance. However, in practice, image noise, fractional disparity shifts, and photometric variations (e.g., specularities) make it unlikely that a cell will have zero variance. The variance will also be arbitrarily high in occluded regions, such as the area where the foreground object occludes the background object in Fig. 2. In this portion of the working volume, occluded pixels will have an impact on the selection of a disparity level, leading to gross errors. One way to disambiguate matches is to aggregate evidence. There are a number of known techniques for aggregating evidence to disambiguate matches. Conventional techniques for aggregating evidence can use either two-dimensional support regions at a fixed disparity (favoring fronto-parallel surfaces), or three-dimensional regions in (x,y,d) space (allowing slanted surfaces). Two-dimensional evidence aggregation has been done using both fixed square windows (traditional) and windows with adaptive sizes. Three-dimensional support functions include a limited disparity gradient, Prazdny's coherence principle (which can be implemented using two diffusion processes), and iterative (non-linear) evidence aggregation.
Selecting disparities, colors and opacities
The next step 146 in the method is to select disparities, colors and opacities based on the statistical analysis from step 142 and/or aggregating evidence of step 144. It is significant to note that the method selects opacity values, in addition to disparities and colors.
One way to make the initial estimate of opacities is to start with binary opacities such that: α=1 (totally opaque) corresponds to a cell in an (x,y) column meeting defined criteria, and α=0 (totally transparent) for all other cells. The defined criteria measures how likely a cell is located at a visible surface element. One example of this criteria is a test to determine whether the variance
(scatter in color values) is below a threshold. Another part of this criteria can be the extent to which the disparity of the cell is more likely to represent a visible surface element relative to other cells in the column.
The initial estimates do not have to be binary opacities. For example, the initial estimate can be set to 1 if some defined criteria is satisfied and some value less than one (between 1 and 0) based on how close the cell is to the defined criteria, for those cells that do not pass the criteria. The initial estimates on color values are also based on the statistics. For example, the method can include selecting the mean color computed from the k pixels for each cell as the initial color estimate for a cell. At the end of this step 146, the method can create a new (x,y,d) volume based on the statistics computed for the k pixels at each cell and the aggregating evidence. To set up the volume for a refinement stage, one implementation of the method sets the colors to the mean value and sets the opacity to one for cells meeting the criteria, and 0 otherwise.
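The following sketch illustrates one way these initial estimates could be computed from the k color samples gathered at each cell: the mean color is kept as the color estimate, the lowest-variance cell in each (x,y) column is taken as the winning disparity, and binary opacities are set to one only where the variance is below a threshold and the winner is clearly better than the runner-up. The threshold and margin values are assumptions for the example, not values taken from the method.

```python
import numpy as np

def initial_estimates(cell_colors, var_threshold=100.0, margin=1.5):
    """Initial color, disparity, and binary opacity estimates from the k color
    samples gathered at each (x, y, d) cell.  cell_colors has shape
    (D, H, W, K, 3); the threshold and margin are illustrative."""
    mean = cell_colors.mean(axis=3)                       # (D, H, W, 3) mean color
    var = cell_colors.var(axis=3).sum(axis=-1)            # (D, H, W) scatter of the k samples

    # Winner-take-all: the d with the smallest variance in each (x, y) column.
    d_win = var.argmin(axis=0)                            # (H, W)
    v_sorted = np.sort(var, axis=0)                       # per-column sorted variances
    if var.shape[0] > 1:
        clear_winner = v_sorted[0] * margin < v_sorted[1]
    else:
        clear_winner = np.ones_like(d_win, dtype=bool)
    win_var = np.take_along_axis(var, d_win[None], axis=0)[0]
    confident = (win_var < var_threshold) & clear_winner

    # Binary opacities: 1 at confident winners, 0 everywhere else.
    alpha = np.zeros(var.shape)
    h_idx, w_idx = np.nonzero(confident)
    alpha[d_win[h_idx, w_idx], h_idx, w_idx] = 1.0
    return mean, d_win, alpha

rng = np.random.default_rng(1)
samples = rng.random((8, 24, 32, 4, 3)) * 255             # D=8 planes, K=4 cameras
colors, disparity, alpha = initial_estimates(samples)
print(colors.shape, disparity.shape, alpha.mean())
```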
Refining Estimates of Disparities, Colors and Opacities
After selecting the disparities, colors and opacities, one alternative is to stop processing and use these elements as the output of the stereo method. A preferred approach, especially in view of the problem with mixed pixels at occlusion boundaries, is to use the disparities, colors and opacities as initial estimates and then refine these estimates as generally reflected in step 148. The initial opacity values can be used in a refinement process that simultaneously estimates disparities, colors, and opacities which best match the input images while conforming to some prior expectations on smoothness.
An alternative refining process is to take the binary opacities and pass them through a low pass filter to smooth the discontinuities between opaque and transparent portions. Another possibility is to recover the opacity information by looking at the magnitude of the intensity gradient, assuming that the stereo method can sufficiently isolate regions which belong to different disparity levels.
Having described the steps of the stereo method in general, we now describe a specific technique for computing initial estimates of colors, disparities, and opacities (steps 140-146 of Fig. 2). Fig. 4 is a flow diagram illustrating this technique. In this diagram, the slanted blocks correspond to a representation of data in the computer, whereas the rectangular blocks (operator blocks) correspond to an operation on the data.
The technique for estimating an initial disparity surface begins by taking each kth input image c_k(u,v) 160 as input and performing a warp on it to transform the pixels in the input image to general disparity space. The warp operator 162 samples the pixels in the k input images to populate the 4D (x,y,d,k) space. This operation produces color triplets c(x,y,d,k) 164 in the 4D (x,y,d,k) space.
The next step 166 in this method is to compute the mean and variance for the k color triplets at each cell in generalized disparity space. The mean calculation yields a color estimate c(x,y,d) = [R G B] for each of the cells in the general disparity space (represented as item 168 in Fig. 4).
This particular implementation of the method uses the variance 170 as input to a stage for aggregating evidence (172). The method then aggregates evidence using a variation of the technique described in D. Scharstein and R. Szeliski. Stereo matching with non-linear diffusion. In Computer Vision and Pattern Recognition (CVPR'96), pages 343-350, San Francisco, California, June 1996. In general, this technique for aggregating evidence involves looking at evidence computed for neighboring pixels to improve the confidence of selecting a d value in each (x,y) column that lies on a visible surface element. One possible implementation can be represented as follows:
[Equation (6), reproduced as an image in the published application, gives the update: at each iteration the aggregated variance of a cell is formed from its own clipped variance and the clipped variances of its four nearest neighbors, combined with the weights a, b and c.]
where σ_i^t is the variance of a pixel i at iteration t, σ̄_i = min(σ_i, σ_max) is a more robust (limited) version of the variance, and N_4 represents the four nearest neighbors. In a current implementation, we have chosen (a,b,c) = (0.1, 0.15, 0.3) and σ_max = 16. The result of aggregating evidence is a confidence value 174 for each cell in general disparity space, p(x,y,d).
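Because the exact aggregation update appears only as an image in the published application, the sketch below substitutes a simplified diffusion-style pass: each iteration mixes a cell's clipped variance with the clipped variances of its four in-plane neighbors using the quoted (a, b, c) and σ_max values. It illustrates the general idea of aggregating evidence, not the patented update itself.

```python
import numpy as np

def aggregate_variance(var, a=0.1, b=0.15, c=0.3, sigma_max=16.0, n_iters=5):
    """Simplified stand-in for diffusion-based evidence aggregation on one d
    plane of per-cell variances (shape (H, W)).  Each iteration blends a cell's
    clipped variance with its four nearest neighbors; not the exact update of
    the published method."""
    conf = var.copy()
    for _ in range(n_iters):
        clipped = np.minimum(conf, sigma_max ** 2)                 # robust (limited) variance
        # Four nearest neighbors within the plane (edge cells reuse themselves).
        up    = np.vstack([clipped[:1], clipped[:-1]])
        down  = np.vstack([clipped[1:], clipped[-1:]])
        left  = np.hstack([clipped[:, :1], clipped[:, :-1]])
        right = np.hstack([clipped[:, 1:], clipped[:, -1:]])
        neighbor_sum = up + down + left + right
        best_neighbor = np.minimum(np.minimum(up, down), np.minimum(left, right))
        conf = (1 - 4 * a - b) * clipped + a * neighbor_sum + b * (best_neighbor + c)
    return conf

rng = np.random.default_rng(2)
variance = rng.random((24, 32)) * 400.0        # one d plane of per-cell variances
print(aggregate_variance(variance).mean())
```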
At this point, the method selects binary opacities in each (x,y) column based on criteria indicating whether or not a given cell is likely to correspond to a visible surface element. The objective is to find a clear winner in each column by using a demanding test that a cell must satisfy in order to be assigned an initial opacity of one (totally opaque). The criteria in this implementation includes both a threshold on variance and a requirement that the disparity is a clear winner with respect to other disparities in the same (x,y) column. To account for resampling errors which occur near rapid color luminance changes, the threshold can be made proportional to the local variation within an n x n window (e.g., 3 x 3). One example expression for the threshold is θ = θ_0 + θ_a Var_W, where Var_W is the local variation within the window.
After picking winners as described above, the initial estimates include: c(x,y,d): the mean colors for each cell (178); d(x,y): the winning disparity for each column (180); and α(x,y,d): binary opacities in the 3D general disparity space (182).
These initial estimates can then be used in a refinement stage to improve upon the accuracy of the initial estimates.
Fig. 5 is a flow diagram generally depicting a method for refining the initial estimates of the disparities, colors and opacities. The first step 200 summarizes the approach for arriving at the initial estimates as described in detail above. Once we have computed the initial estimates, we have an initial (x,y,d) volume with cells having estimated color and opacity (e.g., binary opacity in the specific example above), [R G B α].
The initial estimates can then be projected back into each of the input cameras using the known transformation, x_k = M_k X = M_k M_0^{-1} x_0, between a camera's screen coordinate and a coordinate (location of a cell) in general disparity space. Re-projecting the initial estimates generally includes a transformation (or warp) and a compositing operation. A disparity plane can be warped into a given input camera's frame and then composited with other warped data using the estimated opacities to compute accumulated color values in the camera's frame. This re-projection step is generally reflected in block 202 of Fig. 5. As shown in step 204, the re-projected values can then be compared with the pixels in the original input images 206 to compute the difference between re-projected pixels and the original pixels at corresponding pixel locations in the input images. The error computed in this step can then be used to adjust the estimates as shown generally in step 208. In one implementation explained further below, we adjust the color and opacity estimates so that the re-projected pixels more closely match the input images. The specific criteria used to adjust the estimates can vary. Another criteria for adjusting the estimates is to make adjustments to the color and opacity values to improve continuity of the color and opacity values in screen space.
The stereo matching method can either stop with the current estimates or repeat the steps of re-projecting current estimates, computing the error, and then adjusting estimates to reduce the error. This iterative process is one form of refining the estimates and simultaneously computing disparities, opacities and colors. Within the scope of our stereo matching method, there are a number of alternative ways of refining estimates of colors, disparities and opacities. Below, we describe implementation details of the re-projection step and describe methods for refining estimates in more detail. Fig. 6 is a flow diagram illustrating a specific implementation of the re-projection stage
202 of Fig. 5. This diagram depicts only one example method for re-projecting estimates into the frames of the input cameras. Assuming the same conventions as Fig. 4, the input to the re-projection stage includes both color estimates, c(x,y,d) (220), and opacity estimates, α(x,y,d) (222), in general disparity space (the (x,y,d) volume). In this implementation, the re-projection stage views the (x,y,d) volume as a set of potentially transparent acetates stacked at different d levels. Each acetate is first warped into a given input camera's frame using the known homography:
x_k = H_k x_0 + t_k d = [H_k + t_k [0,0,d]] x_0 (7)
and then the warped layers are composited back-to-front. This combination of warp and composite operations is referred to as a warp-shear. It is important to note that other methods for transforming and compositing translucent image layers can be used to re-project the color values to the input camera frames.
The resampling operation for a given layer d into the frame of a camera k can be written as:
c_k(u,v,d) = W_h(c(x,y,d); H_k + t_k[0,0,d])
α_k(u,v,d) = W_h(α(x,y,d); H_k + t_k[0,0,d]) (8)
where c is the current color estimate [R G B] and α is the current opacity estimate at a given (x,y,d) cell, c_k and α_k are the resampled layer d in camera k's coordinate system, and W_h is the resampling operator derived from the homography in the previous paragraph. Note that the warping function is linear in the colors and opacities being resampled, i.e., the resampled colors and opacities can be expressed as a linear function of the current color and opacity estimates through sparse matrix multiplication.
In Fig. 6, the warp portion of the re-projection stage is represented as block 224. The resampled layer is represented by c_k and α_k (226, 228).
After resampling a layer, the re-projection stage composites it with another resampled layer using the standard Over operator (foreground Over background layer = foreground color + (1 - opacity of foreground pixel)(background color)) (230). Each subsequent layer is composited with the accumulated layers from previous Over operations.
In this implementation, the re-projection stage composites the resampled layers in back-to-front order, namely, from the minimum d layer (maximum depth) to the maximum d layer (minimum depth) relative to the camera, where the maximum d layer is closest to the camera. The result of the compositing operation 230 is a re-projected image layer, including opacity (232).
One way to refine the disparity estimates is to prevent visible surface pixels from voting for potential disparities in the regions they occlude. To accomplish this, we build a (x, y, d, k) visibility map, which indicates whether a given camera k can see a voxel (cell in disparity space) at location (x, y, d). One way to construct such a visibility map is to record the disparity value for each (u, v) pixel which corresponds to the topmost opaque pixel seen during the compositing step. Note that it is not possible to compute visibility in (x, y, d) disparity space since several opaque pixels may project to the same input camera pixel.
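The compositing half of the warp-shear can be sketched as follows, assuming the per-plane warp has already produced resampled color and opacity layers ordered from d_min (farthest) to d_max (nearest). Colors are treated as un-premultiplied and are multiplied by their opacities at composite time, and the topmost sufficiently opaque layer is recorded per pixel as a simple disparity record; the 0.5 opacity cutoff is an assumption for the example.

```python
import numpy as np

def composite_layers(colors, alphas):
    """Back-to-front Over compositing of resampled d layers.  colors has shape
    (D, H, W, 3) and alphas (D, H, W), with layer 0 farthest from the camera.
    Also records, per pixel, the topmost sufficiently opaque layer seen during
    compositing."""
    D, H, W, _ = colors.shape
    out_color = np.zeros((H, W, 3))
    out_alpha = np.zeros((H, W))
    top_opaque = np.full((H, W), -1)                   # -1 means no opaque layer hit
    for d in range(D):                                 # back (d=0) to front (d=D-1)
        a = alphas[d][..., None]
        out_color = colors[d] * a + (1.0 - a) * out_color     # Over operator
        out_alpha = alphas[d] + (1.0 - alphas[d]) * out_alpha
        top_opaque[alphas[d] >= 0.5] = d               # nearest opaque-enough layer so far
    return out_color, out_alpha, top_opaque

rng = np.random.default_rng(3)
cols = rng.random((8, 24, 32, 3))
alps = (rng.random((8, 24, 32)) > 0.7).astype(float)   # mostly transparent layers
img, alpha, disp_map = composite_layers(cols, alps)
print(img.shape, disp_map.max())
```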
In this example, the process of computing the visibility includes finding the topmost opaque pixel for each (u,v) column in the resampled d layers per input camera. The visibility and opacity values can be interpreted as follows:
V_k = 1, α_k = 0: free space (i.e., no objects in the working volume);
V_k = 1, α_k = 1: surface voxel visible in image k;
V_k = 0, α_k = ?: voxel not visible in image k.
Another way to define visibility is to take into account partially opaque voxels when constructing a visibility map for each input camera. A visibility map can be computed by taking each layer of resampled opacity in front to back order and computing visibility as follows:
V_k(u,v,d) = Π_{d' > d} (1 - α_k(u,v,d')) (9)
with the initial visibilities set to 1, V_k(u,v,d_max) = 1.
Using the visibilities, the compositing operation can be expressed as:
ĉ_k(u,v) = Σ_{d=d_min}^{d_max} c_k(u,v,d) V_k(u,v,d). (10)
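A sketch of equations (9) and (10): visibility of each resampled layer is the product of (1 - opacity) over all nearer layers, and the re-projected image is the visibility-weighted sum of the layer colors. The layer colors are assumed here to be premultiplied by their opacities so that the weighted sum matches the Over composite.

```python
import numpy as np

def visibility_from_opacity(alphas):
    """Visibility V_k(u, v, d) of each resampled layer per equation (9): the
    product over all nearer layers d' > d of (1 - alpha_k(u, v, d')), with
    V_k(u, v, d_max) = 1.  alphas has shape (D, H, W), d increasing toward
    the camera."""
    D = alphas.shape[0]
    vis = np.ones_like(alphas)
    for d in range(D - 2, -1, -1):                       # front-to-back sweep
        vis[d] = vis[d + 1] * (1.0 - alphas[d + 1])
    return vis

def composite_with_visibility(colors, alphas):
    """Visibility-weighted compositing, equation (10); colors (D, H, W, 3) are
    assumed premultiplied by their opacities."""
    vis = visibility_from_opacity(alphas)
    return (colors * vis[..., None]).sum(axis=0), vis

rng = np.random.default_rng(4)
cols = rng.random((8, 24, 32, 3))
alps = (rng.random((8, 24, 32)) > 0.7).astype(float)
image, vis = composite_with_visibility(cols, alps)
print(image.shape, vis.min(), vis.max())
```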
Fig. 7 is a flow diagram illustrating how the visibility map can be used to refine the initial disparity estimates. The first step is to take the resampled opacity 240 and compute the visibility (242) to construct a visibility map V_k(u,v,d) (244) for each input camera.
Next, the list of color samples in an input image can be updated using the visibility map corresponding to the camera: c_k(u,v,d) = c_k(u,v) V_k(u,v,d). (11)
Substituting c_k(u,v,d) for c_k(u,v) in the expression for mapping an input image into disparity space, we obtain a distribution of colors in (x,y,d,k) space where each color triplet has an associated visibility value. The updating of color samples c_k 246 and subsequent mapping to disparity space is represented by the Warp operator block 248 in Fig. 7. The Warp (248) populates the cells in general disparity space with color samples mapped from the input images, c(x,y,d,k) (250). These color samples have an associated visibility, V(x,y,d,k) (252), which determines the contribution of each color sample to the local color distribution at a cell.
The visibility data for each cell can be used to make more accurate estimates of colors and disparities. The visibility data determines the weight that a color sample will have in computing the color distribution at an (x,y,d) cell. For example, a color sample with a higher visibility value will contribute more than a color sample with a low visibility value.
As shown in block 254, the visibility values provide additional information when computing the statistics for the color samples at a cell. Without the visibility data, pixel samples from occluding surfaces can cause gross errors. With visibility, these pixel samples have little or no contribution because the visibility value is very low or zero for this cell. This use of visibility tends to decrease the variance, even for mixed pixels, because the visibility controls the extent to which color from occluding and occluded surfaces contribute to the color distribution at a cell. Partially occluded surface elements will receive color contribution from input pixels that are not already assigned to nearer surfaces. Since the variance is lower, the mean colors are more accurate, and it is easier to identify a disparity in each column that most likely represents a visible surface element.
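One way to fold the visibility values into the per-cell statistics is to treat them as sample weights, as in the sketch below: samples from cameras that cannot see a cell contribute little or nothing to the mean and variance. The array shapes and the small epsilon guard are assumptions of the example.

```python
import numpy as np

def weighted_color_statistics(cell_colors, cell_visibility, eps=1e-6):
    """Visibility-weighted mean and variance of the k color samples at each
    (x, y, d) cell.  cell_colors: (D, H, W, K, 3); cell_visibility: (D, H, W, K).
    Samples from cameras that cannot see a cell get little or no weight."""
    w = cell_visibility[..., None]                                 # (D, H, W, K, 1)
    w_sum = w.sum(axis=3) + eps                                    # (D, H, W, 1)
    mean = (w * cell_colors).sum(axis=3) / w_sum                   # weighted mean color
    diff = cell_colors - mean[:, :, :, None, :]
    var = (w * diff ** 2).sum(axis=(3, 4)) / w_sum[..., 0]         # weighted scatter
    return mean, var

rng = np.random.default_rng(5)
colors = rng.random((8, 24, 32, 4, 3))
vis = rng.random((8, 24, 32, 4))
mean, var = weighted_color_statistics(colors, vis)
print(mean.shape, var.shape)
```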
While the use of visibility improves the quality of the disparity map and color estimates (mean colors), it does not fully address the problem of recovering accurate color and opacity data for mixed pixels, i.e., pixels near rapid depth discontinuities or translucent pixels. To more accurately compute color and opacity for mixed pixels, we have developed an approach for refining initial estimates of color and opacity.
Above, in Fig. 5, we gave an overview of this method for refining initial estimates. In general, this method includes computing estimates, re-projecting the estimates back into the input cameras, computing the error, and then adjusting the estimates. In Fig. 6, we described a specific implementation of the re-projection step. We now proceed to describe how to compute error in the color estimates.
Fig. 8 is a flow diagram illustrating a method for computing error between the re-projected color samples and the input color samples at corresponding pixel locations. This diagram is quite similar to Fig. 6, and therefore does not require much elaboration. The task of computing error begins with the current color (260) and opacity estimates (262) in disparity space. These estimates are first warped (264) into (u,v,d) space of the respective input cameras to produce resampled d layers with color and opacity (266, 268). Next, a compositor operator (over, 270) combines the resampled layers into a resampled image layer, including color (272) and opacity (274). The resampled color values 272 are then compared with the input color values 276 at corresponding pixel locations (u,v) to compute error values at the pixel locations (u,v). This step is represented as the color difference block 278 in Fig. 8, which produces error values 280 for each (u,v) location for each of the input cameras, k.
Fig. 9 illustrates a detailed method for adjusting the estimates. This particular method adjusts the estimates based on the error values and two additional constraints: 1) continuities on colors and opacities; and 2) priors on opacities. Specifically, we adjust estimates using a cost minimization function having three parts: 1) a weighted error norm on the difference between re-projected images and the original input images:
C_1 = Σ_k Σ_{(u,v)} ρ_1(ĉ_k(u,v) - c_k(u,v)); (12)
2) a smoothness constraint on the colors and opacities:
C_2 = Σ_{(x,y,d)} Σ_{(x',y',d') ∈ N_4(x,y,d)} ρ_2(c(x',y',d') - c(x,y,d)); (13)
3) a prior distribution on the opacities:
C_3 = Σ_{(x,y,d)} φ(α(x,y,d)) (14)
In the above equations, ρ_1 and ρ_2 are either quadratic functions or robust penalty functions, and φ is a function which encourages opacities to be 0 or 1, e.g., φ(x) = x(1 - x). (15)
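Under the assumption of quadratic penalties for ρ_1 and ρ_2 and the φ(α) = α(1 - α) prior, the three cost terms and their weighted combination can be evaluated as in the sketch below. The λ weights, the restriction of the smoothness term to in-plane neighbors, and the array shapes are assumptions chosen for the example.

```python
import numpy as np

def total_cost(reprojected, inputs, colors, alphas, lams=(1.0, 0.1, 0.05)):
    """Total cost C = lam1*C1 + lam2*C2 + lam3*C3 with quadratic penalties.
    reprojected/inputs: lists of (H, W, 3) images; colors: (D, H, W, 3);
    alphas: (D, H, W).  The lambda weights are illustrative."""
    # C1: error norm between re-projected and input images.
    c1 = sum(((r - i) ** 2).sum() for r, i in zip(reprojected, inputs))
    # C2: smoothness of colors and opacities between in-plane neighboring cells.
    rgba = np.concatenate([colors, alphas[..., None]], axis=-1)
    c2 = sum(((np.roll(rgba, 1, axis=ax) - rgba)[slicer] ** 2).sum()
             for ax, slicer in ((1, np.s_[:, 1:]), (2, np.s_[:, :, 1:])))
    # C3: prior pushing opacities toward 0 or 1, phi(a) = a * (1 - a).
    c3 = (alphas * (1.0 - alphas)).sum()
    l1, l2, l3 = lams
    return l1 * c1 + l2 * c2 + l3 * c3

rng = np.random.default_rng(6)
ins = [rng.random((24, 32, 3)) for _ in range(4)]
reps = [rng.random((24, 32, 3)) for _ in range(4)]
cols, alps = rng.random((8, 24, 32, 3)), rng.random((8, 24, 32))
print(total_cost(reps, ins, cols, alps))
```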
To minimize the total cost function, C = λ_1 C_1 + λ_2 C_2 + λ_3 C_3, (16) we use a preconditioned gradient descent algorithm in one implementation of the refining stage.
Other conventional techniques for minimizing a cost function can be used as well, and this particular technique is just one example.
Referring to Fig. 9, we now describe the gradient descent approach. Fig. 9 illustrates an implementation of adjusting color and opacity elements in the refining stage using a gradient descent approach. The refining stage computes the first element of the cost function from the error values, e_k(u,v) (290), the accumulated color and opacity in each layer d, a_k(u,v,d) (292), and the visibility in each d layer, V_k(u,v,d) (294). Fig. 8 illustrates how to compute these error values in more detail. As illustrated in block 296, the refining stage computes the accumulated color and opacity values 292 for each d layer by multiplying the color and opacity values for each layer by the corresponding visibility for that layer. Fig. 7 and the accompanying description above provide more detail on how to compute the visibility for each layer.
Next as shown in block 298, the refining stage computes the gradient and Hessian using the error values, the accumulated colors and opacities, and the visibilities for each resampled layer, k. More specifically, the refining stage first computes the gradient and the diagonal of the Hessian for the cost C_1 with respect to the resampled colors in (u,v,d) space.
The derivative of C_1 can be computed by expressing the re-projected colors in terms of the resampled colors and opacities in (u,v,d) space:
ĉ_k(u,v) = Σ_{d'} c_k(u,v,d') V_k(u,v,d'), a_k(u,v,d) = Σ_{d'=d_min}^{d} c_k(u,v,d') V_k(u,v,d'), (17)
where a_k(u,v,d) is the color and opacity accumulated from the layers at and behind layer d, so that
∂ĉ_k(u,v)/∂c_k(u,v,d) = V_k(u,v,d), ∂ĉ_k(u,v)/∂α_k(u,v,d) = [0 0 0 V_k(u,v,d)]^T - a_k(u,v,d-1).
In the computation of C_1, the error values can be weighted by the position of the camera k relative to the virtual camera. However, assuming that the weights are 1 and ρ_1(e_k) = ||e_k||^2, the gradient and Hessian of C_1 in (u,v,d) space are:
g_k(u,v,d) = V_k(u,v,d)(e_k(u,v) - [0 0 0 e_k(u,v)·a_k(u,v,d-1)]^T) (18)
h_k(u,v,d) = V_k(u,v,d)[1 1 1 1 - ||a_k(u,v,d-1)||^2]^T
The gradient and Hessian of C_1 in (u,v,d) space are illustrated as data representations 300 and 302 in Fig. 9.
Once the refining stage computes the derivatives with respect to the warped predicted (resampled estimates) color values, it then transforms these values into disparity space. This can be computed by using the transpose of the linear mapping induced by the backward warp used in step 224 of Fig. 6. For certain cases the result is the same as warping the gradient and Hessian using the forward warp W_f. For many other cases (moderate scaling or shear), the forward warp is still a good approximation. As such, we can represent the Warp operator 304 using the following expressions:
g_1(x,y,d) = W_f(g_k(u,v,d); H_k + t_k[0,0,d]) (19)
h_1(x,y,d) = W_f(h_k(u,v,d); H_k + t_k[0,0,d])
The Warp operator transforms the gradient and Hessian of C_1 in (u,v,d) space to general disparity space. The gradient and Hessian in general disparity space, g_1, h_1, are illustrated by data representations 306, 308 in Fig. 9.
We now refer to the top of Fig. 9, illustrating the cost function for the spatial difference. As shown in block 310, the refining stage computes the spatial difference from the current estimates of color and opacity 312, 314 as follows:
g_2(x,y,d) = Σ_{(x',y',d') ∈ N_4(x,y,d)} ρ_2(c(x',y',d') - c(x,y,d)) (20)
where ρ_2 is applied to each color component separately. The Hessian is a constant for a quadratic penalty function. For non-quadratic functions, the secant approximation ρ(r)/r can be used. The gradient and Hessian for the cost function C_2 are shown as data representations 316 and 318 in Fig. 9. Finally, the derivative of the opacity penalty function can be computed for φ = x(1 - x), as:
g_3(x,y,d) = [0 0 0 (1 - 2α(x,y,d))]^T. (21)
To ensure that the Hessian is positive, we set h_3(x,y,d) = [0 0 0 1]. The computation of the opacity penalty function, shown in Fig. 9 as block 320, gives the gradient and Hessian g_3, h_3, for the cost function C_3 (shown as data representations 322, 324 in Fig. 9).
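For a quadratic penalty ρ_2(r) = r^2, the spatial-difference gradient of equation (20) reduces to a sum of differences with neighboring cells. The sketch below evaluates it over the four in-plane neighbors only, which is an assumption of the example rather than a statement of the method's exact neighborhood.

```python
import numpy as np

def smoothness_gradient(rgba):
    """Gradient of a quadratic smoothness term at each cell: the sum over the
    four in-plane neighbors of 2 * (c(neighbor) - c(x, y, d)).  rgba has shape
    (D, H, W, 4) holding [R G B alpha] estimates."""
    g = np.zeros_like(rgba)
    pad = np.pad(rgba, ((0, 0), (1, 1), (1, 1), (0, 0)), mode='edge')
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        neighbor = pad[:, 1 + dy:1 + dy + rgba.shape[1], 1 + dx:1 + dx + rgba.shape[2]]
        g += 2.0 * (neighbor - rgba)
    return g

rng = np.random.default_rng(7)
print(smoothness_gradient(rng.random((8, 24, 32, 4))).shape)
```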
The next step is to combine the gradients for each of the cost functions as shown in step 326. The expressions for the combined gradients 328, 330 are as follows:
g(x,y,d) = λ_1 Σ_{k=1}^{K} g_1(x,y,d,k) + λ_2 g_2(x,y,d) + λ_3 g_3(x,y,d), (22)
h(x,y,d) = λ_1 Σ_{k=1}^{K} h_1(x,y,d,k) + λ_2 h_2(x,y,d) + λ_3 h_3(x,y,d)
A gradient step can then be performed as follows:
c(x,y,d) ← c(x,y,d) + ε_1 g(x,y,d) / (h(x,y,d) + ε_2). (23)
This step adjusts the estimated color and opacity values to produce adjusted color and opacity values. In a current implementation, we have set ε_1 = ε_2 = 0.5. In Fig. 9, the gradient step block 332 computes an adjustment value for the colors and opacities, Δc(x,y,d) = ε_1 g(x,y,d) / (h(x,y,d) + ε_2) (334). The adjustment values are then combined with the previous estimates of color and opacity to compute the adjusted color and opacity estimates.
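The preconditioned gradient step of equation (23) is straightforward to apply once the combined gradient and Hessian volumes are available, as in the sketch below. The clipping of opacities back to [0, 1] is an added safeguard, not part of the quoted update.

```python
import numpy as np

def gradient_step(rgba, grad, hess, eps1=0.5, eps2=0.5):
    """Preconditioned gradient step: each RGBA estimate is moved by
    eps1 * g / (h + eps2), with opacities clipped back to [0, 1]."""
    delta = eps1 * grad / (hess + eps2)            # adjustment Delta c(x, y, d)
    updated = rgba + delta
    updated[..., 3] = np.clip(updated[..., 3], 0.0, 1.0)
    return updated

rng = np.random.default_rng(8)
est = rng.random((8, 24, 32, 4))                   # [R G B alpha] per cell
g = rng.standard_normal(est.shape) * 0.01
h = np.abs(rng.standard_normal(est.shape)) + 0.1
print(gradient_step(est, g, h).shape)
```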
The adjusted color and opacities can then be used as input to the re-projection stage, which computes estimated images from the adjusted color and opacity values. The steps of: 1) adjusting the color and opacity estimates,
2) re-projecting the color, disparity and opacity estimates from disparity space to the input cameras, and
3) computing the error between the re-projected images and input images can be repeated a fixed number of times or until some defined constraint is satisfied such as reducing the error below a threshold or achieving some predefined level of continuity in the colors and/or opacities.
While we have described our stereo matching methods in the context of several specific implementations and optional features, it is important to note that our invention is not limited to these implementations. For example, we have illustrated one example of general disparity space, but the position and orientation of the virtual camera and disparity planes can vary depending on the application and the working volume of interest. We have described some optional techniques for disambiguating matches by aggregating evidence and some specific techniques for computing statistics for the local color distributions. However, it is not necessary to use these specific techniques to implement the invention. Other methods for aggregating evidence and performing statistical analyses can be used as well.
We explained a specific method for using visibility to improve color and disparity estimates. While this does improve the accuracy of estimating the depths and colors of visible surface elements, it is not required in all implementations of the invention. For example, it is possible to skip directly to refining initial color and opacity elements by computing the error values for the estimates and then adjusting the estimates based at least in part on the error between the estimated images and the input images.
We have described alternative methods for simultaneously computing color, disparity and opacity estimates from K input images, but we do not intend the specific implementations described above to be an exclusive list. The initial estimates of opacity do not have to be binary opacities, but instead, can be selected in a range from fully opaque to fully transparent based on, for example, the statistics (e.g., variances) or confidence values produced by aggregating evidence. Even assuming that the initial estimates are binary, these estimates can be refined using a variety of techniques such as passing the binary estimates through a low pass filter or using an iterative approach to reduce errors between re-projected images and the input images.
In view of the many possible embodiments to which the principles of our invention may be applied, it should be recognized that the illustrated embodiments are only specific examples illustrating how to implement the invention and should not be taken as a limitation on the scope of the invention. Rather, the scope of the invention is defined by the following claims. We therefore claim as our invention all that comes within the scope and spirit of these claims.

Claims

We claim:
1. A method for performing stereo matching comprising: selecting a general disparity space representing a projective sampling of 3D space, where the projective sampling includes an array of cells in the 3D space; transforming k input images into the general disparity space, where k is two or more and the input images are comprised of pixels each having color values; estimating colors for the cells in the general disparity space based on the color values of the pixels from the k input images that map to the cells in the general disparity space; computing probabilities that the cells represent visible surface elements of an object depicted in the input images; and estimating opacities for the cells based on the probabilities.
2. The method of claim 1 including aggregating evidence from neighboring cells in the disparity space to compute confidence values for the cells indicating likelihood that the cells represent visible surface elements of the object depicted in the input images, and using the confidence values to estimate opacities for the cells.
3. The method of claim 1 including: using the estimated opacities to compute a visibility map indicating whether the cells are visible from an input camera corresponding to an input image; and using the visibility map to refine the color estimates.
4. The method of claim 1 wherein the step of estimating opacities includes: assigning totally opaque opacity values to cells that have a color variance below a threshold; and assigning totally transparent opacity values to cells that have a color variance above the threshold.
5. The method of claim 4 wherein the estimated opacities comprise binary opacities for (x,y) columns in the disparity space; and further including: passing the binary opacities through a low pass filter to refine the opacity estimates.
6. The method of claim 4 wherein the estimated opacities comprise binary opacities for (x,y) columns in the disparity space; and further including: refining the binary opacities by changing at least some of the binary opacities to non- binary opacities based on an intensity gradient of the estimated color values.
7. The method of claim 1 including: re-projecting the color estimates from general disparity space to input cameras for the input images to compute re-projected images; and computing error values between the re-projected images and the input images by comparing the color values in the re-projected images with color values in the input images at corresponding pixel locations.
8. The method of claim 7 wherein the re-projecting step includes: transforming disparity planes in general disparity space to the input cameras, where each of the disparity planes comprises an array of estimated color and opacity values; and compositing the transformed disparity planes into the re-projected images for each of the input cameras using the estimated opacity values.
9. The method of claim 7 including adjusting the color estimates based at least in part on the error values to compute current color estimates.
10. The method of claim 9 including adjusting the color and opacity estimates based at least in part on the error values to compute current color and opacity estimates.
11. The method of claim 9 including adjusting the color and opacity estimates based at least in part on the error values and a smoothness constraint on the color estimates.
12. The method of claim 9 including re-projecting the current color estimates from general disparity space to each of the input cameras to compute a new set of re-projected images, computing new error values between the new set of re-projected images and the input images, and adjusting the current color estimates based on the new error values.
13. The method of claim 10 including re-projecting the current color and opacity estimates from general disparity space to each of the input cameras to compute a new set of re-projected images, computing new error values between the new set of re-projected images and the input images, and adjusting the current color and opacity estimates based on the new error values.
14. The method of claim 9 including adjusting the color estimates using a cost minimization function based at least in part on minimizing the error values.
15. The method of claim 10 including adjusting the color and opacity estimates using a cost minimization function based at least in part on minimizing the error values.
16. The method of claim 13 wherein the step of adjusting the color estimates includes using a gradient descent method.
17. The method of claim 16 wherein the step of adjusting the color and opacity estimates includes using a gradient descent method.
18. The method of claim 1 wherein the cells are located at (x,y,d) coordinates in the 3D space, and wherein (x,y) represents a rectangular coordinate on a disparity plane d, and d represents disparity.
19. The method of claim 1 including selecting a subset of cells that are likely to lie on a visible surface based on the probabilities.
20. The method of claim 19 wherein the probabilities are derived from the mean and variance of color values at each cell in the general disparity space.
21. A computer readable medium having computer-executable instructions for performing the steps recited in claim 1.
22. The method of claim 1 wherein the color values are gray scale values.
23. A method for performing stereo matching comprising: selecting a general disparity space representing a projective sampling of 3D space, where the projective sampling includes an array of cells in the 3D space; transforming k input images from screen coordinates of corresponding input cameras to the general disparity space, where k is two or more and the input images are comprised of pixels each having color values; from the color values that map from the input images to the cells in the general disparity space, computing probabilities that the cells represent visible surface elements of an object depicted in the input images; computing initial estimates of color at the cells based on the probabilities; computing initial estimates of disparities of the visible surface elements based on the probabilities; computing initial estimates of opacities at the cells based on the probabilities; using the opacities to compute visibility values for the cells indicating whether the cells are visible with respect to the input cameras; and revising the initial color and disparity estimates based on the visibility values at the cells.
24. The method of claim 23 wherein the color values are gray scale values.
25. The method of claim 23 wherein the color values are color triplets.
26. The method of claim 23 wherein the probabilities are derived from mean and variance of k color values at the cells in general disparity space.
27. The method of claim 26 wherein the step of estimating the initial disparities includes comparing the variance of cells in an (x,y) column in the general disparity space to assess likelihood that one of the cells in the (x,y) column lies on a visible surface element.
28. The method of claim 23 including: projecting the initial opacity estimates from the general disparity space to a (u,v,d) space of each of the input cameras; computing visibilities at the (u,v,d) coordinates of each of the input cameras; and transforming colors from the input images and the visibilities to the general disparity space such that updated colors in general disparity space have an associated visibility value; and using the associated visibility values to weight statistics on the updated colors at the cells in the general disparity space.
29. A method for performing stereo matching comprising: selecting a general disparity space representing a projective sampling of 3D space, where the projective sampling includes an array of cells at (x,y,d) coordinates in the 3D space, and d represents disparity planes comprised of (x,y) coordinates; transforming k input images from screen coordinates of corresponding input cameras to the general disparity space, where k is two or more and the input images are comprised of pixels each having color values; from the color values that map from the input images to the cells in the general disparity space, computing mean and variance of the color values at each cell; computing initial estimates of color at the cells based on the mean at each cell; computing initial estimates of disparities of visible surface elements based at least in part on the variance; computing initial estimates of opacities at the cells based at least in part on the variance at each cell; using the opacities to compute visibility values for the cells indicating whether the cells are visible with respect to the input cameras; revising the initial color and disparity estimates based on the visibility values; and refining estimates of color and opacity comprising: a) transforming current estimates of color and opacity from general disparity space to the (u,v,d) coordinate space of each of the input cameras; b) compositing the transformed, current estimates of color and opacity into re-projected images for each input camera; c) comparing the re-projected images with the input images to compute error values for each of the input cameras; d) adjusting the current estimates of color and opacity based on the error values; and e) repeating steps a-d to minimize the error values.
PCT/US1998/007297 1997-04-15 1998-04-10 Method for performing stereo matching to recover depths, colors and opacities of surface elements WO1998047097A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US08/843,326 US5917937A (en) 1997-04-15 1997-04-15 Method for performing stereo matching to recover depths, colors and opacities of surface elements
US08/843,326 1997-04-15

Publications (1)

Publication Number Publication Date
WO1998047097A1 true WO1998047097A1 (en) 1998-10-22

Family

ID=25289653

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1998/007297 WO1998047097A1 (en) 1997-04-15 1998-04-10 Method for performing stereo matching to recover depths, colors and opacities of surface elements

Country Status (2)

Country Link
US (1) US5917937A (en)
WO (1) WO1998047097A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6275335B1 (en) 1999-07-16 2001-08-14 Sl3D, Inc. Single-lens 3D method, microscope, and video adapter
CN102096919A (en) * 2010-12-31 2011-06-15 北京航空航天大学 Real-time three-dimensional matching method based on two-way weighted polymerization
CN104272731A (en) * 2012-05-10 2015-01-07 三星电子株式会社 Apparatus and method for processing 3d information
WO2016183464A1 (en) * 2015-05-13 2016-11-17 Google Inc. Deepstereo: learning to predict new views from real world imagery
US9756312B2 (en) 2014-05-01 2017-09-05 Ecole polytechnique fédérale de Lausanne (EPFL) Hardware-oriented dynamically adaptive disparity estimation algorithm and its real-time hardware

Families Citing this family (143)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6445814B2 (en) * 1996-07-01 2002-09-03 Canon Kabushiki Kaisha Three-dimensional information processing apparatus and method
US7098435B2 (en) * 1996-10-25 2006-08-29 Frederick E. Mueller Method and apparatus for scanning three-dimensional objects
US6858826B2 (en) * 1996-10-25 2005-02-22 Waveworx Inc. Method and apparatus for scanning three-dimensional objects
US5864640A (en) * 1996-10-25 1999-01-26 Wavework, Inc. Method and apparatus for optically scanning three dimensional objects using color information in trackable patches
US6693666B1 (en) * 1996-12-11 2004-02-17 Interval Research Corporation Moving imager camera for track and range capture
US6786420B1 (en) 1997-07-15 2004-09-07 Silverbrook Research Pty. Ltd. Data distribution mechanism in the form of ink dots on cards
US6046763A (en) * 1997-04-11 2000-04-04 Nec Research Institute, Inc. Maximum flow method for stereo correspondence
US6215898B1 (en) * 1997-04-15 2001-04-10 Interval Research Corporation Data processing system and method
EP1455303B1 (en) * 1997-05-22 2006-03-15 Kabushiki Kaisha TOPCON Apparatus for determining correspondences in a pair of images
US6618117B2 (en) 1997-07-12 2003-09-09 Silverbrook Research Pty Ltd Image sensing apparatus including a microcontroller
US6803989B2 (en) 1997-07-15 2004-10-12 Silverbrook Research Pty Ltd Image printing apparatus including a microcontroller
US7724282B2 (en) 1997-07-15 2010-05-25 Silverbrook Research Pty Ltd Method of processing digital image to correct for flash effects
AUPO793897A0 (en) * 1997-07-15 1997-08-07 Silverbrook Research Pty Ltd Image processing method and apparatus (ART25)
US6624848B1 (en) 1997-07-15 2003-09-23 Silverbrook Research Pty Ltd Cascading image modification using multiple digital cameras incorporating image processing
US7110024B1 (en) 1997-07-15 2006-09-19 Silverbrook Research Pty Ltd Digital camera system having motion deblurring means
US7551201B2 (en) 1997-07-15 2009-06-23 Silverbrook Research Pty Ltd Image capture and processing device for a print on demand digital camera system
US6985207B2 (en) 1997-07-15 2006-01-10 Silverbrook Research Pty Ltd Photographic prints having magnetically recordable media
US6879341B1 (en) 1997-07-15 2005-04-12 Silverbrook Research Pty Ltd Digital camera system containing a VLIW vector processor
AUPO850597A0 (en) 1997-08-11 1997-09-04 Silverbrook Research Pty Ltd Image processing method and apparatus (art01a)
US6690419B1 (en) 1997-07-15 2004-02-10 Silverbrook Research Pty Ltd Utilising eye detection methods for image processing in a digital image camera
AUPO802797A0 (en) 1997-07-15 1997-08-07 Silverbrook Research Pty Ltd Image processing method and apparatus (ART54)
JP2991163B2 (en) * 1997-07-23 1999-12-20 日本電気株式会社 Camera calibration device
KR20000068660A (en) * 1997-07-29 2000-11-25 요트.게.아. 롤페즈 Method of reconstruction of tridimensional scenes and corresponding reconstruction device and decoding system
US6301370B1 (en) * 1998-04-13 2001-10-09 Eyematic Interfaces, Inc. Face recognition from video images
US6272231B1 (en) 1998-11-06 2001-08-07 Eyematic Interfaces, Inc. Wavelet-based facial motion capture for avatar animation
DE69910757T2 (en) 1998-04-13 2004-06-17 Eyematic Interfaces, Inc., Santa Monica WAVELET-BASED FACIAL MOTION DETECTION FOR AVATAR ANIMATION
US6125197A (en) * 1998-06-30 2000-09-26 Intel Corporation Method and apparatus for the processing of stereoscopic electronic images into three-dimensional computer models of real-life objects
AUPP702098A0 (en) 1998-11-09 1998-12-03 Silverbrook Research Pty Ltd Image creation method and apparatus (ART73)
US7050655B2 (en) * 1998-11-06 2006-05-23 Nevengineering, Inc. Method for generating an animated three-dimensional video head
US6714661B2 (en) 1998-11-06 2004-03-30 Nevengineering, Inc. Method and system for customizing facial feature tracking using precise landmark finding on a neutral face image
US7050624B2 (en) * 1998-12-04 2006-05-23 Nevengineering, Inc. System and method for feature location and tracking in multiple dimensions including depth
WO2000034919A1 (en) 1998-12-04 2000-06-15 Interval Research Corporation Background estimation and segmentation based on range and color
US6496597B1 (en) * 1999-03-03 2002-12-17 Autodesk Canada Inc. Generating image data
DE10080012B4 (en) * 1999-03-19 2005-04-14 Matsushita Electric Works, Ltd., Kadoma Three-dimensional method of detecting objects and system for picking up an object from a container using the method
AUPQ056099A0 (en) 1999-05-25 1999-06-17 Silverbrook Research Pty Ltd A method and apparatus (pprint01)
US6606404B1 (en) * 1999-06-19 2003-08-12 Microsoft Corporation System and method for computing rectifying homographies for stereo vision processing of three dimensional objects
US6608923B1 (en) * 1999-06-19 2003-08-19 Microsoft Corporation System and method for rectifying images of three dimensional objects
US6556704B1 (en) * 1999-08-25 2003-04-29 Eastman Kodak Company Method for forming a depth image from digital image data
US6297844B1 (en) 1999-11-24 2001-10-02 Cognex Corporation Video safety curtain
US6678394B1 (en) 1999-11-30 2004-01-13 Cognex Technology And Investment Corporation Obstacle detection system
US6990228B1 (en) * 1999-12-17 2006-01-24 Canon Kabushiki Kaisha Image processing apparatus
US6674877B1 (en) * 2000-02-03 2004-01-06 Microsoft Corporation System and method for visually tracking occluded objects in real time
AU2001233865A1 (en) * 2000-02-16 2001-08-27 P C Multimedia Limited 3d image processing system and method
US6701005B1 (en) 2000-04-29 2004-03-02 Cognex Corporation Method and apparatus for three-dimensional object segmentation
US6469734B1 (en) 2000-04-29 2002-10-22 Cognex Corporation Video safety detector with shadow elimination
US7167575B1 (en) 2000-04-29 2007-01-23 Cognex Corporation Video safety detector with projected pattern
US7224357B2 (en) * 2000-05-03 2007-05-29 University Of Southern California Three-dimensional modeling based on photographic images
US6795068B1 (en) * 2000-07-21 2004-09-21 Sony Computer Entertainment Inc. Prop input device and method for mapping an object from a two-dimensional camera image to a three-dimensional space for controlling action in a game program
US7227526B2 (en) * 2000-07-24 2007-06-05 Gesturetek, Inc. Video-based image control system
US7071914B1 (en) 2000-09-01 2006-07-04 Sony Computer Entertainment Inc. User input device and method for interaction with graphic images
US7085409B2 (en) * 2000-10-18 2006-08-01 Sarnoff Corporation Method and apparatus for synthesizing new video and/or still imagery from a collection of real video and/or still imagery
CA2326087A1 (en) * 2000-11-16 2002-05-16 Craig Summers Inward-looking imaging system
US6819779B1 (en) 2000-11-22 2004-11-16 Cognex Corporation Lane detection system and apparatus
US6664961B2 (en) 2000-12-20 2003-12-16 Rutgers, The State University Of Nj Resample and composite engine for real-time volume rendering
US7027083B2 (en) * 2001-02-12 2006-04-11 Carnegie Mellon University System and method for servoing on a moving fixation point within a dynamic scene
US6751345B2 (en) 2001-02-12 2004-06-15 Koninklijke Philips Electronics N.V. Method and apparatus for improving object boundaries extracted from stereoscopic images
WO2002065763A2 (en) * 2001-02-12 2002-08-22 Carnegie Mellon University System and method for manipulating the point of interest in a sequence of images
GB2372659A (en) * 2001-02-23 2002-08-28 Sharp Kk A method of rectifying a stereoscopic image
JP2002250607A (en) * 2001-02-27 2002-09-06 Optex Co Ltd Object detection sensor
US6917703B1 (en) 2001-02-28 2005-07-12 Nevengineering, Inc. Method and apparatus for image analysis of a gabor-wavelet transformed image using a neural network
US7392287B2 (en) 2001-03-27 2008-06-24 Hemisphere Ii Investment Lp Method and apparatus for sharing information using a handheld device
US7444013B2 (en) * 2001-08-10 2008-10-28 Stmicroelectronics, Inc. Method and apparatus for recovering depth using multi-plane stereo and spatial propagation
US6853379B2 (en) * 2001-08-13 2005-02-08 Vidiator Enterprises Inc. Method for mapping facial animation values to head mesh positions
US6834115B2 (en) 2001-08-13 2004-12-21 Nevengineering, Inc. Method for optimizing off-line facial feature tracking
US6876364B2 (en) 2001-08-13 2005-04-05 Vidiator Enterprises Inc. Method for mapping facial animation values to head mesh positions
GB2381429B (en) * 2001-09-28 2005-07-27 Canon Europa Nv 3D computer model processing apparatus
US20030076413A1 (en) * 2001-10-23 2003-04-24 Takeo Kanade System and method for obtaining video of multiple moving fixation points within a dynamic scene
US7715591B2 (en) * 2002-04-24 2010-05-11 Hrl Laboratories, Llc High-performance sensor fusion architecture
AU2002952382A0 (en) * 2002-10-30 2002-11-14 Canon Kabushiki Kaisha Method of Background Colour Removal for Porter and Duff Compositing
US7103212B2 (en) 2002-11-22 2006-09-05 Strider Labs, Inc. Acquisition of three-dimensional images by an active stereo technique using locally unique patterns
AU2003280180A1 (en) * 2002-12-03 2004-06-23 Koninklijke Philips Electronics N.V. Method and apparatus to display 3d rendered ultrasound data on an ultrasound cart in stereovision
US8855405B2 (en) * 2003-04-30 2014-10-07 Deere & Company System and method for detecting and analyzing features in an agricultural field for vehicle guidance
US8712144B2 (en) * 2003-04-30 2014-04-29 Deere & Company System and method for detecting crop rows in an agricultural field
US8737720B2 (en) * 2003-04-30 2014-05-27 Deere & Company System and method for detecting and analyzing features in an agricultural field
US20040223640A1 (en) * 2003-05-09 2004-11-11 Bovyrin Alexander V. Stereo matching using segmentation of image columns
FR2857131A1 (en) * 2003-07-01 2005-01-07 Thomson Licensing Sa METHOD FOR AUTOMATICALLY REPLACING A GEOMETRIC MODEL OF A SCENE ON A PICTURE OF THE SCENE, DEVICE FOR IMPLEMENTING THE SAME, AND PROGRAMMING MEDIUM
US8133115B2 (en) 2003-10-22 2012-03-13 Sony Computer Entertainment America Llc System and method for recording and displaying a graphical path in a video game
US8326084B1 (en) 2003-11-05 2012-12-04 Cognex Technology And Investment Corporation System and method of auto-exposure control for image acquisition hardware using three dimensional information
JP4162095B2 (en) * 2003-12-11 2008-10-08 ストライダー ラブス,インコーポレイテッド A technique for predicting the surface of a shielded part by calculating symmetry.
US7292735B2 (en) * 2004-04-16 2007-11-06 Microsoft Corporation Virtual image artifact detection
US7257272B2 (en) * 2004-04-16 2007-08-14 Microsoft Corporation Virtual image generation
US7015926B2 (en) * 2004-06-28 2006-03-21 Microsoft Corporation System and process for generating a two-layer, 3D representation of a scene
KR100601958B1 (en) 2004-07-15 2006-07-14 삼성전자주식회사 Method for estimting disparity for 3D object recognition
GB2417628A (en) * 2004-08-26 2006-03-01 Sharp Kk Creating a new image from two images of a scene
GB2418314A (en) * 2004-09-16 2006-03-22 Sharp Kk A system for combining multiple disparity maps
CA2511040A1 (en) * 2004-09-23 2006-03-23 The Governors Of The University Of Alberta Method and system for real time image rendering
US20060071933A1 (en) 2004-10-06 2006-04-06 Sony Computer Entertainment Inc. Application binary interface for multi-pass shaders
US7532214B2 (en) * 2005-05-25 2009-05-12 Spectra Ab Automated medical image visualization using volume rendering with local histograms
US7636126B2 (en) 2005-06-22 2009-12-22 Sony Computer Entertainment Inc. Delay matching in audio/video systems
US8111904B2 (en) 2005-10-07 2012-02-07 Cognex Technology And Investment Corp. Methods and apparatus for practical 3D vision system
US7880746B2 (en) 2006-05-04 2011-02-01 Sony Computer Entertainment Inc. Bandwidth management through lighting control of a user environment via a display device
US7965859B2 (en) 2006-05-04 2011-06-21 Sony Computer Entertainment Inc. Lighting control of a user environment via a display device
US8041129B2 (en) 2006-05-16 2011-10-18 Sectra Ab Image data set compression based on viewing parameters for storing medical image data from multidimensional data sets, related systems, methods and computer products
KR100850931B1 (en) * 2006-06-29 2008-08-07 성균관대학교산학협력단 Rectification System & Method of Stereo Image in Real Time
US8013870B2 (en) 2006-09-25 2011-09-06 Adobe Systems Incorporated Image masks generated from local color models
US7830381B2 (en) * 2006-12-21 2010-11-09 Sectra Ab Systems for visualizing images using explicit quality prioritization of a feature(s) in multidimensional image data sets, related methods and computer products
CN100550051C (en) * 2007-04-29 2009-10-14 威盛电子股份有限公司 image deformation method
US8126260B2 (en) * 2007-05-29 2012-02-28 Cognex Corporation System and method for locating a three-dimensional object using machine vision
US8933876B2 (en) 2010-12-13 2015-01-13 Apple Inc. Three dimensional user interface session control
WO2009101798A1 (en) * 2008-02-12 2009-08-20 Panasonic Corporation Compound eye imaging device, distance measurement device, parallax calculation method and distance measurement method
US9659382B2 (en) * 2008-05-28 2017-05-23 Thomson Licensing System and method for depth extraction of images with forward and backward depth prediction
JP5317169B2 (en) * 2008-06-13 2013-10-16 洋 川崎 Image processing apparatus, image processing method, and program
KR20100084718A (en) * 2009-01-19 2010-07-28 삼성전자주식회사 Mobile terminal for generating 3 dimensional image
US20100235786A1 (en) * 2009-03-13 2010-09-16 Primesense Ltd. Enhanced 3d interfacing for remote devices
WO2010108024A1 (en) * 2009-03-20 2010-09-23 Digimarc Corporation Improvements to 3d data representation, conveyance, and use
US10097843B2 (en) 2009-11-13 2018-10-09 Koninklijke Philips Electronics N.V. Efficient coding of depth transitions in 3D (video)
US20110164032A1 (en) * 2010-01-07 2011-07-07 Prime Sense Ltd. Three-Dimensional User Interface
US10786736B2 (en) 2010-05-11 2020-09-29 Sony Interactive Entertainment LLC Placement of user information in a game space
CN102959616B (en) 2010-07-20 2015-06-10 苹果公司 Interactive reality augmentation for natural interaction
US9201501B2 (en) 2010-07-20 2015-12-01 Apple Inc. Adaptive projector
US8959013B2 (en) 2010-09-27 2015-02-17 Apple Inc. Virtual keyboard for a non-tactile three dimensional user interface
US8872762B2 (en) 2010-12-08 2014-10-28 Primesense Ltd. Three dimensional user interface cursor control
CN103347437B (en) 2011-02-09 2016-06-08 苹果公司 Gaze detection in 3D mapping environment
TWI476403B (en) * 2011-04-22 2015-03-11 Pai Chi Li Automated ultrasonic scanning system and scanning method thereof
US9459758B2 (en) 2011-07-05 2016-10-04 Apple Inc. Gesture-based interface with enhanced features
US8881051B2 (en) 2011-07-05 2014-11-04 Primesense Ltd Zoom-based gesture user interface
US9377865B2 (en) 2011-07-05 2016-06-28 Apple Inc. Zoom-based gesture user interface
US9342817B2 (en) 2011-07-07 2016-05-17 Sony Interactive Entertainment LLC Auto-creating groups for sharing photos
JP5762211B2 (en) * 2011-08-11 2015-08-12 キヤノン株式会社 Image processing apparatus, image processing method, and program
US9030498B2 (en) 2011-08-15 2015-05-12 Apple Inc. Combining explicit select gestures and timeclick in a non-tactile three dimensional user interface
US9122311B2 (en) 2011-08-24 2015-09-01 Apple Inc. Visual feedback for tactile and non-tactile user interfaces
US9218063B2 (en) 2011-08-24 2015-12-22 Apple Inc. Sessionless pointing user interface
US9571810B2 (en) 2011-12-23 2017-02-14 Mediatek Inc. Method and apparatus of determining perspective model for depth map generation by utilizing region-based analysis and/or temporal smoothing
US20130162763A1 (en) * 2011-12-23 2013-06-27 Chao-Chung Cheng Method and apparatus for adjusting depth-related information map according to quality measurement result of the depth-related information map
KR101316196B1 (en) * 2011-12-23 2013-10-08 연세대학교 산학협력단 Apparatus and method for enhancing stereoscopic image, recording medium thereof
US9229534B2 (en) 2012-02-28 2016-01-05 Apple Inc. Asymmetric mapping for tactile and non-tactile user interfaces
CN104246682B (en) 2012-03-26 2017-08-25 苹果公司 Enhanced virtual touchpad and touch-screen
KR20130120730A (en) * 2012-04-26 2013-11-05 한국전자통신연구원 Method for processing disparity space image
WO2013170040A1 (en) 2012-05-11 2013-11-14 Intel Corporation Systems and methods for row causal scan-order optimization stereo matching
CN103679127B (en) * 2012-09-24 2017-08-04 株式会社理光 The method and apparatus for detecting the wheeled region of pavement of road
US9519968B2 (en) * 2012-12-13 2016-12-13 Hewlett-Packard Development Company, L.P. Calibrating visual sensors using homography operators
US9436358B2 (en) 2013-03-07 2016-09-06 Cyberlink Corp. Systems and methods for editing three-dimensional video
JP2014203017A (en) 2013-04-09 2014-10-27 ソニー株式会社 Image processing device, image processing method, display, and electronic apparatus
KR102135770B1 (en) * 2014-02-10 2020-07-20 한국전자통신연구원 Method and apparatus for reconstructing 3d face with stereo camera
EP3111299A4 (en) 2014-02-28 2017-11-22 Hewlett-Packard Development Company, L.P. Calibration of sensors and projector
US9390508B2 (en) 2014-03-03 2016-07-12 Nokia Technologies Oy Method, apparatus and computer program product for disparity map estimation of stereo images
US10237531B2 (en) 2016-06-22 2019-03-19 Microsoft Technology Licensing, Llc Discontinuity-aware reprojection
US10129523B2 (en) 2016-06-22 2018-11-13 Microsoft Technology Licensing, Llc Depth-aware reprojection
US10026014B2 (en) * 2016-10-26 2018-07-17 Nxp Usa, Inc. Method and apparatus for data set classification based on generator features
JP7233150B2 (en) * 2018-04-04 2023-03-06 日本放送協会 Depth estimation device and its program
US20190310373A1 (en) * 2018-04-10 2019-10-10 Rosemount Aerospace Inc. Object ranging by coordination of light projection with active pixel rows of multiple cameras
US20200137380A1 (en) * 2018-10-31 2020-04-30 Intel Corporation Multi-plane display image synthesis mechanism
US11816855B2 (en) * 2020-02-11 2023-11-14 Samsung Electronics Co., Ltd. Array-based depth estimation

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5179441A (en) * 1991-12-18 1993-01-12 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration Near real-time stereo vision system
US5381518A (en) * 1986-04-14 1995-01-10 Pixar Method and apparatus for imaging volume data using voxel values

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4745562A (en) * 1985-08-16 1988-05-17 Schlumberger, Limited Signal processing disparity resolution
US5309356A (en) * 1987-09-29 1994-05-03 Kabushiki Kaisha Toshiba Three-dimensional reprojected image forming apparatus
US5016173A (en) * 1989-04-13 1991-05-14 Vanguard Imaging Ltd. Apparatus and method for monitoring visually accessible surfaces of the body
US5555352A (en) * 1991-04-23 1996-09-10 International Business Machines Corporation Object-based irregular-grid volume rendering
JPH07332970A (en) * 1994-06-02 1995-12-22 Canon Inc Image processing method
JP3242529B2 (en) * 1994-06-07 2001-12-25 松下通信工業株式会社 Stereo image matching method and stereo image parallax measurement method
US5582173A (en) * 1995-09-18 1996-12-10 Siemens Medical Systems, Inc. System and method for 3-D medical imaging using 2-D scan data

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5381518A (en) * 1986-04-14 1995-01-10 Pixar Method and apparatus for imaging volume data using voxel values
US5179441A (en) * 1991-12-18 1993-01-12 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration Near real-time stereo vision system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HE T., KAUFMAN A.: "FAST STEREO VOLUME RENDERING.", VISUALIZATION '96. PROCEEDINGS OF THE VISUALIZATION CONFERENCE. SAN FRANCISCO, OCT. 27 - NOV. 1, 1996., NEW YORK, IEEE/ACM., US, 1 January 1996 (1996-01-01), US, pages 49 - 56 + 466., XP002911492, ISBN: 978-0-7803-3673-5 *
SCHARSTEIN D., SZELISKI R.: "STEREO MATCHING WITH NON-LINEAR DIFFUSION.", PROCEEDINGS OF THE 1996 IEEE COMPUTER SOCIETY CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION. SAN FRANCISCO, JUNE 18 - 20, 1996., LOS ALAMITOS, IEEE COMP. SOC. PRESS, US, 1 June 1996 (1996-06-01), US, pages 343 - 350., XP002911491, ISBN: 978-0-8186-7258-3, DOI: 10.1109/CVPR.1996.517095 *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6275335B1 (en) 1999-07-16 2001-08-14 Sl3D, Inc. Single-lens 3D method, microscope, and video adapter
US6683716B1 (en) 1999-07-16 2004-01-27 Sl3D, Inc. Stereoscopic video/film adapter
CN102096919A (en) * 2010-12-31 2011-06-15 北京航空航天大学 Real-time three-dimensional matching method based on two-way weighted polymerization
CN104272731A (en) * 2012-05-10 2015-01-07 三星电子株式会社 Apparatus and method for processing 3d information
EP2848002A4 (en) * 2012-05-10 2016-01-20 Samsung Electronics Co Ltd Apparatus and method for processing 3d information
US9323977B2 (en) 2012-05-10 2016-04-26 Samsung Electronics Co., Ltd. Apparatus and method for processing 3D information
US9756312B2 (en) 2014-05-01 2017-09-05 Ecole polytechnique fédérale de Lausanne (EPFL) Hardware-oriented dynamically adaptive disparity estimation algorithm and its real-time hardware
WO2016183464A1 (en) * 2015-05-13 2016-11-17 Google Inc. Deepstereo: learning to predict new views from real world imagery
CN107438866A (en) * 2015-05-13 2017-12-05 谷歌公司 Depth is three-dimensional:Study predicts new view from real world image
US9916679B2 (en) 2015-05-13 2018-03-13 Google Llc Deepstereo: learning to predict new views from real world imagery
CN107438866B (en) * 2015-05-13 2020-12-01 谷歌公司 Depth stereo: learning to predict new views from real world imagery

Also Published As

Publication number Publication date
US5917937A (en) 1999-06-29

Similar Documents

Publication Publication Date Title
US5917937A (en) Method for performing stereo matching to recover depths, colors and opacities of surface elements
Szeliski et al. Stereo matching with transparency and matting
Szeliski A multi-view approach to motion and stereo
US6424351B1 (en) Methods and systems for producing three-dimensional images using relief textures
US6215496B1 (en) Sprites with depth
US5613048A (en) Three-dimensional image synthesis using view interpolation
US9843776B2 (en) Multi-perspective stereoscopy from light fields
Tauber et al. Review and preview: Disocclusion by inpainting for image-based rendering
Guillou et al. Using vanishing points for camera calibration and coarse 3D reconstruction from a single image
US6487304B1 (en) Multi-view approach to motion and stereo
US6778173B2 (en) Hierarchical image-based representation of still and animated three-dimensional object, method and apparatus for using this representation for the object rendering
Kang et al. Extracting view-dependent depth maps from a collection of images
JP5133418B2 (en) Method and apparatus for rendering a virtual object in a real environment
US9165401B1 (en) Multi-perspective stereoscopy from light fields
Irani et al. What does the scene look like from a scene point?
WO1999026198A2 (en) System and method for merging objects into an image sequence without prior knowledge of the scene in the image sequence
Wang et al. Second-depth shadow mapping
US20030146922A1 (en) System and method for diminished reality
Xu et al. Scalable image-based indoor scene rendering with reflections
Szeliski Stereo Algorithms and Representations for Image-based Rendering.
Hofsetz et al. Image-based rendering of range data with estimated depth uncertainty
Doggett et al. Displacement mapping using scan conversion hardware architectures
Fu et al. Triangle-based view Interpolation without depth-buffering
Kolhatkar et al. Real-time virtual viewpoint generation on the GPU for scene navigation
Lechlek et al. Interactive hdr image-based rendering from unstructured ldr photographs

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): CA JP

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: JP

Ref document number: 1998544128

Format of ref document f/p: F

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: CA