US20020130889A1 - System, method, and computer program product for real time transparency-based compositing - Google Patents

System, method, and computer program product for real time transparency-based compositing

Info

Publication number
US20020130889A1
US20020130889A1 (US 2002/0130889 A1); application US10/145,110
Authority
US
United States
Prior art keywords
input pixel
pixel
streams
computer program
input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/145,110
Inventor
David Blythe
James Foran
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Graphics Properties Holdings Inc
Morgan Stanley and Co LLC
Original Assignee
Silicon Graphics Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Silicon Graphics Inc filed Critical Silicon Graphics Inc
Priority to US10/145,110 priority Critical patent/US20020130889A1/en
Assigned to SILICON GRAPHICS, INC. reassignment SILICON GRAPHICS, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: FORAN, JAMES L., BLYTHE, DAVID
Publication of US20020130889A1 publication Critical patent/US20020130889A1/en
Assigned to WELLS FARGO FOOTHILL CAPITAL, INC. reassignment WELLS FARGO FOOTHILL CAPITAL, INC. SECURITY AGREEMENT Assignors: SILICON GRAPHICS, INC. AND SILICON GRAPHICS FEDERAL, INC. (EACH A DELAWARE CORPORATION)
Assigned to GENERAL ELECTRIC CAPITAL CORPORATION reassignment GENERAL ELECTRIC CAPITAL CORPORATION SECURITY INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: SILICON GRAPHICS, INC.
Assigned to MORGAN STANLEY & CO., INCORPORATED reassignment MORGAN STANLEY & CO., INCORPORATED ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: GENERAL ELECTRIC CAPITAL CORPORATION

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering


Abstract

A system, method, and computer program product for compositing rendered image data in near real time. The input pixel streams that constitute the rendered image data can be video streams, for example. Each input pixel stream can originate from its own graphics processing unit. Each pixel includes a set of color coordinates, such as red, green, and blue (RGB) coordinates, plus an alpha value. Compositing is performed by an image combiner implemented in either hardware or software. The image combiner accepts two or more input pixel streams, and performs compositing on corresponding pixels from each input pixel stream. The compositing process takes into account the color coordinates of each corresponding pixel as well as the alpha values. The compositing process also uses depth information that defines whether a given pixel is in the foreground or in the background relative to another corresponding pixel. The result of the compositing process is a resultant pixel stream based on corresponding pixels of each input pixel stream.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation-in-part of U.S. patent application Ser. No. 09/888,438, filed Jun. 26, 2001, which claims priority to U.S. Provisional Application No. 60/219,006, filed Jul. 18, 2000. U.S. patent application Ser. Nos. 09/888,438 and 60/219,006 are both incorporated herein by reference in their entireties.[0001]
  • STATEMENT REGARDING FEDERALLY-SPONSORED RESEARCH AND DEVELOPMENT
  • Not applicable. [0002]
  • REFERENCE TO MICROFICHE APPENDIX/SEQUENCE LISTING/TABLE/COMPUTER PROGRAM LISTING APPENDIX (submitted on a compact disc and an incorporation-by-reference of the material on the compact disc)
  • Not applicable. [0003]
  • BACKGROUND OF THE INVENTION
  • 1. Field of the Invention [0004]
  • The invention described herein relates to computer graphics, and more particularly to compositing of images. [0005]
  • 2. Background Art [0006]
  • A common problem in computer graphics is the efficient compositing of two or more rendered images to produce a single image. The compositing process typically involves the combination of the images, pixel by pixel, and takes into account the respective color coordinates of each pixel. The process can also take into account opacity, intensity, and the relative distances of the images from a viewer. [0007]
  • One way in which compositing has been accomplished in the past is through software-based readback from frame buffers. In such arrangements, two or more graphics processors each send frames of rendered graphics data to their respective frame buffers. The contents of each frame buffer are then read back, into a compositor module. The compositor module can be a compositing frame buffer or a graphics host. At the compositor module, software-based compositing is performed on corresponding pixels, one pixel from each frame buffer. A pixel from one frame buffer is composited with a pixel from the second frame buffer. This continues until all appropriate pixels in each frame buffer have been composited. A final output, comprising the resultant composited pixels, is then available from the compositor module. [0008]
  • In some applications such a system can be adequate. In applications requiring faster compositing, however, such a system may not be fast enough. In video applications, for example, where the final output must be produced in real time, such compositing entails unacceptable delay. [0009]
  • Hence there is a need for a system and method for fast compositing, where the compositing can take place at near real time rates. [0010]
  • BRIEF SUMMARY OF THE INVENTION
  • The invention described herein is a system, method, and computer program product for compositing rendered image data in real time or near real time. The input pixel streams that constitute the rendered image data can be video streams, for example. Each input pixel stream can originate from its own graphics processing unit. Each pixel includes a set of color coordinates, such as red, green, and blue (RGB) coordinates, and can also include an alpha value. Compositing is performed by an image combiner that can be implemented in either hardware or software. The image combiner accepts two or more input pixel streams, and performs compositing on corresponding pixels from each input pixel stream. The compositing process takes into account the color coordinates of each corresponding pixel as well as the alpha values or intensity values. The compositing process also uses depth information that defines whether a given pixel is in the foreground or in the background relative to another corresponding pixel. At the pixel level, the result of the compositing process is a resultant pixel based on the corresponding pixels of each input pixel stream. [0011]
  • The foregoing and other features and advantages of the invention will be apparent from the following, more particular description of a preferred embodiment of the invention, as illustrated in the accompanying drawings.[0012]
  • BRIEF DESCRIPTION OF THE DRAWINGS/FIGURES
  • FIG. 1 is a block diagram illustrating a system for real time or near real time combination of graphics images, according to an embodiment of the invention. [0013]
  • FIG. 2 is a block diagram illustrating an image combiner, according to an embodiment of the invention. [0014]
  • FIG. 3 is a flowchart illustrating a method for real time or near real time compositing of graphics images, according to an embodiment of the invention. [0015]
  • FIG. 4 is a flowchart illustrating the compositing process, according to an embodiment of the invention. [0016]
  • FIG. 5 is a flowchart illustrating a method for real time or near real time compositing of graphics images wherein inputs represent adjacent volumes of a three-dimensional scene, according to an embodiment of the invention.[0017]
  • DETAILED DESCRIPTION OF THE INVENTION I. Overview
  • The invention described herein is a system, method, and computer program product for the real time compositing of two or more input pixel streams. The input pixel streams can be video streams, for example. Each input pixel stream can originate from its own graphics processing unit. Each pixel includes a set of color coordinates, such as red, green, and blue (RGB) coordinates, plus an alpha value that defines the opacity of the pixel. Compositing is performed by an image combiner that can be implemented in either hardware or software. The image combiner accepts the input pixel streams, and performs compositing on corresponding pixels from each input pixel stream. The compositing process takes into account the color coordinates of each corresponding pixel as well as the alpha values or the intensity values. The compositing process also uses depth information that defines whether a given pixel stream is in the foreground or in the background relative to another corresponding pixel stream. At the pixel level, the result of the compositing process is a resultant pixel stream based on the corresponding pixels of each input pixel stream. [0018]
  • II. System
  • [0019] An exemplary context for the invention is illustrated in FIG. 1. Two graphics processors are shown, processors 105 and 110. Graphics processor 105 produces a frame of image data; likewise, graphics processor 110 produces a frame of image data. The image data of graphics processor 105 is then sent, as input 121, to image combiner 130. Similarly, the rendered image data produced by graphics processor 110 is sent as input 122 to combiner 130. Combiner 130 also receives depth information 132. Depth information 132 indicates the depth order of the inputs 121 and 122. The inputs can be understood as images to be combined; depth information 132 indicates which is in the foreground and which is in the background, relative to a viewer. Depth information 132 generally does not change in the context of a given frame.
  • [0020] Combiner 130 performs compositing on corresponding pixels of inputs 121 and 122 based on the color coordinates of the corresponding pixels, the alpha values of the corresponding pixels, and the depth information 132. The compositing process is described in greater detail in section III below. A resultant pixel stream 135 is then produced by combiner 130. In an embodiment of the invention, the pixels of pixel stream 135 that constitute a frame are stored in a frame buffer 140. In this embodiment, the output of system 100 is a frame of image data, output 145.
  • [0021] Note that while the embodiment of FIG. 1 shows two graphics processors and two associated inputs, other embodiments of the invention could feature more than two inputs. In such a case, the compositor performs compositing on all the inputs, taking into account the relative depth information of all inputs. Also, inputs 121 and 122 of FIG. 1 are received by combiner 130 from respective graphics processors 105 and 110. In an alternative context, however, inputs to combiner 130 can include the outputs of other image combiners.
  • [0022] Image combiner 130 is illustrated in greater detail in FIG. 2. Combiner 130 includes a depth determination module 205. Depth determination module 205 receives depth information 132. As described above, depth information 132 indicates the depth order of the images represented by the input pixel streams. Depth determination module 205 converts this information to a format usable for blending purposes. Output 210 of depth determination module 205 therefore conveys which input pixel stream is “over” another.
  • [0023] Output 210 is sent to one or more blending modules, shown in FIG. 2 as blending modules 215 through 225. In an embodiment of the invention, each blending module is associated with a specific color coordinate. In the embodiment of FIG. 2, blending modules 215, 220, and 225 are associated with red, green, and blue coordinates, respectively. Each blending module performs blending of a color from corresponding pixels from respective input pixel streams. Hence, blending module 215 blends red coordinates R1 and R2 from corresponding pixels. The alpha values of the corresponding pixels, α1 and α2, are also input to blending module 215, along with output 210 of depth determination module 205. Inputs to blending modules 220 and 225 are analogous.
  • [0024] The invention can implement any of several well-known blending operations. In a preferred embodiment, blending is performed in depth order, taking into account the opacity of input pixels. Here, colors are blended linearly according to alpha values. In one embodiment, a blended red coordinate, for example, has the value α1R1 + (1 − α1)α2R2, assuming that input 1 is over input 2. The other color coordinates are blended analogously. Alternatively, if compositing is based on maximum intensity, the value of the resultant red coordinate is the maximum of the red coordinates of the corresponding pixels. The other color coordinates would be blended analogously. Compositing is described more fully in Computer Graphics: Principles and Practice, Foley et al., Addison-Wesley, 1990, pp. 835-843 (incorporated herein by reference in its entirety).
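The patent describes the blend arithmetic in prose but contains no source code; the following minimal Python sketch (all names are illustrative, not from the patent) shows the linear depth-ordered "over" blend and the maximum-intensity alternative for a single color coordinate:

```python
def blend_over(c1, a1, c2, a2):
    """Linear depth-ordered blend of one color coordinate,
    assuming input 1 is over input 2: a1*c1 + (1 - a1)*a2*c2."""
    return a1 * c1 + (1.0 - a1) * a2 * c2


def blend_max(c1, c2):
    """Maximum-intensity compositing: keep the brighter coordinate."""
    return max(c1, c2)


# A fully opaque foreground coordinate hides the background entirely:
print(blend_over(0.8, 1.0, 0.3, 1.0))  # -> 0.8
```

The same two functions apply unchanged to the green and blue coordinates, which is why the patent's per-coordinate blending modules can be identical hardware.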
  • [0025] The composited color coordinates from blending modules 215 through 225 are then sent to an output module 230 for formatting as resultant pixel 235.
  • While the embodiment of FIG. 2 shows the blending of color coordinates in parallel, alternative embodiments can perform blending of color coordinates in serial using a single blending module. [0026]
  • III. Method
  • [0027] The method of the invention is illustrated in FIG. 3. The method begins at step 305. In steps 310 and 315, two inputs, shown here as inputs 1 and 2, are received at a compositor. In step 320, depth information is received by the compositor. In step 325, the compositing of the two inputs is performed in depth order, so as to take into account the depth information received in step 320. The process concludes at step 335.
  • [0028] Compositing step 325 is illustrated in greater detail in FIG. 4, according to an embodiment of the invention. The compositing process begins at step 405. In step 410, the depth order of the input pixel streams is determined for a frame. This determination is made based on the received depth information. In step 415, the first color coordinates from corresponding pixels are blended, based on the depth order and the pixels' alpha values. Similarly, in steps 420 and 425, the second and third color coordinates are blended. In step 430, a resultant pixel is output. In step 435, a determination is made as to whether additional pixels are to be blended for the current frame. If so, the process repeats from step 415 with a new set of corresponding pixels. Otherwise, the process ends at step 440.
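As a concrete illustration of steps 410 through 435, here is a hypothetical Python sketch of the per-frame loop (function and variable names are assumptions, not from the patent; pixels are modeled as (r, g, b, alpha) tuples):

```python
def composite_frame(stream1, stream2, input1_over_input2):
    """Composite corresponding pixels of two input streams for one
    frame, in depth order (cf. FIG. 4, steps 410-435)."""
    # Step 410: resolve the depth order for this frame.
    front, back = (stream1, stream2) if input1_over_input2 else (stream2, stream1)
    result = []
    for (f_r, f_g, f_b, f_a), (b_r, b_g, b_b, b_a) in zip(front, back):
        # Steps 415-425: blend each color coordinate linearly by alpha.
        def blend(cf, cb):
            return f_a * cf + (1.0 - f_a) * b_a * cb
        # Step 430: emit the resultant pixel, with a composited alpha.
        result.append((blend(f_r, b_r), blend(f_g, b_g), blend(f_b, b_b),
                       f_a + (1.0 - f_a) * b_a))
    return result
```

The loop over `zip(front, back)` plays the role of the step 435 test: it repeats until the corresponding pixels of the frame are exhausted.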
  • [0029] As mentioned above, the process of compositing pixels based on transparency is known to persons of ordinary skill in the art, and is documented in Computer Graphics: Principles and Practice, Foley et al., supra. Moreover, while the embodiment of FIG. 4 shows the blending of color coordinates in parallel, alternative embodiments can perform blending of color coordinates in serial.
  • [0030] A particular embodiment of the method of the invention is shown in FIG. 5. The method begins with step 505. In steps 510 and 515, inputs 1 and 2 are received, respectively. In this embodiment, however, each input represents a sub-volume of three-dimensional space from a scene that has been rendered. The sub-volumes represented by inputs 1 and 2 can therefore be thought of as a back “slab” and a front slab, respectively, where the terms front and back refer to the relative positions of each volume from the current frame perspective of the viewer. In step 520, the depth information is received, as before. In step 525, compositing is performed on the slabs in depth order, and the resultant pixel stream is output. The process concludes at step 535.
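The slab arrangement of FIG. 5 can be sketched as follows, assuming each slab arrives as a stream of (r, g, b, alpha) pixels (a hypothetical illustration; the helper names are not from the patent):

```python
def over(front_px, back_px):
    """'Over' blend of two (r, g, b, a) pixels: front over back."""
    f_r, f_g, f_b, f_a = front_px
    b_r, b_g, b_b, b_a = back_px

    def mix(cf, cb):
        return f_a * cf + (1.0 - f_a) * b_a * cb

    return (mix(f_r, b_r), mix(f_g, b_g), mix(f_b, b_b),
            f_a + (1.0 - f_a) * b_a)


def composite_slabs(back_slab, front_slab):
    """Composite corresponding pixels of a back slab and a front slab
    in depth order: every front pixel is 'over' its back pixel."""
    return [over(f, b) for f, b in zip(front_slab, back_slab)]
```

Because the result of `composite_slabs` is itself a pixel stream, the sketch extends naturally to more than two sub-volumes by chaining the composite back to front, which mirrors the patent's note that a combiner's inputs can include the outputs of other combiners.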
  • Note that while the embodiments of FIGS. 3 through 5 show two graphics processors and two associated inputs, other embodiments of the invention could feature more than two inputs. In such a case, the compositor performs compositing on all the inputs, taking into account depth information relating to the inputs. [0031]
  • IV. Computing Environment
  • [0032] Referring to FIG. 2, compositor 130 may be implemented using hardware, software, or a combination thereof. In particular, compositor 130 may be implemented using a computer program executing on a computer system or other processing system. An example of such a computer system 600 is shown in FIG. 6. The computer system 600 includes one or more processors, such as processor 604. The processor 604 is connected to a communication infrastructure 606 (e.g., a bus or network). Various software embodiments can be described in terms of this exemplary computer system. After reading this description, it will become apparent to a person skilled in the relevant art how to implement the invention using other computer systems and/or computer architectures.
  • [0033] Computer system 600 also includes a main memory 608, preferably random access memory (RAM), and may also include a secondary memory 610. The secondary memory 610 may include, for example, a hard disk drive 612 and/or a removable storage drive 614, representing a magnetic medium drive, an optical disk drive, etc. The removable storage drive 614 reads from and/or writes to a removable storage unit 618 in a well known manner. Removable storage unit 618 represents a magnetic medium, optical disk, etc. As will be appreciated, the removable storage unit 618 includes a computer usable storage medium having stored therein computer software and/or data.
  • [0034] Secondary memory 610 can also include other similar means for allowing computer programs or input data to be loaded into computer system 600. Such means may include, for example, a removable storage unit 622 and an interface 620. Examples of such means also include a program cartridge and cartridge interface (such as that found in video game devices), a removable memory chip (such as an EPROM, or PROM) and associated socket, and other removable storage units 622 and interfaces 620 which allow software and data to be transferred from the removable storage unit 622 to computer system 600.
  • [0035] Computer system 600 may also include a communications interface 624. Communications interface 624 allows software and data to be transferred between computer system 600 and external devices. Examples of communications interface 624 may include a modem, a network interface (such as an Ethernet card), a communications port, a PCMCIA slot and card, etc. Software and data transferred via communications interface 624 are in the form of signals 628 which may be electronic, electromagnetic, optical or other signals capable of being received by communications interface 624. These signals 628 are provided to communications interface 624 via a communications path (i.e., channel) 626. This channel 626 carries signals 628 into and out of computer system 600, and may be implemented using wire or cable, fiber optics, a phone line, a cellular phone link, an RF link or other communications channels. In an embodiment of the invention, signals 628 can comprise image data (such as inputs 221 and 222) and depth information (such as depth information 232).
  • [0036] In this document, the terms “computer program medium” and “computer usable medium” are used to refer generally to media such as removable storage drive 614, a hard disk installed in hard disk drive 612, and signals 628. These computer program products are means for providing software to computer system 600. The invention is directed in part to such computer program products.
  • [0037] Computer programs (also called computer control logic) are stored in main memory 608 and/or secondary memory 610. Computer programs may also be received via communications interface 624. Such computer programs, when executed, enable the computer system 600 to perform the features of the present invention as discussed herein. In particular, the computer programs, when executed, enable the processor 604 to perform the features of the present invention. Accordingly, such computer programs represent controllers of the computer system 600.
  • V. Conclusion
  • [0038] While various embodiments of the present invention have been described above, it should be understood that they have been presented by way of example, and not limitation. It will be apparent to persons skilled in the relevant art that various changes in detail can be made therein without departing from the spirit and scope of the invention. Thus, the present invention should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.

Claims (15)

What is claimed is:
1. A method of combining a plurality of input pixel streams to form a resultant pixel stream, comprising the steps of:
a) receiving the plurality of input pixel streams;
b) receiving depth information relating to the relative depth of the input pixel streams;
c) compositing, in depth order, corresponding pixels from each input pixel stream to form the resultant pixel stream in approximately real time.
2. The method of claim 1, wherein each pixel comprises color coordinates and an alpha value.
3. The method of claim 1, wherein step c) comprises the steps of:
i) blending the corresponding pixels according to the alpha values of the corresponding pixels and the depth information;
ii) outputting the resultant pixel; and
iii) if the input pixel streams contain additional corresponding pixels, repeating steps (i) and (ii) for the additional corresponding pixels.
4. The method of claim 1, wherein the input pixel streams and resultant pixel stream are video streams.
5. The method of claim 1, wherein the input pixel streams each represent renderings of adjacent sub-volumes, such that the resultant pixel stream represents a rendering of the adjacent sub-volumes viewed collectively.
6. The method of claim 1, wherein the depth information can vary for each frame.
7. An image combiner for combining a plurality of input pixel streams to form a single resultant pixel stream, comprising:
a depth determination module for converting depth information to an indication as to depth order of the input pixel streams; and
one or more blending modules that perform a blending operation on color coordinates of corresponding input pixels, on the basis of said depth order and alpha values associated with said corresponding input pixels.
8. The image combiner of claim 7, wherein the input pixel streams comprise image data output from a graphics processor.
9. The image combiner of claim 7, wherein the input pixel streams comprise image data output from another image combiner.
10. A computer program product comprising a computer usable medium having computer readable program code means embodied in said medium for causing a program to execute on a computer that combines a plurality of input pixel streams to form a single resultant pixel stream, said computer readable program code means comprising:
a first computer program code means for causing the computer to receive the plurality of input pixel streams;
a second computer program code means for causing the computer to receive depth information relating to the relative depth of the input pixel streams; and
a third computer program code means for causing the computer to composite, in depth order, corresponding pixels from each input pixel stream to form the resultant pixel stream in approximately real time.
11. The computer program product of claim 10, wherein each pixel comprises color coordinates and an alpha value.
12. The computer program product of claim 10, wherein said third computer program code means comprises:
i) computer program code means for combining the corresponding pixels according to the alpha values of the corresponding pixels and the depth information;
ii) computer program code means for outputting the resultant pixel; and
iii) computer program code means for repeating execution of code means (i) and (ii) for additional corresponding pixels, if the input pixel streams contain additional corresponding pixels.
13. The computer program product of claim 10, wherein the input pixel streams and resultant pixel stream are video streams.
14. The computer program product of claim 10, wherein the input pixel streams each represent renderings of adjacent sub-volumes, such that the resultant pixel stream represents a rendering of the adjacent sub-volumes viewed collectively.
15. The computer program product of claim 10, wherein the depth information can vary for each frame.
US10/145,110 2000-07-18 2002-05-15 System, method, and computer program product for real time transparency-based compositing Abandoned US20020130889A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US10/145,110 US20020130889A1 (en) 2000-07-18 2002-05-15 System, method, and computer program product for real time transparency-based compositing

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US21900600P 2000-07-18 2000-07-18
US09/888,438 US7405734B2 (en) 2000-07-18 2001-06-26 Method and system for presenting three-dimensional computer graphics images using multiple graphics processing units
US10/145,110 US20020130889A1 (en) 2000-07-18 2002-05-15 System, method, and computer program product for real time transparency-based compositing

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US09/888,438 Continuation-In-Part US7405734B2 (en) 2000-07-18 2001-06-26 Method and system for presenting three-dimensional computer graphics images using multiple graphics processing units

Publications (1)

Publication Number Publication Date
US20020130889A1 true US20020130889A1 (en) 2002-09-19

Family

ID=26913470

Family Applications (2)

Application Number Title Priority Date Filing Date
US09/888,438 Expired - Fee Related US7405734B2 (en) 2000-07-18 2001-06-26 Method and system for presenting three-dimensional computer graphics images using multiple graphics processing units
US10/145,110 Abandoned US20020130889A1 (en) 2000-07-18 2002-05-15 System, method, and computer program product for real time transparency-based compositing

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US09/888,438 Expired - Fee Related US7405734B2 (en) 2000-07-18 2001-06-26 Method and system for presenting three-dimensional computer graphics images using multiple graphics processing units

Country Status (2)

Country Link
US (2) US7405734B2 (en)
WO (1) WO2002007092A2 (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020015055A1 (en) * 2000-07-18 2002-02-07 Silicon Graphics, Inc. Method and system for presenting three-dimensional computer graphics images using multiple graphics processing units
US20040218100A1 (en) * 2003-05-02 2004-11-04 Staker Allan Robert Interactive system and method for video compositing
US20060282781A1 (en) * 2005-06-10 2006-12-14 Diamond Michael B Using a graphics system to enable a multi-user computer system
US20090164908A1 (en) * 2005-06-10 2009-06-25 Nvidia Corporation Using a scalable graphics system to enable a general-purpose multi-user computer system
US20100027961A1 (en) * 2008-07-01 2010-02-04 Yoostar Entertainment Group, Inc. Interactive systems and methods for video compositing
US10332560B2 (en) 2013-05-06 2019-06-25 Noo Inc. Audio-video compositing and effects
KR20230094827A (en) * 2021-12-21 2023-06-28 동아대학교 산학협력단 Generating apparatus and method of image data for fire detection training, and learning apparatus and method using the same

Families Citing this family (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030132291A1 (en) * 2002-01-11 2003-07-17 Metrologic Instruments, Inc. Point of sale (POS) station having bar code reading system with integrated internet-enabled customer-kiosk terminal
US8042740B2 (en) * 2000-11-24 2011-10-25 Metrologic Instruments, Inc. Method of reading bar code symbols on objects at a point-of-sale station by passing said objects through a complex of stationary coplanar illumination and imaging planes projected into a 3D imaging volume
DE60235989D1 (en) * 2001-06-26 2010-05-27 Amgen Fremont Inc ANTIBODIES AGAINST OPGL
GB2378108B (en) 2001-07-24 2005-08-17 Imagination Tech Ltd Three dimensional graphics system
US20060038009A1 (en) 2002-01-11 2006-02-23 Metrologic Instruments, Inc. Point of sale (POS) based bar code reading and cash register systems with integrated internet-enabled customer-kiosk terminals
KR100454508B1 (en) * 2002-07-05 2004-11-03 허명준 Natural water having deodorization ability and sterilization effect against resistent bacteria, and produce method thereof
JP4467267B2 (en) * 2002-09-06 2010-05-26 株式会社ソニー・コンピュータエンタテインメント Image processing method, image processing apparatus, and image processing system
US20080094402A1 (en) * 2003-11-19 2008-04-24 Reuven Bakalash Computing system having a parallel graphics rendering system employing multiple graphics processing pipelines (GPPLS) dynamically controlled according to time, image and object division modes of parallel operation during the run-time of graphics-based applications running on the computing system
US8497865B2 (en) 2006-12-31 2013-07-30 Lucid Information Technology, Ltd. Parallel graphics system employing multiple graphics processing pipelines with multiple graphics processing units (GPUS) and supporting an object division mode of parallel graphics processing using programmable pixel or vertex processing resources provided with the GPUS
US20070291040A1 (en) * 2005-01-25 2007-12-20 Reuven Bakalash Multi-mode parallel graphics rendering system supporting dynamic profiling of graphics-based applications and automatic control of parallel modes of operation
US7961194B2 (en) * 2003-11-19 2011-06-14 Lucid Information Technology, Ltd. Method of controlling in real time the switching of modes of parallel operation of a multi-mode parallel graphics processing subsystem embodied within a host computing system
US20080094403A1 (en) * 2003-11-19 2008-04-24 Reuven Bakalash Computing system capable of parallelizing the operation graphics processing units (GPUs) supported on a CPU/GPU fusion-architecture chip and one or more external graphics cards, employing a software-implemented multi-mode parallel graphics rendering subsystem
US8085273B2 (en) 2003-11-19 2011-12-27 Lucid Information Technology, Ltd Multi-mode parallel graphics rendering system employing real-time automatic scene profiling and mode control
US20090027383A1 (en) 2003-11-19 2009-01-29 Lucid Information Technology, Ltd. Computing system parallelizing the operation of multiple graphics processing pipelines (GPPLs) and supporting depth-less based image recomposition
CN1890660A (en) 2003-11-19 2007-01-03 路西德信息技术有限公司 Method and system for multiple 3-d graphic pipeline over a PC bus
US7372463B2 (en) * 2004-04-09 2008-05-13 Paul Vivek Anand Method and system for intelligent scalable animation with intelligent parallel processing engine and intelligent animation engine
DE102004042166A1 (en) * 2004-08-31 2006-03-16 MAX-PLANCK-Gesellschaft zur Förderung der Wissenschaften e.V. Image processing device and corresponding operating method
WO2006064774A1 (en) * 2004-12-13 2006-06-22 Matsushita Electric Industrial Co., Ltd. Multilayer body containing active material layer and solid electrolyte layer, and all-solid lithium secondary battery using same
JP2008538620A (en) 2005-01-25 2008-10-30 ルーシッド インフォメイション テクノロジー リミテッド Graphics processing and display system using multiple graphics cores on a monolithic silicon chip
US20090096798A1 (en) * 2005-01-25 2009-04-16 Reuven Bakalash Graphics Processing and Display System Employing Multiple Graphics Cores on a Silicon Chip of Monolithic Construction
US7450129B2 (en) * 2005-04-29 2008-11-11 Nvidia Corporation Compression of streams of rendering commands
US7656412B2 (en) * 2005-12-21 2010-02-02 Microsoft Corporation Texture resampling with a processor
US7924278B2 (en) * 2006-07-28 2011-04-12 Microsoft Corporation Real-time GPU rendering of piecewise algebraic surfaces
GB2449398B (en) * 2006-09-29 2009-02-11 Imagination Tech Ltd Improvements in memory management for systems for generating 3-dimensional computer images
US7830387B2 (en) * 2006-11-07 2010-11-09 Microsoft Corporation Parallel engine support in display driver model
KR100803220B1 (en) * 2006-11-20 2008-02-14 삼성전자주식회사 Method and apparatus for rendering of 3d graphics of multi-pipeline
WO2008067483A1 (en) * 2006-11-29 2008-06-05 University Of Utah Research Foundation Ray tracing a three dimensional scene using a grid
US7932902B2 (en) * 2007-09-25 2011-04-26 Microsoft Corporation Emitting raster and vector content from a single software component
US8330763B2 (en) * 2007-11-28 2012-12-11 Siemens Aktiengesellschaft Apparatus and method for volume rendering on multiple graphics processing units (GPUs)
US8605081B2 (en) * 2008-10-26 2013-12-10 Zebra Imaging, Inc. Converting 3D data to hogel data
GB0823254D0 (en) 2008-12-19 2009-01-28 Imagination Tech Ltd Multi level display control list in tile based 3D computer graphics system
GB0823468D0 (en) * 2008-12-23 2009-01-28 Imagination Tech Ltd Display list control stream grouping in tile based 3D computer graphics systems
JP2010165100A (en) * 2009-01-14 2010-07-29 Cellius Inc Image generation system, program, and information storage medium
US9235452B2 (en) * 2010-02-05 2016-01-12 Microsoft Technology Licensing, Llc Graphics remoting using augmentation data
WO2011149460A1 (en) * 2010-05-27 2011-12-01 Landmark Graphics Corporation Method and system of rendering well log values
US9424685B2 (en) 2012-07-31 2016-08-23 Imagination Technologies Limited Unified rasterization and ray tracing rendering environments
CN102866887B (en) * 2012-09-07 2015-03-25 深圳市至高通信技术发展有限公司 Method and device for realizing three-dimensional user interface
GB201223089D0 (en) 2012-12-20 2013-02-06 Imagination Tech Ltd Hidden culling in tile based computer generated graphics
GB2541084B (en) 2013-03-15 2017-05-17 Imagination Tech Ltd Rendering with point sampling and pre-computed light transport information
GB2506706B (en) 2013-04-02 2014-09-03 Imagination Tech Ltd Tile-based graphics
KR102124395B1 (en) 2013-08-12 2020-06-18 삼성전자주식회사 Graphics processing apparatus and method thereof
US9417911B2 (en) * 2014-03-12 2016-08-16 Live Planet Llc Systems and methods for scalable asynchronous computing framework

Citations (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4949280A (en) * 1988-05-10 1990-08-14 Battelle Memorial Institute Parallel processor-based raster graphics system architecture
US5101475A (en) * 1989-04-17 1992-03-31 The Research Foundation Of State University Of New York Method and apparatus for generating arbitrary projections of three-dimensional voxel-based data
US5187660A (en) * 1989-12-01 1993-02-16 At&T Bell Laboratories Arrangement for displaying on a display volumetric data
US5363475A (en) * 1988-12-05 1994-11-08 Rediffusion Simulation Limited Image generator for generating perspective views from data defining a model having opaque and translucent features
US5434968A (en) * 1991-09-10 1995-07-18 Kubota Corporation Image data processing device with multi-processor
US5459823A (en) * 1990-07-05 1995-10-17 Canon Kabushiki Kaisha Graphics engine for true colour 2D graphics
US5511154A (en) * 1990-11-15 1996-04-23 International Business Machines Corporation Method and apparatus for managing concurrent access to multiple memories
US5544283A (en) * 1993-07-26 1996-08-06 The Research Foundation Of State University Of New York Method and apparatus for real-time volume rendering from an arbitrary viewing direction
US5546530A (en) * 1990-11-30 1996-08-13 Vpl Research, Inc. Method and apparatus for rendering graphical images using parallel processing
US5557711A (en) * 1990-10-17 1996-09-17 Hewlett-Packard Company Apparatus and method for volume rendering
US5640496A (en) * 1991-02-04 1997-06-17 Medical Instrumentation And Diagnostics Corp. (Midco) Method and apparatus for management of image data by linked lists of pixel values
US5734808A (en) * 1993-09-28 1998-03-31 Namco Ltd. Pipeline processing device, clipping processing device, three-dimensional simulator device and pipeline processing method
US5757385A (en) * 1994-07-21 1998-05-26 International Business Machines Corporation Method and apparatus for managing multiprocessor graphical workload distribution
US5760781A (en) * 1994-09-06 1998-06-02 The Research Foundation Of State University Of New York Apparatus and method for real-time volume visualization
US5764228A (en) * 1995-03-24 1998-06-09 3Dlabs Inc., Ltd. Graphics pre-processing and rendering system
US5774133A (en) * 1991-01-09 1998-06-30 3Dlabs Ltd. Computer system with improved pixel processing capabilities
US5794016A (en) * 1995-12-11 1998-08-11 Dynamic Pictures, Inc. Parallel-processor graphics architecture
US5841444A (en) * 1996-03-21 1998-11-24 Samsung Electronics Co., Ltd. Multiprocessor graphics system
US5963212A (en) * 1992-08-26 1999-10-05 Bakalash; Reuven Parallel computing system for modeling and data processing
US6008813A (en) * 1997-08-01 1999-12-28 Mitsubishi Electric Information Technology Center America, Inc. (Ita) Real-time PC based volume rendering system
US6016150A (en) * 1995-08-04 2000-01-18 Microsoft Corporation Sprite compositor and method for performing lighting and shading operations using a compositor to combine factored image layers
US6052129A (en) * 1997-10-01 2000-04-18 International Business Machines Corporation Method and apparatus for deferred clipping of polygons
US6064393A (en) * 1995-08-04 2000-05-16 Microsoft Corporation Method for measuring the fidelity of warped image layer approximations in a real-time graphics rendering pipeline
US6100899A (en) * 1997-10-02 2000-08-08 Silicon Graphics, Inc. System and method for performing high-precision, multi-channel blending using multiple blending passes
US6246421B1 (en) * 1996-12-24 2001-06-12 Sony Corporation Apparatus and method for parallel rendering of image pixels
US20010036356A1 (en) * 2000-04-07 2001-11-01 Autodesk, Inc. Non-linear video editing system
US6339432B1 (en) * 1999-09-24 2002-01-15 Microsoft Corporation Using alpha values to control pixel blending
US20020015055A1 (en) * 2000-07-18 2002-02-07 Silicon Graphics, Inc. Method and system for presenting three-dimensional computer graphics images using multiple graphics processing units
US20030025699A1 (en) * 1998-03-02 2003-02-06 Wei Tien En Method and apparatus for a video graphics circuit having parallel pixel processing
US6532017B1 (en) * 1998-11-12 2003-03-11 Terarecon, Inc. Volume rendering pipeline
US6559843B1 (en) * 1993-10-01 2003-05-06 Compaq Computer Corporation Segmented ray casting data parallel volume rendering
US6570579B1 (en) * 1998-11-09 2003-05-27 Broadcom Corporation Graphics display system
US6577317B1 (en) * 1998-08-20 2003-06-10 Apple Computer, Inc. Apparatus and method for geometry operations in a 3D-graphics pipeline
US6597363B1 (en) * 1998-08-20 2003-07-22 Apple Computer, Inc. Graphics processor with deferred shading
US20040021659A1 (en) * 2002-07-31 2004-02-05 Silicon Graphics Inc. System and method for decoupling the user interface and application window in a graphics application
US6870539B1 (en) * 2000-11-17 2005-03-22 Hewlett-Packard Development Company, L.P. Systems for compositing graphical data
US6903753B1 (en) * 2000-10-31 2005-06-07 Microsoft Corporation Compositing images from multiple sources

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5392393A (en) 1993-06-04 1995-02-21 Sun Microsystems, Inc. Architecture for a high performance three dimensional graphics accelerator
JP3889195B2 (en) * 1999-02-03 2007-03-07 株式会社東芝 Image processing apparatus, image processing system, and image processing method

Patent Citations (39)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4949280A (en) * 1988-05-10 1990-08-14 Battelle Memorial Institute Parallel processor-based raster graphics system architecture
US5363475A (en) * 1988-12-05 1994-11-08 Rediffusion Simulation Limited Image generator for generating perspective views from data defining a model having opaque and translucent features
US5101475A (en) * 1989-04-17 1992-03-31 The Research Foundation Of State University Of New York Method and apparatus for generating arbitrary projections of three-dimensional voxel-based data
US5187660A (en) * 1989-12-01 1993-02-16 At&T Bell Laboratories Arrangement for displaying on a display volumetric data
US5459823A (en) * 1990-07-05 1995-10-17 Canon Kabushiki Kaisha Graphics engine for true colour 2D graphics
US5557711A (en) * 1990-10-17 1996-09-17 Hewlett-Packard Company Apparatus and method for volume rendering
US5511154A (en) * 1990-11-15 1996-04-23 International Business Machines Corporation Method and apparatus for managing concurrent access to multiple memories
US5546530A (en) * 1990-11-30 1996-08-13 Vpl Research, Inc. Method and apparatus for rendering graphical images using parallel processing
US5774133A (en) * 1991-01-09 1998-06-30 3Dlabs Ltd. Computer system with improved pixel processing capabilities
US5640496A (en) * 1991-02-04 1997-06-17 Medical Instrumentation And Diagnostics Corp. (Midco) Method and apparatus for management of image data by linked lists of pixel values
US5434968A (en) * 1991-09-10 1995-07-18 Kubota Corporation Image data processing device with multi-processor
US5963212A (en) * 1992-08-26 1999-10-05 Bakalash; Reuven Parallel computing system for modeling and data processing
US5544283A (en) * 1993-07-26 1996-08-06 The Research Foundation Of State University Of New York Method and apparatus for real-time volume rendering from an arbitrary viewing direction
US5734808A (en) * 1993-09-28 1998-03-31 Namco Ltd. Pipeline processing device, clipping processing device, three-dimensional simulator device and pipeline processing method
US6559843B1 (en) * 1993-10-01 2003-05-06 Compaq Computer Corporation Segmented ray casting data parallel volume rendering
US5757385A (en) * 1994-07-21 1998-05-26 International Business Machines Corporation Method and apparatus for managing multiprocessor graphical workload distribution
US5847711A (en) * 1994-09-06 1998-12-08 The Research Foundation Of State University Of New York Apparatus and method for parallel and perspective real-time volume visualization
US5760781A (en) * 1994-09-06 1998-06-02 The Research Foundation Of State University Of New York Apparatus and method for real-time volume visualization
US5764228A (en) * 1995-03-24 1998-06-09 3Dlabs Inc., Ltd. Graphics pre-processing and rendering system
US6016150A (en) * 1995-08-04 2000-01-18 Microsoft Corporation Sprite compositor and method for performing lighting and shading operations using a compositor to combine factored image layers
US6064393A (en) * 1995-08-04 2000-05-16 Microsoft Corporation Method for measuring the fidelity of warped image layer approximations in a real-time graphics rendering pipeline
US5794016A (en) * 1995-12-11 1998-08-11 Dynamic Pictures, Inc. Parallel-processor graphics architecture
US5841444A (en) * 1996-03-21 1998-11-24 Samsung Electronics Co., Ltd. Multiprocessor graphics system
US6246421B1 (en) * 1996-12-24 2001-06-12 Sony Corporation Apparatus and method for parallel rendering of image pixels
US6008813A (en) * 1997-08-01 1999-12-28 Mitsubishi Electric Information Technology Center America, Inc. (Ita) Real-time PC based volume rendering system
US6243098B1 (en) * 1997-08-01 2001-06-05 Terarecon, Inc. Volume rendering pipelines
US6052129A (en) * 1997-10-01 2000-04-18 International Business Machines Corporation Method and apparatus for deferred clipping of polygons
US6100899A (en) * 1997-10-02 2000-08-08 Silicon Graphics, Inc. System and method for performing high-precision, multi-channel blending using multiple blending passes
US20030025699A1 (en) * 1998-03-02 2003-02-06 Wei Tien En Method and apparatus for a video graphics circuit having parallel pixel processing
US6577317B1 (en) * 1998-08-20 2003-06-10 Apple Computer, Inc. Apparatus and method for geometry operations in a 3D-graphics pipeline
US6597363B1 (en) * 1998-08-20 2003-07-22 Apple Computer, Inc. Graphics processor with deferred shading
US6570579B1 (en) * 1998-11-09 2003-05-27 Broadcom Corporation Graphics display system
US6532017B1 (en) * 1998-11-12 2003-03-11 Terarecon, Inc. Volume rendering pipeline
US6339432B1 (en) * 1999-09-24 2002-01-15 Microsoft Corporation Using alpha values to control pixel blending
US20010036356A1 (en) * 2000-04-07 2001-11-01 Autodesk, Inc. Non-linear video editing system
US20020015055A1 (en) * 2000-07-18 2002-02-07 Silicon Graphics, Inc. Method and system for presenting three-dimensional computer graphics images using multiple graphics processing units
US6903753B1 (en) * 2000-10-31 2005-06-07 Microsoft Corporation Compositing images from multiple sources
US6870539B1 (en) * 2000-11-17 2005-03-22 Hewlett-Packard Development Company, L.P. Systems for compositing graphical data
US20040021659A1 (en) * 2002-07-31 2004-02-05 Silicon Graphics Inc. System and method for decoupling the user interface and application window in a graphics application

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7405734B2 (en) 2000-07-18 2008-07-29 Silicon Graphics, Inc. Method and system for presenting three-dimensional computer graphics images using multiple graphics processing units
US20020015055A1 (en) * 2000-07-18 2002-02-07 Silicon Graphics, Inc. Method and system for presenting three-dimensional computer graphics images using multiple graphics processing units
US7646434B2 (en) 2003-05-02 2010-01-12 Yoostar Entertainment Group, Inc. Video compositing systems for providing interactive entertainment
US20040218100A1 (en) * 2003-05-02 2004-11-04 Staker Allan Robert Interactive system and method for video compositing
US20090040385A1 (en) * 2003-05-02 2009-02-12 Megamedia, Llc Methods and systems for controlling video compositing in an interactive entertainment system
US20090041422A1 (en) * 2003-05-02 2009-02-12 Megamedia, Llc Methods and systems for controlling video compositing in an interactive entertainment system
US7528890B2 (en) 2003-05-02 2009-05-05 Yoostar Entertainment Group, Inc. Interactive system and method for video compositing
US7649571B2 (en) 2003-05-02 2010-01-19 Yoostar Entertainment Group, Inc. Methods for interactive video compositing
US20090237565A1 (en) * 2003-05-02 2009-09-24 Yoostar Entertainment Group, Inc. Video compositing systems for providing interactive entertainment
US20090237566A1 (en) * 2003-05-02 2009-09-24 Yoostar Entertainment Group, Inc. Methods for interactive video compositing
US20090164908A1 (en) * 2005-06-10 2009-06-25 Nvidia Corporation Using a scalable graphics system to enable a general-purpose multi-user computer system
US20060282781A1 (en) * 2005-06-10 2006-12-14 Diamond Michael B Using a graphics system to enable a multi-user computer system
US8893016B2 (en) * 2005-06-10 2014-11-18 Nvidia Corporation Using a graphics system to enable a multi-user computer system
US10026140B2 (en) 2005-06-10 2018-07-17 Nvidia Corporation Using a scalable graphics system to enable a general-purpose multi-user computer system
US20100027961A1 (en) * 2008-07-01 2010-02-04 Yoostar Entertainment Group, Inc. Interactive systems and methods for video compositing
US20100031149A1 (en) * 2008-07-01 2010-02-04 Yoostar Entertainment Group, Inc. Content preparation systems and methods for interactive video systems
US8824861B2 (en) 2008-07-01 2014-09-02 Yoostar Entertainment Group, Inc. Interactive systems and methods for video compositing
US9143721B2 (en) 2008-07-01 2015-09-22 Noo Inc. Content preparation systems and methods for interactive video systems
US10332560B2 (en) 2013-05-06 2019-06-25 Noo Inc. Audio-video compositing and effects
KR20230094827A (en) * 2021-12-21 2023-06-28 동아대학교 산학협력단 Generating apparatus and method of image data for fire detection training, and learning apparatus and method using the same
KR102630183B1 (en) * 2021-12-21 2024-01-25 동아대학교 산학협력단 Generating apparatus and method of image data for fire detection training, and learning apparatus and method using the same

Also Published As

Publication number Publication date
US20020015055A1 (en) 2002-02-07
WO2002007092A2 (en) 2002-01-24
WO2002007092A8 (en) 2002-10-03
US7405734B2 (en) 2008-07-29
WO2002007092A3 (en) 2002-04-25

Similar Documents

Publication Publication Date Title
US20020130889A1 (en) System, method, and computer program product for real time transparency-based compositing
US7342588B2 (en) Single logical screen system and method for rendering graphical data
US7102653B2 (en) Systems and methods for rendering graphical data
US6763175B1 (en) Flexible video editing architecture with software video effect filter components
US6466222B1 (en) Apparatus and method for computing graphics attributes in a graphics display system
US6882346B1 (en) System and method for efficiently rendering graphical data
US8306399B1 (en) Real-time video editing architecture
EP0367183B1 (en) System for high speed computer graphics computation
US8184127B2 (en) Apparatus for and method of generating graphic data, and information recording medium
US6763176B1 (en) Method and apparatus for real-time video editing using a graphics processor
CN104767956A (en) Video processing with multiple graphics processing units
US6924799B2 (en) Method, node, and network for compositing a three-dimensional stereo image from a non-stereo application
US7554563B2 (en) Video display control apparatus and video display control method
EP0553549A1 (en) Architecture for transferring pixel streams
US7864197B2 (en) Method of background colour removal for Porter and Duff compositing
US8031197B1 (en) Preprocessor for formatting video into graphics processing unit (“GPU”)-formatted data for transit directly to a graphics memory
US6154195A (en) System and method for performing dithering with a graphics unit having an oversampling buffer
US6967659B1 (en) Circuitry and systems for performing two-dimensional motion compensation using a three-dimensional pipeline and methods of operating the same
EP0752686B1 (en) Loopback video preview for a computer display
US6680739B1 (en) Systems and methods for compositing graphical data
US7103226B1 (en) Video processor with composite graphics and video picture elements
CN101067924B (en) Visual frequency accelerating method based on the third party playing software
CN110312084B (en) Multi-channel video processor and processor-based method for watermark overlay
US6791553B1 (en) System and method for efficiently rendering a jitter enhanced graphical image
US7397479B2 (en) Programmable multiple texture combine circuit for a graphics processing system and method for use thereof

Legal Events

Date Code Title Description
AS Assignment

Owner name: SILICON GRAPHICS, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BLYTHE, DAVID;FORAN, JAMES L.;REEL/FRAME:013202/0336;SIGNING DATES FROM 20020328 TO 20020422

AS Assignment

Owner name: WELLS FARGO FOOTHILL CAPITAL, INC., CALIFORNIA

Free format text: SECURITY AGREEMENT;ASSIGNOR:SILICON GRAPHICS, INC. AND SILICON GRAPHICS FEDERAL, INC. (EACH A DELAWARE CORPORATION);REEL/FRAME:016871/0809

Effective date: 20050412

AS Assignment

Owner name: GENERAL ELECTRIC CAPITAL CORPORATION, CALIFORNIA

Free format text: SECURITY INTEREST;ASSIGNOR:SILICON GRAPHICS, INC.;REEL/FRAME:018545/0777

Effective date: 20061017

AS Assignment

Owner name: MORGAN STANLEY & CO., INCORPORATED, NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:GENERAL ELECTRIC CAPITAL CORPORATION;REEL/FRAME:019995/0895

Effective date: 20070926

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION