WO2002028085A2 - Reusing decoded multimedia data for multiple users - Google Patents

Reusing decoded multimedia data for multiple users Download PDF

Info

Publication number
WO2002028085A2
WO2002028085A2 PCT/US2001/042401
Authority
WO
WIPO (PCT)
Prior art keywords
storage
data
media
requested data
decoded
Prior art date
Application number
PCT/US2001/042401
Other languages
French (fr)
Other versions
WO2002028085A9 (en)
WO2002028085A3 (en)
Inventor
Alan T. Ruberg
Gerard A. Wall
Original Assignee
Sun Microsystems, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sun Microsystems, Inc. filed Critical Sun Microsystems, Inc.
Priority to AU2001296944A priority Critical patent/AU2001296944A1/en
Priority to GB0307244A priority patent/GB2385966A/en
Publication of WO2002028085A2 publication Critical patent/WO2002028085A2/en
Publication of WO2002028085A3 publication Critical patent/WO2002028085A3/en
Publication of WO2002028085A9 publication Critical patent/WO2002028085A9/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/21815Source of audio or video content, e.g. local disk arrays comprising local storage units
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/222Secondary servers, e.g. proxy server, cable television Head-end
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/231Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
    • H04N21/23106Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion involving caching operations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/27Server based end-user applications
    • H04N21/274Storing end-user multimedia data in response to end-user request, e.g. network recorder
    • H04N21/2747Remote storage of video programs received via the downstream path, e.g. from the server
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/16Analogue secrecy systems; Analogue subscription systems
    • H04N7/173Analogue secrecy systems; Analogue subscription systems with two-way working, e.g. subscriber sending a programme selection signal
    • H04N7/17309Transmission or handling of upstream communications
    • H04N7/17336Handling of requests in head-ends

Definitions

  • FIG. 3 illustrates a block diagram of an embodiment of a desktop unit illustrated in FIG. 2.
  • Various components of the DTU are coupled internally to a Peripheral Component Interconnect (PCI) bus 226.
  • a network controller 210 is coupled to PCI bus 226 and communicates to an interconnect fabric such as an ethernet, through path 228.
  • An audio codec 212 receives audio data on interface 230 and is coupled to network controller 210. Audio codec may be a hardware circuit (chip) or software routine that converts sound into digital code and vice versa.
  • An embedded processor 204 is coupled to PCI bus 226.
  • Embedded processor 204 may be, for example, a Sparc2ep, which is coupled to a flash memory 206 and a dynamic random access memory (DRAM) 208.
  • processor 204 may be a SPARC™ microprocessor manufactured by Sun Microsystems, Inc., a 680X0 processor manufactured by Motorola, an 80X86 manufactured by Intel, a Pentium processor, or any other suitable microprocessor or microcomputer.
  • a video controller e.g., frame buffer controller 214, is also coupled to PCI bus 226.
  • Video controller 214 may be, for example, an ATI RagePro+ frame buffer controller (or any other suitable controller) that provides Super Video Graphics Array (SVGA) output on path 236.
  • National TV Standards Committee (NTSC) or Phase Alternating Line (PAL) data may be provided via path 232 to video controller 214 through video decoder 220.
  • NTSC or PAL data may be provided via path 234 from video controller 214 through a video encoder 222.
  • a smart card interface 218 and a Synchronous Graphics Random Access Memory (SGRAM) 216 may also be coupled to video controller 214.
  • desktop units 170, 180 and 190 may be implemented using a single chip that includes the necessary processing capabilities and a graphics renderer.
  • FIG. 4 shows a general purpose computer 250 that may be used to implement servers 152 through 156 shown in FIG. 2.
  • a keyboard 251 and mouse 252 are coupled to a bi-directional system bus 253. Keyboard 251 and mouse 252 introduce user input to computer system 250 and communicate user input to a processor 254. Other suitable input devices may be used in addition to, or in place of, mouse 252 and/or keyboard 251.
  • I/O (input/output) unit 255 coupled to bi-directional system bus 253 represents I/O elements such as printers, A/V (audio/video) I/Os, etc.
  • Bi-directional system bus 253 may contain, for example, thirty-two address lines for addressing a video memory 256 or a main memory 257.
  • System bus 253 may also include, for example, a 32-bit data bus for transferring data between and among components, e.g., processor 254, main memory 257, video memory 256 and mass storage 258, all coupled to bus 253.
  • multiplexed data/address lines may be used instead of separate data and address lines.
  • Main memory 257 may comprise dynamic random access memory (DRAM) or other suitable memories.
  • Video memory 256 may be a dual-ported video random access memory.
  • one port of video memory 256 may be coupled to a video amplifier 259 which is used to drive a monitor 260 which may be a cathode ray tube (CRT) raster monitor, a liquid crystal display (LCD), or any suitable monitors for displaying graphic images.
  • Video amplifier 259 is well known in the art and may be implemented by any suitable apparatus.
  • pixel data stored in video memory 256 is converted to a raster signal suitable for use by monitor 260.
  • Mass storage 258 may include both fixed and removable media, such as magnetic, optical or magneto-optical storage systems or any other available mass storage technology.
  • Computer 250 may include a communication interface 261 coupled to bidirectional system bus 253.
  • Communication interface 261 provides a two-way data communication via a network link 262 to a local network 263.
  • communication interface 261 is an integrated services digital network (ISDN) card or a modem
  • communication interface 261 provides a data communication connection to the corresponding type of telephone line, which comprises part of network link 262.
  • Wireless links are also possible.
  • communication interface 261 sends and receives electrical, electromagnetic or optical signals which carry digital data streams representing various types of information.
  • Network link 262 typically provides data communication through one or more networks to other data devices.
  • network link 262 may provide a connection through local network 263 to a host computer 264 or to data equipment operated by an Internet Service Provider (ISP) 265.
  • ISP 265 in turn provides data communication services through the world wide packet data communication network commonly referred to as the "internet" 266.
  • Local network 263 and internet 266 both use electrical, electromagnetic or optical signals which carry digital data streams.
  • the signals through the various networks and the signals on network link 262 and through communication interface 261, which carry the digital data to and from computer 250, are exemplary forms of carrier waves transporting the information.
  • Computer 250 can send messages and receive data, including program code, through these communication channels.
  • server 267 might transmit a requested code for an application program through Internet 266, ISP 265, local network 263 and communication interface 261.
  • the received code may be executed by processor 254 as the code is received, and/or stored in mass storage 258 or other non-volatile storage for later execution. In this manner, computer 250 may obtain application code in the form of a carrier wave.
  • Application code may be embodied in any form of computer program product.
  • a computer program product comprises a medium configured to store or transport computer readable code or data, or in which computer readable code or data may be embedded.
  • Some examples of computer program products are CD- ROM disks, ROM cards, floppy disks, magnetic tapes, computer hard drives, servers on a network, and carrier waves.
  • Where a central computing resource is used and the user display resource is distributed (e.g., a virtual desktop architecture), specialized video processors may be located between the network where data is available and the interconnect that connects the distributed displays to the central computing resources (e.g., services).
  • video processing and hardware requirements associated with a receiver may be minimized by specifying a single video protocol for transmission of video data between transmitters and receivers on a network.
  • the protocol may specify a color format that allows for high video quality and minimizes the complexity of the receiver.
  • Transmitters may be equipped with transformation mechanisms that provide for conversion of video data into the designated protocol as needed.
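The items above describe transmitters converting video data into a single designated protocol whose color format keeps receivers simple. As a toy illustration of such a transformation mechanism, the sketch below converts one RGB pixel into the YCbCr color format using the standard BT.601 full-range weights; the text does not name a particular color format, so this choice is purely an assumption for illustration.

```python
# Hypothetical transmitter-side color transformation: RGB -> YCbCr
# (BT.601, full range). The designated protocol's actual color format
# is not specified in the text; this is only an illustrative stand-in.
def rgb_to_ycbcr(r: int, g: int, b: int) -> tuple[int, int, int]:
    """Convert one 8-bit RGB pixel to 8-bit YCbCr."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luma: weighted sum of RGB
    cb = 128 + 0.564 * (b - y)              # blue-difference chroma
    cr = 128 + 0.713 * (r - y)              # red-difference chroma
    return round(y), round(cb), round(cr)
```

Separating luma from chroma in this way is one reason such a format can minimize receiver complexity: a simple receiver can display the luma channel directly.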
  • FIG. 5 illustrates a multimedia system in accordance with the present invention.
  • Server 320, which may be a general purpose computer described above or any other suitable server, receives media data from various media sources such as, but not limited to, encoded media storage 318, encoded receiver 302, compressed video conference data source 304, decoded receiver 306 and hardwired data storage/receiver 308.
  • Media data received from the media sources, e.g., encoded media storage 318, encoded receiver 302 or compressed video conference data source 304, may be compressed or encoded.
  • Compression reduces the number of binary bits necessary to represent the information contained within the data. Since every bit incurs a cost when being transmitted or stored, compression reduces cost, especially for multimedia applications which involve large amounts of data.
  • Various compression techniques exist, for example, the Huffman method, dictionary approaches, adaptive coding, the run-length encoding method, the quadtree compression method, Moving Pictures Experts Group (MPEG) compression, etc.
  • Compression may be performed by, e.g., a Coder-Decoder (codec), which may be hardware or software that converts analog sound, speech or video to digital code (analog to digital) and vice versa (digital to analog).
  • Hardware codecs (chips) may be built into devices such as digital telephones and video-conferencing stations.
  • Software codecs may be used to record and play audio and video over a network utilizing the CPU in a server for processing.
  • Encoding may be a procedure to compress data of multimedia resources such as audio, video, or graphics files for efficient storage and transmission purposes. For example, encoding may compress high-bandwidth media signals to low-bandwidth signals. The compression then allows for the real-time transmission of media clips via the Internet.
  • The term “compress” is used interchangeably with the term “encode,” and the term “decompress” is used interchangeably with the term “decode.”
  • any suitable data format may be used.
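Run-length encoding, one of the techniques listed above, makes the bit-saving idea concrete: runs of identical symbols are replaced by (count, value) pairs. The sketch below is a minimal illustration, not part of the patent; the function names are invented for this example.

```python
# Minimal run-length encoding (RLE) sketch: compression replaces repeated
# bytes with (count, value) pairs, and decompression expands them back.
def rle_encode(data: bytes) -> list[tuple[int, int]]:
    runs: list[tuple[int, int]] = []
    for b in data:
        if runs and runs[-1][1] == b:
            runs[-1] = (runs[-1][0] + 1, b)   # extend the current run
        else:
            runs.append((1, b))               # start a new run
    return runs

def rle_decode(runs: list[tuple[int, int]]) -> bytes:
    return b"".join(bytes([value]) * count for count, value in runs)

frame = b"\x00" * 90 + b"\xff" * 10   # a highly repetitive "scanline"
runs = rle_encode(frame)              # 100 bytes collapse to two runs
restored = rle_decode(runs)           # decoding recovers the original
```

Every stored or transmitted bit incurs a cost, so collapsing 100 bytes into two (count, value) pairs is the kind of saving that makes compression attractive for large media streams.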
  • decompressor 312 reverses the compression process and places the compressed data into a format for playback.
  • Decompressor 312 may be a hardware or a software device that decompresses data and may be configured to have capabilities for processing various forms of compression.
  • decompressor 312 may be an application-specific hardware or a general purpose computer.
  • Decompressor 312 may be a stand-alone unit or may be a function provided by server 320.
  • Media storage 322 may be a semi-permanent or permanent holding place for data, for example, magnetic disks, optical disks or magnetic tapes.
  • Semi-permanent or permanent storage may be more economical for applications where a small amount of content is to be shown at multiple locations, for example, a standard video clip a few minutes in length to be shown at remotely installed kiosks.
  • the content may be decoded once and stored at a server.
  • the individual kiosk can then access the decoded content at the server. This configuration effectively eliminates the need for storage and decoding resources at each kiosk and since the video clip has a predefined size, a fixed-sized storage may be utilized.
  • media storage 322 may be volatile and reusable, for example, a random access memory (RAM).
  • content of media storage 322 may be updated using a least recently used (LRU) algorithm. For example, when data exceeds the storage size limit, the least recently accessed data is thrown away and new data is stored in the storage. By using this scheme, the items that are most frequently accessed by the users tend to stay in the media storage. Since the media storage does not need to store the entire content, the media storage may be sized to obtain maximum efficiency. For example, the size of decoded media storage 322 may be adjusted to store just a few minutes of content that are most frequently accessed. In another embodiment, for example, in the case of a live feed, media storage 322 may be updated based on a first-in-first-out scheme, e.g., new data replaces the oldest data in the storage.
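The two update policies just described can be modeled as a small fixed-capacity store. This is an illustrative sketch only; the class name, segment keys and capacity are assumptions, not details from the patent.

```python
from collections import OrderedDict

class DecodedMediaStore:
    """Fixed-capacity store for decoded media segments (illustrative).

    policy="lru"  : least recently accessed segment is evicted first.
    policy="fifo" : oldest segment is evicted first (the live-feed case).
    """
    def __init__(self, capacity: int, policy: str = "lru"):
        self.capacity = capacity
        self.policy = policy
        self._segments: OrderedDict = OrderedDict()

    def get(self, segment_id):
        data = self._segments.get(segment_id)
        if data is not None and self.policy == "lru":
            self._segments.move_to_end(segment_id)  # mark as recently used
        return data

    def put(self, segment_id, decoded_data):
        if segment_id in self._segments:
            self._segments.move_to_end(segment_id)
        self._segments[segment_id] = decoded_data
        if len(self._segments) > self.capacity:
            self._segments.popitem(last=False)      # evict LRU/oldest entry
```

Under "lru", frequently requested segments tend to stay resident; under "fifo", new live data simply displaces the oldest data, matching the two behaviors described above.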
  • decoded media storage 322 may be used as a data source, independent of the original media source.
  • cache may be provided in the media controller units 332, 334, 336 and 338 to enhance the capacity of the central decoded media storage 322.
  • decoded media storage can be hierarchical.
  • better coverage, faster-than-realtime decoding or smaller storage caches may be provided. For example, if there are more decoding resources available than required for the workload, smaller storage caches may be used.
  • Media controller units e.g., MCUs 332, 334, 336 and 338, may be controllers or chips that process any combination of audio, video, graphics, fax and modem operations.
  • Media controller units, via a storage management unit 323 associated with media storage 322, request particular media segments for display from the decoded media storage 322.
  • Storage management unit 323 takes requests to play media data from media controllers 332, 334, 336 and 338 and makes sure that the requested decoded/decompressed media exists in the decoded media storage 322. Storage management unit 323 then directs the stored data to the appropriate media controllers 332, 334, 336 and 338 that are requesting the data. Storage management unit 323 may be implemented in software or hardware.
  • storage management unit 323 produces media control events to e.g., encoded media storage 318 and bulk decoder 312 to produce requested decoded data.
  • media controller 332, 334, 336 or 338 may search the encoded media storage 318, for example, by backing up to the beginning of the media, advancing to the end of the media, or playing backwards, forwards and at multiple speeds.
  • control of the incoming data and bulk decoder 312 may be restricted because storage management unit 323 may be continuously accepting new data to be placed in the decoded media storage 322 and erasing old data when new data fills decoded media storage 322.
  • user media controllers 332, 334, 336 and 338 may only be able to go as far back as the size of the decoded media storage 322 and forward as far as the current real time.
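The seek window implied by the preceding item — no earlier than the decoded storage holds, no later than the current real time — can be expressed as a simple clamp. The function and parameter names are illustrative only.

```python
def clamp_seek(requested_pos: float, live_pos: float, buffer_seconds: float) -> float:
    """Clamp a requested playback position (in seconds) to the reachable window.

    live_pos is the current real-time position of the live feed;
    buffer_seconds is how much decoded history the shared storage retains.
    """
    earliest = max(0.0, live_pos - buffer_seconds)   # oldest data still stored
    return min(max(requested_pos, earliest), live_pos)
```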
  • decoded media stored in decoded media storage 322 may be displayed on a user display, e.g., user display 342, 344 or 346.
  • data stored in decoded media storage 322 may be archived into a media archive unit 348.
  • Media archive unit 348 may be any suitable media storage. Other types of media dispositions may be used in addition to viewing and archiving.
  • the amount of multimedia resources may be decreased and their utilization optimized.

Abstract

Method and apparatus for distributing media data in a network environment. Encoded data is decoded and the decoded data is stored in a storage that is accessible to multiple users on the network. A storage management unit accepts requests from a media controller coupled to the users and determines whether the requested data is stored in the storage. If the requested data is in the storage, the requested data is directed to the requesting media controller for use. If the requested data is not already in the storage, an event is generated so that the requested data may be generated and stored in the storage.

Description

REUSING DECODED MULTIMEDIA
DATA FOR MULTIPLE USERS
FIELD OF THE INVENTION
This invention relates to multimedia data decoding, more particularly, to multimedia data decoding in a network environment.
Sun, Sun Microsystems, the Sun Logo, Java, Java Developer Connection, Solaris, JavaOne, Sun Video Plus, Write Once, Run Anywhere, and The Network is the Computer are trademarks or registered trademarks of Sun Microsystems, Inc. in the United States and other countries.
BACKGROUND OF THE INVENTION
Multimedia is the distribution of information from a variety of information sources in more than one form, e.g., pictures, voices and videos along with traditional text, to provide pictures and animation. Such information may be encoded/compressed in various formats for efficient storage and transmission purposes and may need decoding/decompression prior to its final display. In a multi-user environment where multiple users desire to receive the same content but not necessarily at the same location or at the same time, each user typically requires its own media storage and decoder so that the data may be stored and manipulated by individual users.
FIG. 1 illustrates a Replay TV environment where a stream of television content is being broadcast to individual users. In order for each user to have independent control of what and when he desires to play back the received content, each user must have a media storage for storing the received content, a media decoder for decoding the content, a media controller for controlling the portion desired and a display for displaying the decoded content. For example, live TV broadcast may be received at each user from a media source 102. Media storage 112 at user 1 stores the received broadcast. During the playback, media decoder 122 decodes the desired portion of the stored broadcast. A media controller 132 then transmits the decoded data to a user display 142. Similarly, media storage 114 at user 2 stores the received broadcast so he can manipulate the stored data for output, independent from user 1.
In the example shown, data is stored at each individual user site and each user must decode the data for use. In a network architecture where a large number of end user computers are connected to a limited number of servers, the requirement of dedicated multimedia storage and decoding resources at each user computer may be prohibitively expensive. In addition, the single stream of broadcast may be stored at multiple locations and decompressed/decoded multiple times, once at each user, making it inefficient. Furthermore, the multimedia computing resource at each user may not be used at all times (e.g., a user may be turned off). Therefore, multimedia computing resources at some users may be idling, furthering the inefficiency.
SUMMARY OF THE INVENTION This invention relates to multimedia data decoding in a network environment where encoded data is decoded and the decoded data is then stored in a storage that is accessible to multiple users on the network.
In accordance with one embodiment of the present invention, a decoder decodes encoded data from a data source. The decoded data is then stored in a storage that is accessible to multiple users on a network. A storage management unit coupled to the storage accepts requests from media controllers and dispatches the requested data to the appropriate user via a corresponding media controller. In one embodiment, the storage management unit accepts requests from media controllers and determines whether the requested data exists in the storage. If the requested data exists in the storage, the storage management unit directs the requested data to the requesting media controller. If the requested data does not exist in the storage, an event may be generated to produce the requested data in the storage. For example, an event may initiate a search (e.g., playback, fast-forward, different speed) for the requested data in an encoded data storage. The found data is decoded and the decoded data is then stored in the storage. The requested data may also come from live feed, in which case the storage management unit may search for the requested data as live data is received.
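The request path in the summary above — check the shared store; on a hit, direct the data to the requesting media controller; on a miss, generate an event so the requested data is produced and stored — can be sketched as follows. The class, method and callback names are assumptions for illustration; the patent does not prescribe an implementation.

```python
class StorageManagementUnit:
    """Sketch of the hit/miss request path for a shared decoded-media storage."""

    def __init__(self, decoded_store: dict, produce_decoded_data):
        # decoded_store: the shared storage of already-decoded segments.
        # produce_decoded_data: event hook that locates and decodes a segment
        # (e.g., by searching encoded storage and running a bulk decoder).
        self.decoded_store = decoded_store
        self.produce_decoded_data = produce_decoded_data

    def request(self, segment_id, deliver):
        data = self.decoded_store.get(segment_id)
        if data is None:
            # Miss: generate an event to produce the requested data, then
            # keep it in the shared storage for subsequent users.
            data = self.produce_decoded_data(segment_id)
            self.decoded_store[segment_id] = data
        deliver(data)   # dispatch to the requesting media controller
        return data
```

Because the store is shared, a second user requesting the same segment is served from the already-decoded copy, so no second decode is needed.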
In one embodiment, encoded data is decoded once and stored in a nonvolatile storage. In another embodiment, the decoded data is stored in a volatile storage where old data is erased as new data is stored when incoming data exceeds storage size. By decoding the encoded data and storing the decoded data in a shared location, storage and decoding resources may be optimized because resources may be shared by multiple users.
This summary is not intended to limit the scope of the invention, which is defined solely by the claims attached hereto.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 illustrates a prior art system. FIG. 2 illustrates a virtual desktop system architecture. FIG. 3 shows a block diagram of a desktop unit. FIG. 4 illustrates a block diagram of a general purpose computer.
FIG. 5 illustrates a multimedia system in accordance with the present invention.
While specific embodiments are described and illustrated herein, these embodiments are not intended to limit the scope of the invention, which is susceptible to various modifications and alternative forms.
DETAILED DESCRIPTION OF THE INVENTION
In accordance with the present invention, method and apparatus for decoding multimedia data for multiple users in a network are provided.
One or more embodiments of the invention may be implemented as computer software in the form of computer readable code executed on a general purpose computer, in the form of bytecode class files executable within a Java™ runtime environment running on such a computer, or in the form of bytecodes running on a processor (or devices enabled to process bytecodes) existing in a distributed environment (e.g., one or more processors on a network). In general, any suitable computer system and programming/processing environment may be used.
In one embodiment, the present invention may be implemented in computer systems where the data is provided through a network such as a local area network (LAN), a wide area network (WAN), the internet, world wide web (Web), or other suitable network configurations. FIG. 2 illustrates a virtual desktop architecture where one or more servers (e.g., servers 152 through 156) communicate with one or more desktop units (DTUs) such as DTUs 170, 180 and 190 through an interconnect fabric 160.
The functionality of a virtual desktop system is partitioned between a display and input devices (the combination of which is referred to as an "human interface device" or "HID"), and data sources or services. Specifically, state and computation functions typically reside in data sources/services 150 while input and display functions typically reside at the HID. Data sources/services are typically not tied to a specific computer and may be distributed over one or more traditional desktop systems or traditional servers. For example, one computer may have one or more services, or a service may be implemented by one or more computers. In general, services provide computation, state and data to the HIDs and services are typically controlled under a common authority or manager (e.g., service manager).
Services may be, but are not limited to, Java™ program execution services, X11/Unix services, archived video services and Windows NT services. In general, a service is a process that provides output data and responds to user requests and input. For example, services may have the responsibility to handle communications with the HID that is currently being used. The communication may involve taking the output from the computational service and converting it into a standard protocol for the HID. This data protocol conversion may be handled by, e.g., a middleware layer, such as an X11 server, the Microsoft Windows interface, a video format transcoder, the OpenGL interface, or a variant of the java.awt.graphics class within the service producer machine. The service machines, e.g., computers 152, 153, 154, 155 and 156, handle the translation to and from the virtual desktop architecture wire protocol. Computers 152, 153, 154, 155 and 156 may be service producing machines such as a proxy for another device providing the computational service (e.g., a database computer in a three tiered architecture, where the proxy computer might only generate queries and execute user interface code). Any of computers 152, 153, 154, 155 and 156 may be implemented as a transmitter. In one embodiment, computers 152, 153, 154, 155 and 156 connect directly to DTUs 170, 180 and 190 through interconnect fabric 160. Interconnect fabric 160 may be any suitable communication path for carrying data between services 150 and DTUs 170, 180 and 190. In one embodiment, interconnect fabric 160 is a local area network implemented as an Ethernet network. Other local networks, wide area networks, the internet, the world wide web, and other types of communication paths may also be utilized. Interconnect fabric 160 may be implemented with a physical medium such as a wire or fiber optic cable, or it may be implemented in a wireless environment.
DTUs 170, 180, and 190 are the means by which users access the computational services provided by the servers or services 150, and as such, DTUs 170, 180 and 190 may also be referred to as a client, user workstation, terminal or HID. Typically, a desktop unit includes a display, a keyboard, a mouse and audio speakers. For example, DTU 170 includes a display 171, a keyboard 174, a mouse 175, and audio speakers 172. In general, DTUs include the electronics needed to interface attached devices (e.g., display, keyboard, mouse and speakers) to interconnect fabric 160 and to transmit data to and receive data from the services 150. Desktop units 170, 180 and 190 may be any suitable computer systems, including general purpose computers, client-server systems, or network computers. For example, desktop units 170, 180 and 190 may be workstations from, e.g., Sun Microsystems, Inc., IBM Corporation, Hewlett Packard, Digital and other manufacturers. Any of DTUs 170, 180 and 190 may be implemented as a receiver. Keyboard 174 and mouse 175 introduce user input and communicate that user input to the DTU they are attached to. Other suitable input devices (e.g., scanner, digital camera) may be used in addition to, or in place of, keyboard 174 and mouse 175. Display 171 and audio speakers 172 are output devices. Other suitable output devices (e.g., printer) may be used in addition to, or in place of, display 171 and audio speakers 172.
FIG. 3 illustrates a block diagram of an embodiment of a desktop unit illustrated in FIG. 2. Various components of the DTU are coupled internally to a Peripheral Component Interconnect (PCI) bus 226. A network controller 210 is coupled to PCI bus 226 and communicates with an interconnect fabric, such as an Ethernet network, through path 228. An audio codec 212 receives audio data on interface 230 and is coupled to network controller 210. The audio codec may be a hardware circuit (chip) or a software routine that converts sound into digital code and vice versa.
Universal Serial Bus (USB) data communication is provided on paths 224 to a USB controller 202 which is coupled to PCI bus 226.
An embedded processor 204 is coupled to PCI bus 226. Embedded processor 204 may be, for example, a Sparc2ep, which is coupled to a flash memory 206 and a dynamic random access memory (DRAM) 208. In the alternative, processor 204 may be a SPARC™ microprocessor manufactured by Sun Microsystems, Inc., a 680X0 processor manufactured by Motorola, an 80X86 manufactured by Intel, a Pentium processor, or any other suitable microprocessor or microcomputer. A video controller, e.g., frame buffer controller 214, is also coupled to PCI bus 226. Video controller 214 may be, for example, an ATI RagePro+ frame buffer controller (or any other suitable controller) that provides Super Video Graphics Array (SVGA) output on path 236. National TV Standards Committee (NTSC) or Phase Alternating Line (PAL) data may be provided via path 232 to video controller 214 through video decoder 220. Similarly, NTSC or PAL data may be provided via path 234 from video controller 214 through a video encoder 222. A smart card interface 218 and a Synchronous Graphics Random Access Memory (SGRAM) 216 may also be coupled to video controller 214.
The functions described above for desktop units 170, 180 and 190 may be implemented using a single chip that includes the necessary processing capabilities and a graphics renderer.
FIG. 4 shows a general purpose computer 250 that may be used to implement servers 152 through 156 shown in FIG. 2. A keyboard 251 and mouse 252 are coupled to a bi-directional system bus 253. Keyboard 251 and mouse 252 introduce user input to computer system 250 and communicate user input to a processor 254. Other suitable input devices may be used in addition to, or in place of, mouse 252 and/or keyboard 251. I/O (input/output) unit 255 coupled to bi-directional system bus 253 represents I/O elements such as printers, A/V (audio/video) I/Os, etc.
Bi-directional system bus 253 may contain, for example, thirty-two address lines for addressing a video memory 256 or a main memory 257. System bus 253 may also include, for example, a 32-bit data bus for transferring data between and among components, e.g., processor 254, main memory 257, video memory 256 and mass storage 258, all coupled to bus 253. Alternatively, multiplexed data/address lines may be used instead of separate data and address lines.
Processor 254 may be a microprocessor manufactured by Motorola (e.g., a 680X0 processor), a microprocessor manufactured by Intel (e.g., an 80X86 or Pentium processor) or a SPARC microprocessor from Sun Microsystems, Inc. Other suitable microprocessors or microcomputers may be utilized.
Main memory 257 may comprise dynamic random access memory (DRAM) or other suitable memories. Video memory 256 may be a dual-ported video random access memory. For example, one port of video memory 256 may be coupled to a video amplifier 259 which is used to drive a monitor 260 which may be a cathode ray tube (CRT) raster monitor, a liquid crystal display (LCD), or any suitable monitors for displaying graphic images. Video amplifier 259 is well known in the art and may be implemented by any suitable apparatus. In one embodiment, pixel data stored in video memory 256 is converted to a raster signal suitable for use by monitor 260. Mass storage 258 may include both fixed and removable media, such as magnetic, optical or magnetic optical storage systems or any other available mass storage technology.
Computer 250 may include a communication interface 261 coupled to bi-directional system bus 253. Communication interface 261 provides two-way data communication via a network link 262 to a local network 263. For example, if communication interface 261 is an integrated services digital network (ISDN) card or a modem, communication interface 261 provides a data communication connection to the corresponding type of telephone line, which comprises part of network link 262. If communication interface 261 is a local area network (LAN) card, communication interface 261 provides a data communication connection via network link 262 to a compatible LAN. Wireless links are also possible. In any such implementation, communication interface 261 sends and receives electrical, electromagnetic or optical signals which carry digital data streams representing various types of information. Network link 262 typically provides data communication through one or more networks to other data devices. For example, network link 262 may provide a connection through local network 263 to a host computer 264 or to data equipment operated by an Internet Service Provider (ISP) 265. ISP 265 in turn provides data communication services through the world wide packet data communication network commonly referred to as the "internet" 266. Local network 263 and internet 266 both use electrical, electromagnetic or optical signals which carry digital data streams. The signals through the various networks and the signals on network link 262 and through communication interface 261, which carry the digital data to and from computer 250, are exemplary forms of carrier waves transporting the information.
Computer 250 can send messages and receive data, including program code, through these communication channels. In the Internet example, server 267 might transmit a requested code for an application program through Internet 266, ISP 265, local network 263 and communication interface 261. The received code may be executed by processor 254 as the code is received, and/or stored in mass storage 258 or other non-volatile storage for later execution. In this manner, computer 250 may obtain application code in the form of a carrier wave.
Application code may be embodied in any form of computer program product. A computer program product comprises a medium configured to store or transport computer readable code or data, or in which computer readable code or data may be embedded. Some examples of computer program products are CD- ROM disks, ROM cards, floppy disks, magnetic tapes, computer hard drives, servers on a network, and carrier waves.
In one embodiment of the present invention, where a central computing resource is used and the user display resource is distributed (e.g., a virtual desktop architecture), specialized video processors may be located between the network where data is available and the interconnect that connects the distributed displays to the central computing resources (e.g., services). Specifically, video processing and hardware requirements associated with a receiver may be minimized by specifying a single video protocol for transmission of video data between transmitters and receivers on a network. For example, the protocol may specify a color format that allows for high video quality and minimizes the complexity of the receiver. Transmitters may be equipped with transformation mechanisms that provide for conversion of video data into the designated protocol as needed. Compression of the components of the color format may be provided to reduce transmission bandwidth requirements, thereby enabling the addition of video decoders without interrupting service, going to any user site, or having to specifically upgrade every user. In one embodiment, the designated protocol specifies a color format including a luminance value and two chrominance values. Quantized differential coding is applied to the luminance value and subsampling is performed on the chrominance values to reduce transmission bandwidth requirements. In one embodiment of the invention, upscaling of video data is performed at the receiver, whereas downscaling is performed at the transmitter. Various display sizes can thus be accommodated with efficient use of network bandwidth. In general, any suitable video signal processing and transmission methods may be used.
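The two bandwidth-reduction steps described above may be sketched as follows. This is an illustrative Python sketch only; the function names, the quantization step, and the 2x2 block size are assumptions, not taken from the disclosure.

```python
def diff_encode_luma(luma_row, step=4):
    """Quantized differential coding: transmit quantized differences
    between successive luminance samples rather than absolute values."""
    deltas = []
    prev = 0
    for y in luma_row:
        d = (y - prev) // step * step  # quantize the difference
        deltas.append(d // step)       # small integer to transmit
        prev += d                      # track the reconstructed value
    return deltas

def subsample_chroma(plane):
    """Subsample a chrominance plane by averaging each 2x2 block,
    reducing the chrominance data to one quarter of its size."""
    h, w = len(plane), len(plane[0])
    return [[(plane[r][c] + plane[r][c + 1] +
              plane[r + 1][c] + plane[r + 1][c + 1]) // 4
             for c in range(0, w, 2)]
            for r in range(0, h, 2)]
```

Because the chrominance planes carry less perceptually important detail than luminance, averaging them over 2x2 blocks trades little visible quality for a 4:1 reduction in that component's bandwidth.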
FIG. 5 illustrates a multimedia system in accordance with the present invention. Server 320, which may be a general purpose computer described above or any other suitable server, receives media data from various media sources such as, but not limited to, encoded media storage 318, encoded receiver 302, compressed video conference data source 304, decoded receiver 306 and hardwired data storage/receiver 308. Media data received from the media sources, e.g., encoded media storage 318, encoded receiver 302 or compressed video conference data source 304, may be compressed or encoded.
Compression reduces the number of binary bits necessary to represent the information contained within the data. Since every bit incurs a cost when being transmitted or stored, compression reduces cost, especially for multimedia applications which involve large amounts of data. There are many compression techniques, for example, the Huffman method, dictionary approaches, adaptive coding, the run-length encoding method, the quadtree compression method, Moving Pictures Experts Group (MPEG) methods, etc. Compression may be performed by, e.g., a coder-decoder (codec), which may be hardware or software that converts analog sound, speech or video to digital code (analog to digital) and vice versa (digital to analog). Hardware codecs (chips) may be built into devices such as digital telephones and video-conferencing stations. Software codecs may be used to record and play audio and video over a network utilizing the CPU in a server for processing.
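Run-length encoding, one of the techniques listed above, is simple enough to illustrate the principle in a few lines. This Python sketch is illustrative only and is not part of the disclosed system:

```python
def rle_encode(data):
    """Run-length encode a byte sequence as (value, count) pairs,
    so runs of repeated bytes cost two numbers instead of n bytes."""
    runs = []
    for b in data:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([b, 1])       # start a new run
    return [tuple(r) for r in runs]

def rle_decode(runs):
    """Reverse the compression: expand each (value, count) pair."""
    return bytes(b for b, n in runs for _ in range(n))
```

Run-length encoding pays off only when the data contains long runs of identical values; general multimedia compression (e.g., MPEG) combines several such techniques with transform coding.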
Encoding may be a procedure to compress data of multimedia resources such as audio, video, or graphics files for efficient storage and transmission purposes. For example, encoding may compress high-bandwidth media signals to low-bandwidth signals. The compression then allows for the real-time transmission of media clips via the Internet. In this description, the term "compress" is used interchangeably with the term "encode." Similarly, the term "decompress" is used interchangeably with the term "decode." In accordance with the present invention, any suitable data format may be used.
The compressed data is decompressed using a decompressor (bulk decoder) 312. Decompressor 312 reverses the compression process and places the data into a format suitable for playback. Decompressor 312 may be a hardware or software device that decompresses data and may be configured to have capabilities for processing various forms of compression. In one embodiment, decompressor 312 may be application-specific hardware or a general purpose computer. Decompressor 312 may be a stand-alone unit or may be a function provided by server 320. The decompressed data may then be stored in a format that is ready for use (e.g., a basic video format) in decoded media storage 322 for distribution to various DTUs (e.g., DTUs 342, 344 and 346) via respective media controller units (MCUs) 332, 334, 336 and 338. Decoded data (e.g., in a basic data format) received from a live receiver 306 or hardwired storage/receiver 308 may be stored directly in decoded media storage 322.
Media storage 322 may be a semi-permanent or permanent holding place for data, for example, magnetic disks, optical disks or magnetic tapes. Semi-permanent or permanent storage may be more economical for applications where a small amount of content is to be shown at multiple locations, for example, a standard video clip a few minutes in length to be shown at remotely installed kiosks. The content may be decoded once and stored at a server. Each kiosk can then access the decoded content at the server. This configuration effectively eliminates the need for storage and decoding resources at each kiosk, and since the video clip has a predefined size, fixed-size storage may be utilized.
In the alternative, media storage 322 may be volatile and reusable, for example, a random access memory (RAM). In one embodiment, the content of media storage 322 may be updated using a least recently used (LRU) algorithm. For example, when data exceeds the storage size limit, the least recently accessed data is discarded and new data is stored in the storage. By using this scheme, the items that are most frequently accessed by the users tend to stay in the media storage. Since the media storage does not need to store the entire content, the media storage may be sized to obtain maximum efficiency. For example, the size of decoded media storage 322 may be adjusted to store just the few minutes of content that are most frequently accessed. In another embodiment, for example, in the case of a live feed, media storage 322 may be updated based on a first-in-first-out scheme, e.g., new data replaces the oldest data in the storage.
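The LRU replacement policy described above can be sketched in a few lines of Python. The class and method names and the unit of size are illustrative assumptions, not part of the disclosure:

```python
from collections import OrderedDict

class DecodedMediaStorage:
    """LRU-managed storage: when capacity would be exceeded, the
    least recently accessed segment is discarded to make room."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.used = 0
        self.segments = OrderedDict()  # segment_id -> (data, size)

    def get(self, segment_id):
        if segment_id not in self.segments:
            return None                          # not in storage
        self.segments.move_to_end(segment_id)    # mark most recently used
        return self.segments[segment_id][0]

    def put(self, segment_id, data, size):
        # Evict least recently used segments until the new one fits.
        while self.used + size > self.capacity and self.segments:
            _, (_, evicted_size) = self.segments.popitem(last=False)
            self.used -= evicted_size
        self.segments[segment_id] = (data, size)
        self.used += size
```

For the live-feed case, replacing `popitem(last=False)` keyed on access order with eviction in insertion order yields the first-in-first-out variant mentioned above.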
In one embodiment, where the source media fits entirely within media storage 322, decoded media storage 322 may be used as a data source, independent of the original media source. In one embodiment, caches may be provided in the media controller units 332, 334, 336 and 338 to enhance the capacity of the central decoded media storage 322. In other words, decoded media storage can be hierarchical. In other embodiments, better coverage, faster-than-real-time decoding or smaller storage caches may be provided. For example, if there are more decoding resources available than required for the workload, smaller storage caches may be used.
Media controller units, e.g., MCUs 332, 334, 336 and 338, may be controllers or chips that process any combination of audio, video, graphics, fax and modem operations. Media controller units, via a storage management unit 323 associated with media storage 322, request particular media segments for display from the decoded media storage 322.
Storage management unit 323 takes requests to play media data from media controllers 332, 334, 336 and 338 and makes sure that the requested decoded/decompressed media exists in the decoded media storage 322. Storage management unit 323 then directs the stored data to the appropriate media controllers 332, 334, 336 and 338 that are requesting the data. Storage management unit 323 may be implemented in software or hardware.
In one embodiment, if the requested data does not exist in decoded media storage 322, storage management unit 323 produces media control events to, e.g., encoded media storage 318 and bulk decoder 312 to produce the requested decoded data. In this embodiment, media controller 332, 334, 336 or 338 may search encoded media storage 318, for example, by backing up to the beginning of the media, advancing to the end of the media, or playing backwards, forwards and at multiple speeds. In one embodiment, where a live feed (e.g., from encoded receiver 302, decoded receiver 306 or compressed video conference data source 304) is involved, control of the incoming data and bulk decoder 312 may be restricted because storage management unit 323 may be continuously accepting new data to be placed in decoded media storage 322 and erasing old data when new data fills decoded media storage 322. As such, user media controllers 332, 334, 336 and 338 may only be able to go as far back as the size of decoded media storage 322 allows and forward as far as the current real time.
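The storage management unit's hit/miss request path can be summarized as follows. This Python sketch is illustrative only; the class name, the dictionary-based storage, and the decoder callable are assumptions standing in for the patent's storage management unit 323, decoded media storage 322, and bulk decoder 312:

```python
class StorageManagementUnit:
    """Serve requests from decoded storage on a hit; on a miss,
    drive the bulk decoder to produce the segment, then cache it
    so later requests from other media controllers reuse it."""

    def __init__(self, decoded_storage, bulk_decoder, encoded_source):
        self.decoded = decoded_storage   # dict: segment_id -> frames
        self.decoder = bulk_decoder      # callable(encoded) -> frames
        self.encoded = encoded_source    # dict: segment_id -> encoded data

    def request(self, segment_id):
        if segment_id in self.decoded:          # hit: reuse decoded data
            return self.decoded[segment_id]
        encoded = self.encoded[segment_id]      # miss: fetch encoded media
        frames = self.decoder(encoded)          # decode once...
        self.decoded[segment_id] = frames       # ...and cache for all users
        return frames
```

The key property is that the decoder runs at most once per segment regardless of how many media controllers request it, which is the decode-once, share-many behavior the system is built around.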
The decoded media stored in decoded media storage 322 may be displayed on a user display, e.g., user display 342, 344 or 346. In the alternative, data stored in decoded media storage 322 may be archived into a media archive unit 348.
Media archive unit 348 may be any suitable media storage. Other types of media dispositions may be used in addition to viewing and archiving.
By providing the ability to decode encoded media just once and then "share" the decoded media from a shared location, the amount of multimedia resources may be decreased and their utilization optimized.
While the present invention has been described with reference to particular figures and embodiments, it should be understood that the description is for illustration only and should not be taken as limiting the scope of the invention. Many changes and modifications may be made to the invention, by one having ordinary skill in the art, without departing from the spirit and scope of the invention.

Claims

We claim:
1. A method for distributing media data to a plurality of devices in a network, comprising:
storing decoded media data in a storage, the storage being accessible by the plurality of devices; and
distributing the decoded media data to a requesting media controller coupled to one of the devices.
2. The method of claim 1, wherein the media data is encoded, further comprising decoding the media data once.
3. The method of claim 1, further comprising receiving the media data from a media storage.
4. The method of claim 1, further comprising receiving the media data from a live feed receiver.
5. The method of claim 1, further comprising managing the storage, comprising:
receiving a data request from the media controller; and
determining whether requested data is in the storage.
6. The method of claim 5, wherein the requested data is in the storage, further comprising transferring the requested data to the media controller.
7. The method of claim 6, wherein the one of the devices comprises a display, further comprising displaying the requested data on the display.
8. The method of claim 6, wherein the one of the devices comprises a storage device, further comprising archiving the requested data.
9. The method of claim 5, wherein the requested data is not in the storage, further comprising generating an event to produce the requested data in the storage.
10. A method for distributing encoded data from a server to a plurality of desktop units in a network, comprising:
decoding the encoded data to produce decoded data;
storing the decoded data in a storage; and
distributing the decoded data to at least one of the plurality of desktop units.
11. The method of claim 10, further comprising:
receiving a request for data from a media controller coupled to the at least one of the plurality of desktop units; and
determining whether the requested data is in the storage.
12. The method of claim 11, wherein the requested data is in the storage, further comprising directing the requested data to the requesting media controller.
13. The method of claim 11, wherein the requested data is not in the storage, further comprising generating an event to a decoder to produce requested data from a media source.
14. A media system, comprising:
a decoder; and
a storage coupled to the decoder, the storage being accessible to a plurality of devices.
15. The media system of claim 14, wherein the storage comprises volatile memory.
16. The media system of claim 14, further comprising a storage management unit coupled to the storage for managing the storage.
17. The media system of claim 16, further comprising a media controller coupled to one of the devices, the media controller accepting a request for data from the device and checking the media storage for the requested data through the storage management unit.
18. A computer system having a server and a plurality of devices coupled via a network, the system comprising:
a decoder for decoding encoded data;
a storage coupled to the decoder for storing decoded data; and
a storage management unit coupled to the storage for receiving requests from a media controller coupled to one of the plurality of devices, the storage management unit determining whether requested data is in the storage and transferring the requested data to the media controller if the requested data is in the storage.
19. The computer system of claim 18, wherein the storage comprises volatile memory.
20. The computer system of claim 18, wherein the devices comprise a media archive.
PCT/US2001/042401 2000-09-29 2001-09-28 Reusing decoded multimedia data for multiple users WO2002028085A2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
AU2001296944A AU2001296944A1 (en) 2000-09-29 2001-09-28 Reusing decoded multimedia data for multiple users
GB0307244A GB2385966A (en) 2000-09-29 2001-09-28 Reusing decoded multimedia data for multiple users

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US67593900A 2000-09-29 2000-09-29
US09/675,939 2000-09-29

Publications (3)

Publication Number Publication Date
WO2002028085A2 true WO2002028085A2 (en) 2002-04-04
WO2002028085A3 WO2002028085A3 (en) 2002-06-06
WO2002028085A9 WO2002028085A9 (en) 2003-02-13

Family

ID=24712565

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2001/042401 WO2002028085A2 (en) 2000-09-29 2001-09-28 Reusing decoded multimedia data for multiple users

Country Status (3)

Country Link
AU (1) AU2001296944A1 (en)
GB (1) GB2385966A (en)
WO (1) WO2002028085A2 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5790935A (en) * 1996-01-30 1998-08-04 Hughes Aircraft Company Virtual on-demand digital information delivery system and method
US6012091A (en) * 1997-06-30 2000-01-04 At&T Corporation Video telecommunications server and method of providing video fast forward and reverse
US6016507A (en) * 1997-11-21 2000-01-18 International Business Machines Corporation Method and apparatus for deleting a portion of a video or audio file from data storage prior to completion of broadcast or presentation
US6108695A (en) * 1997-06-24 2000-08-22 Sun Microsystems, Inc. Method and apparatus for providing analog output and managing channels on a multiple channel digital media server

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHIU M Y M ET AL: "PARTIAL VIDEO SEQUENCE CACHING SCHEME FOR VOD SYSTEMS WITH HETEROGENEOUS CLIENTS" IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, IEEE INC. NEW YORK, US, vol. 45, no. 1, 1 February 1998 (1998-02-01), pages 44-51, XP000735203 ISSN: 0278-0046 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7363363B2 (en) 2002-05-17 2008-04-22 Xds, Inc. System and method for provisioning universal stateless digital and computing services
US7783701B2 (en) 2002-05-17 2010-08-24 Simtone Corporation System and method for provisioning universal stateless digital and computing services
CN101720036B (en) * 2009-12-15 2011-11-16 青岛海信宽带多媒体技术有限公司 System for distributing DVB data to multiple users
CN110933470A (en) * 2019-11-29 2020-03-27 杭州当虹科技股份有限公司 Video data sharing method

Also Published As

Publication number Publication date
AU2001296944A1 (en) 2002-04-08
GB0307244D0 (en) 2003-04-30
GB2385966A (en) 2003-09-03
WO2002028085A9 (en) 2003-02-13
WO2002028085A3 (en) 2002-06-06

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PH PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

ENP Entry into the national phase in:

Ref document number: 0307244

Country of ref document: GB

Kind code of ref document: A

Free format text: PCT FILING DATE = 20010928

Format of ref document f/p: F

AK Designated states

Kind code of ref document: A3

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PH PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A3

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
COP Corrected version of pamphlet

Free format text: PAGES 1/5-5/5, DRAWINGS, REPLACED BY NEW PAGES 1/5-5/5; DUE TO LATE TRANSMITTAL BY THE RECEIVING OFFICE

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase in:

Ref country code: JP