EP0702491A1 - Video optimized media streamer with cache management - Google Patents
- Publication number
- EP0702491A1
- Authority
- EP
- European Patent Office
- Prior art keywords
- data
- video
- buffer
- node
- data buffer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0866—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0655—Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
- G06F3/0656—Data buffering arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/0671—In-line storage system
- G06F3/0673—Single storage device
Description
- This invention relates to a system for delivery of multimedia data and, more particularly, an interactive video server system that provides video simultaneously to a plurality of terminals with minimal buffering.
- The playing of movies and video is today accomplished with rather old technology.
- The primary storage medium is analog tape, ranging from VHS recorders/players up to the very high quality and very expensive D1 VTRs used by television studios and broadcasters.
- There are many problems with this technology. A few such problems include: the manual labour required to load the tapes; the wear and tear on the mechanical units, tape head, and the tape itself; and also the expense.
- A further problem that troubles broadcast stations is that the VTRs can only perform one function at a time, sequentially. Each tape unit costs from $75,000 to $150,000.
- TV stations want to increase their revenues from commercials, which are nothing more than short movies, by inserting special commercials into their standard programs and thereby targeting each city as a separate market. This is a difficult task with tape technology, even with the very expensive digital D1 tape systems or tape robots.
- Broadcast methods, including those of the motion picture, cable, television network, and record industries, generally provide storage in the form of analog or digitally recorded tape.
- The playing of tapes causes isochronous data streams to be generated which are then moved through broadcast industry equipment to the end user.
- Computer methods generally provide storage in the form of disks, or disks augmented with tape, and record data in compressed digital formats such as DVI, JPEG and MPEG.
- Computers deliver non-isochronous data streams to the end user, where hardware buffers and special application code smooth the data streams to enable continuous viewing or listening.
- Video tape subsystems have traditionally exhibited a cost advantage over computer disk subsystems due to the cost of the storage media.
- Video tape subsystems, however, have the disadvantages of tape management, access latency, and relatively low reliability. These disadvantages have become increasingly significant as computer storage costs have dropped and real-time digital compression/decompression techniques have emerged.
- Computers interface primarily to workstations and other computer terminals with interfaces and protocols that are termed "non-isochronous".
- To assure smooth (isochronous) delivery of multimedia data to the end user, computer systems require special application code and large buffers to overcome inherent weaknesses in their traditional communication methods.
- Computers are also not video friendly in that they lack compatible interfaces to equipment in the multimedia industry which handles isochronous data streams and switches among them with a high degree of accuracy.
- The invention provides a "video friendly" computer subsystem which enables isochronous data stream delivery in a multimedia environment over traditional interfaces for that industry.
- A media streamer in accordance with the invention is optimized for the delivery of isochronous data streams and can stream data into new computer networks with ATM (Asynchronous Transfer Mode) technology.
- This invention eliminates the disadvantages of video tape while providing a VTR (video tape recorder) metaphor for system control.
- The system of this invention provides the following features: scalability to deliver from 1 to 1000's of independently controlled data streams to end users; an ability to deliver many isochronous data streams from a single copy of data; mixed output interfaces; mixed data rates; a simple "open system" control interface; automation control support; storage hierarchy support; and low cost per delivered stream.
- A data storage system includes a mass storage unit storing a data entity, such as a digital representation of a video presentation, that is partitioned into a plurality N of temporally-ordered segments.
- A data buffer is bidirectionally coupled to the mass storage unit for storing up to M of the temporally-ordered segments, wherein M is less than N.
- The data buffer has an output for outputting stored ones of the temporally-ordered segments.
- The data storage system further includes a data buffer manager for scheduling transfers of individual ones of the temporally-ordered segments between the mass storage unit and the data buffer. The data buffer manager schedules the transfers in accordance with at least a predicted time that an individual one of the temporally-ordered segments will be required to be output from the data buffer.
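The patent does not give the buffer manager's algorithm at this level of detail, but the predicted-time scheduling it describes can be sketched as follows. This is a minimal illustration with hypothetical names, assuming segment i of a stream started at `start_time_s` is needed at `start_time_s + i * (T/N)` and that a fetch from mass storage takes a known latency:

```python
import heapq

def schedule_fetches(n_segments, segment_duration_s, fetch_latency_s, start_time_s=0.0):
    # Segment i is predicted to be output at start + i * (T/N); schedule
    # its transfer from mass storage one fetch-latency earlier so it is
    # already buffered when required.
    schedule = []
    for i in range(n_segments):
        predicted_output_time = start_time_s + i * segment_duration_s
        fetch_at = max(start_time_s, predicted_output_time - fetch_latency_s)
        heapq.heappush(schedule, (fetch_at, i))
    # Return (fetch_time, segment_index) pairs in fetch order.
    return [heapq.heappop(schedule) for _ in range(n_segments)]
```

Because playout is strictly sequential, the fetch order here is simply the segment order; the value of the scheme is that each transfer is issued no earlier than necessary, keeping at most M of the N segments buffered.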
- Also provided is a media streamer having at least one storage node for storing a digital representation of at least one video presentation.
- The at least one video presentation requires a time T to present in its entirety, and is stored as a plurality of N data blocks. Each data block is a T/N portion of the at least one video presentation.
- The at least one storage node includes a first data buffer for buffering at least one of the N data blocks.
- The media streamer further includes a plurality of communication nodes each having an input port that is coupled via a circuit switch to an output of the first data buffer for sequentially receiving a plurality of the N data blocks therefrom. The sequentially received N data blocks are associated with a same video presentation or with different video presentations.
- Each of the plurality of communication nodes further has a plurality of output ports, wherein individual ones of the plurality of output ports output a digital representation of one video presentation.
- Individual ones of the plurality of communication nodes further include a second data buffer for buffering at least one of the N data blocks prior to outputting the at least one of the N data blocks.
- The media streamer further includes at least one control node responsive to a first operating condition for causing transfer of one of the N data blocks from the first data buffer to an output port of a first communication node and also to an output port of a second communication node, the at least one control node being further responsive to a second operating condition for causing transfer of one of the N data blocks from the first data buffer to the second data buffer of one of the communication nodes, and for causing transfer of the one of the N data blocks from the second data buffer to a plurality of the output ports of the one of the communication nodes.
- Embodiments are disclosed of presently preferred distributed data buffer management techniques for selecting blocks to be retained in a buffer memory, either in a storage node or in a communication node. These techniques rely on the predictable nature of the video data stream, and thus are able to predict the future requirements for a given one of the data blocks.
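One consequence of this predictability can be sketched in a few lines: when the next time each buffered block will be needed is known exactly, the block retention decision reduces to discarding the block whose next use is farthest away. This is an illustrative simplification, not the patent's own selection procedure:

```python
def block_to_evict(buffered_blocks, next_use_time):
    # Because playout is predictable, the next time each block will be
    # requested is known in advance; discard the block needed farthest
    # in the future (a block never needed again maps to +infinity).
    return max(buffered_blocks, key=lambda b: next_use_time.get(b, float("inf")))
```

For unpredictable workloads a cache must guess (LRU, MRU, etc.); here the guess is unnecessary, which is the point the surrounding text makes.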
- A video optimized stream server system 10 (hereafter referred to as media streamer) is shown in Fig. 1 and includes four architecturally distinct components to provide scalability, high availability and configuration flexibility. The major components are described below.
- A typical 64-node media streamer implementation might contain 31 communication nodes, 31 storage nodes, and 2 control nodes interconnected with the low latency switch 12.
- A smaller system might contain no switch and a single hardware node that supports communications, storage and control functions.
- The design of media streamer 10 allows a small system to grow to a large system in the customer installation. In all configurations, the functional capability of media streamer 10 remains the same except for the number of streams delivered and the number of multimedia hours stored.
- In Fig. 1A, further details of low latency switch 12 are shown.
- A plurality of circuit switch chips (not shown) are interconnected on crossbar switch cards 20 which are interconnected via a planar board (schematically shown).
- The planar board and a single card 20 constitute a low latency crossbar switch with 16 node ports. Additional cards 20 may be added to configure additional node ports and, if desired, active redundant node ports for high availability.
- Each port of the low latency switch 12 enables, by example, a 25 megabyte per second, full duplex communication channel.
- Each packet contains a header portion that controls the switching state of individual crossbar switch points in each of the switch chips.
- The control node 18 provides the other nodes (storage nodes 16, 17 and communication nodes 14) with the information necessary to enable peer-to-peer operation via the low latency switch 12.
- The tape storage node 17 provides a high capacity storage facility for storage of digital representations of video presentations.
- A video presentation can include one or more images that are suitable for display and/or processing.
- A video presentation may include an audio portion.
- The one or more images may be logically related, such as sequential frames of a film, movie, or animation sequence.
- The images may originally be generated by a camera, by a digital computer, or by a combination of a camera and a digital computer.
- The audio portion may be synchronized with the display of successive images.
- A data representation of a video presentation can be any suitable digital data format for representing one or more images and possibly audio.
- The digital data may be encoded and/or compressed.
- A tape storage node 17 includes a tape library controller interface 24 which enables access to multiple tape records contained in a tape library 26.
- A further interface 28 enables access to other tape libraries via an SCSI bus interconnection.
- An internal system memory 30 enables buffering of video data received from either of interfaces 24 or 28, or via DMA data transfer path 32.
- System memory block 30 may be a portion of a PC 34 which includes software 36 for tape library and file management actions.
- A switch interface and buffer module 38 (used also in disk storage nodes 16, communication nodes 14, and control nodes 18) enables interconnection between the tape storage node 17 and low latency switch 12.
- On transmission, the module 38 is responsible for partitioning a data transfer into packets and adding the header portion to each packet that the switch 12 employs to route the packet.
- On reception, the module 38 is responsible for stripping off the header portion before locally buffering or otherwise handling the received data.
- Video data from tape library 26 is entered into system memory 30 in a first buffering action.
- The video data is then routed through low latency switch 12 to a disk storage node 16 to be made ready for substantially immediate access when needed.
- Each disk storage node 16 includes a switch interface and buffer module 40 which enables data to be transferred from/to a RAID buffer video cache and storage interface module 42.
- Interface 42 passes received video data onto a plurality of disks 45, spreading the data across the disks in a quasi-RAID fashion.
- RAID memory storage systems are known in the prior art and are described in "A Case for Redundant Arrays of Inexpensive Disks (RAID)", Patterson et al., ACM SIGMOD Conference, Chicago, IL, June 1-3, 1988, pages 109-116.
- A disk storage node 16 further has an internal PC 44 which includes software modules 46 and 48 which, respectively, provide storage node control, video file and disk control, and RAID mapping for data stored on disks 45.
- Each disk storage node 16 provides a more immediate level of video data availability than a tape storage node 17.
- Each disk storage node 16 is further enabled to buffer (in a cache manner) video data in a semiconductor memory of switch interface and buffer module 40 so as to provide even faster availability of video data upon receiving a request therefor.
- A storage node thus includes a mass storage unit (or an interface to a mass storage unit) and a capability to locally buffer data read from, or to be written to, the mass storage unit.
- The storage node may include sequential access mass storage in the form of one or more tape drives and/or disk drives, and may include random access storage, such as one or more disk drives accessed in a random access fashion and/or semiconductor memory.
- In Fig. 1D, a block diagram is shown of the internal components of a communications node 14. Similar to each of the above noted nodes, communication node 14 includes a switch interface and buffer module 50 which enables communications with low latency switch 12 as described previously. Video data is transferred directly from switch interface and buffer module 50 to a stream buffer and communication interface 52 for transfer to a user terminal (not shown).
- A PC 54 includes software modules 56 and 58 which provide, respectively, communication node control (e.g., stream start/stop actions) and the subsequent generation of an isochronous stream of data.
- An additional input 60 to stream buffer and communication interface 52 enables frame synchronization of output data.
- The frame synchronization data is received from automation control equipment 62 which is, in turn, controlled by a system controller 64 that exerts overall operational control of the media streamer 10 (see Fig. 1).
- System controller 64 responds to inputs from user control set top boxes 65 to cause commands to be generated that enable media streamer 10 to access a requested video presentation.
- System controller 64 is further provided with a user interface and display facility 66 which enables a user to input commands, such as by hard or soft buttons, and other data to enable an identification of video presentations, the scheduling of video presentations, and control over the playing of a video presentation.
- Each control node 18 is configured as a PC and includes a switch interface module for interfacing with low latency switch 12. Each control node 18 responds to inputs from system controller 64 to provide information to the communication nodes 14 and storage nodes 16, 17 to enable desired interconnections to be created via the low latency switch 12. Furthermore, control node 18 includes software for enabling staging of requested video data from one or more of disk storage nodes 16 and the delivery of the video data, via a stream delivery interface, to a user display terminal. Control node 18 further controls the operation of both tape and disk storage nodes 16, 17 via commands sent through low latency switch 12.
- The media streamer has three architected external interfaces, shown in Fig. 1.
- The control node 18 breaks an incoming data file into segments (i.e., data blocks) and spreads them across one or more storage nodes.
- Material density and the number of simultaneous users of the data affect the placement of the data on storage nodes 16, 17. Increasing density and/or simultaneous users implies the use of more storage nodes for capacity and bandwidth.
- Control node 18 selects and activates an appropriate communication node 14 and passes control information indicating to it the location of the data file segments on the storage nodes 16, 17.
- The communications node 14 activates the storage nodes 16, 17 that need to be involved and proceeds to communicate with these nodes, via command packets sent through the low latency switch 12, to begin the movement of data.
- Data is moved between disk storage nodes 16 and communication nodes 14 via low latency switch 12 and "just in time" scheduling algorithms.
- The technique used for scheduling and data flow control is more fully described below.
- The data stream that is emitted from a communication node 14 interface is multiplexed to/from disk storage nodes 16 so that a single communication node stream uses a fraction of the capacity and bandwidth of each disk storage node 16. In this way, many communication nodes 14 may multiplex access to the same or different data on the disk storage nodes 16.
- Media streamer 10 can provide 1500 individually controlled end user streams from the pool of communication nodes 14, each of which is multiplexing accesses to a single multimedia file spread across the disk storage nodes 16. This capability is termed "single copy multiple stream".
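The "single copy multiple stream" placement can be illustrated with a simple round-robin striping rule. The patent does not specify this exact mapping; the sketch below assumes block k of a file is placed on storage node k mod S, so that concurrent streams reading the same single copy at different offsets draw on different nodes:

```python
def storage_node_for_block(block_index, num_storage_nodes):
    # Round-robin striping: block k of a file is placed on node k mod S.
    return block_index % num_storage_nodes

def nodes_serving(stream_block_positions, num_storage_nodes):
    # Streams at different offsets into the same single copy hit
    # different storage nodes, spreading the load across the pool.
    return [storage_node_for_block(p, num_storage_nodes)
            for p in stream_block_positions]
```

Five streams staggered one block apart over four storage nodes, for example, touch nodes 0, 1, 2, 3, 0: each stream consumes only a fraction of any one node's bandwidth, which is the property the text describes.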
- The commands that are received over the control interface are executed in two distinct categories. Those which manage data and do not relate directly to stream control are executed at "low priority". This enables an application to load new data into the media streamer 10 without interfering with the delivery of data streams to end users.
- The commands that affect stream delivery are executed at "high priority".
- The control interface commands are shown in Fig. 2.
- The low priority data management commands for loading and managing data in media streamer 10 include VS-CREATE, VS-OPEN, VS-READ, VS-WRITE, VS-GET_POSITION, VS-SET_POSITION, VS-CLOSE, VS-RENAME, VS-DELETE, VS-GET_ATTRIBUTES, and VS-GET_NAMES.
- The high priority stream control commands for starting and managing stream outputs include VS-CONNECT, VS-PLAY, VS-RECORD, VS-SEEK, VS-PAUSE, VS-STOP and VS-DISCONNECT.
- Control node 18 monitors stream control commands to assure that requests can be executed. This "admission control" facility in control node 18 may reject requests to start streams when the capabilities of media streamer 10 are exceeded. This may occur in several circumstances:
- The communication nodes 14 are managed as a heterogeneous group, each with a potentially different bandwidth (stream) capability and physical definition.
- The VS-CONNECT command directs media streamer 10 to allocate a communication node 14 and some or all of its associated bandwidth, enabling isochronous data stream delivery.
- Media streamer 10 can play uncompressed data stream(s) through communication node(s) 14 at 270 Mbits/sec while simultaneously playing compressed data stream(s) at much lower data rates (usually 1-16 Mbits/sec) on other communication nodes 14.
- Storage nodes 16, 17 are managed as a heterogeneous group, each with a potentially different bandwidth (stream) capability and physical definition.
- The VS-CREATE command directs media streamer 10 to allocate storage in one or more storage nodes 16, 17 for a multimedia file and its associated metadata.
- The VS-CREATE command specifies both the stream density and the maximum number of simultaneous users required.
- Three additional commands support automation control systems in the broadcast industry: VS-CONNECT-LIST, VS-PLAY-AT-SIGNAL and VS-RECORD-AT-SIGNAL.
- VS-CONNECT-LIST allows applications to specify a sequence of play commands in a single command to the subsystem.
- Media streamer 10 will execute each play command as if it were issued over the control interface but will transition between the delivery of one stream and the next seamlessly.
- An example sequence follows:
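The original example sequence is not reproduced in this extract. A hypothetical connect-list consistent with the surrounding description (FILE1 followed by FILE2, with the transition gated on the external automation signal) might be encoded as:

```python
# Hypothetical encoding of a VS-CONNECT-LIST; the patent defines the
# commands but not this concrete syntax. FILE1 and FILE2 stand for
# multimedia files previously loaded into the media streamer.
connect_list = [
    {"cmd": "VS-PLAY", "file": "FILE1"},
    # PLAY-AT-SIGNAL: begin FILE2 on the external automation control
    # signal, rather than waiting for the FILE1 transfer to complete.
    {"cmd": "VS-PLAY-AT-SIGNAL", "file": "FILE2"},
]
```

The subsystem would execute each entry as if it had been issued separately over the control interface, transitioning seamlessly between the two streams.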
- VS-PLAY-AT-SIGNAL and VS-RECORD-AT-SIGNAL allow signals from the external Automation Control Interface to enable data transfer for play and record operations with accuracy to a video frame boundary.
- The VS-CONNECT-LIST includes a PLAY-AT-SIGNAL subcommand to enable the transition from FILE1 to FILE2 based on the external automation control interface signal. If the subcommand were VS-PLAY instead, the transition would occur only when the FILE1 transfer was completed.
- Other commands that media streamer 10 executes provide the ability to manage storage hierarchies. These commands are: VS-DUMP, VS-RESTORE, VS-SEND, VS-RECEIVE and VS-RECEIVE_AND_PLAY. Each causes one or more multimedia files to move between storage nodes 16 and two externally defined hierarchical entities.
- Data flow is optimized for isochronous data transfer to significantly reduce cost.
- Media streamer 10 functions as a system of interconnected adapters with an ability to perform peer-to-peer data movement between themselves through the low latency switch 12.
- The low latency switch 12 has access to data storage and moves data segments from one adapter's memory to that of another without "host computer" intervention.
- Media streamer 10 provides hierarchical storage elements. It exhibits a design that allows scalability from a very small video system to a very large system. It also provides a flexibility for storage management to adapt to the varied requirements necessary to satisfy functions of Video on Demand, Near Video on Demand, Commercial insertion, high quality uncompressed video storage, capture and playback.
- Video presentations are moved from high performance digital tape to disk, to be played out at the much lower data rate required by the end user. In this way, only a minimum amount of video time is stored on the disk subsystem. If the system is "Near Video on Demand", then only, by example, 5 minutes of each movie need be in disk storage at any one time. This requires only 24 segments of 5 minutes each for a typical 2 hour movie. The result is that the total disk storage requirement for a video presentation is reduced, since not all of the video presentation is kept on the disk file at any one time. Only that portion of the presentation that is being played need be present in the disk file.
- Each data block stores a portion of the video presentation that corresponds to approximately a T/N period of the video presentation.
- A last data block of the N data blocks may store less than a T/N period.
- The statistical average is that about 25% of video stream requests will be for the same movie, but at different sub-second time intervals, and the distribution of viewers will be such that more than 50% of those sub-second demands will fall within a group of 15 movie segments.
- An aspect of this invention is the utilization of the most appropriate technology that will satisfy this demand.
- A random access cartridge loader (such as produced by the IBM Corporation) is a digital tape system that has high storage capacity per tape, mechanical robotic loading of 100 tapes per drawer, and up to 2 tape drives per drawer. The result is an effective tape library for movie-on-demand systems.
- The invention also enables very low cost digital tape storage library systems to provide the mass storage of the movies, and further enables low demand movies to be played directly from tape to speed-matching buffers and then on to video decompression and distribution channels.
- A second advantage of combining hierarchical tape storage with any video system is that it provides rapid backup of any movie that is stored on disk, in the event that a disk becomes inoperative.
- A typical system will maintain a "spare" disk such that if one disk unit fails, movies can be reloaded from tape. This would typically be combined with a RAID or a RAID-like system.
- A typical system will still contain a library of movies that are stored on tape, since the usual number of movies in the library is 10x to 100x greater than the number that will be playing at any one time.
- When a movie is to be played, segments of it are loaded to a disk storage node 16 and the stream is started from there.
- As demand for "hot" movies grows, media streamer 10, through an MRU-based algorithm, decides to move key movies up into cache. This requires substantial cache memory, but in terms of the ratio of cost to the number of active streams, the high volume that can be supported out of cache lowers the total cost of the media streamer 10.
- Algorithms that control the placement and distribution of the content across all of the storage media enable delivery of isochronous data to a wide spectrum of bandwidth requirements. Because the delivery of isochronous data is substantially 100% predictable, the algorithms are very much different from the traditional ones used for other segments of the computer industry where caching of user-accessed data is not always predictable.
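As a rough illustration of demand-driven promotion up the hierarchy, the sketch below tracks concurrent streams per title and stages a title into cache once it is hot enough. This is a popularity-threshold simplification, not the patent's MRU-based algorithm, and the threshold value is a tuning assumption:

```python
from collections import Counter

class CachePromoter:
    # Track concurrent streams per title; once a title crosses the
    # threshold, stage it into semiconductor cache so further streams
    # are served from cache rather than disk.
    def __init__(self, threshold):
        self.threshold = threshold
        self.active = Counter()   # title -> number of playing streams
        self.cached = set()       # titles promoted into cache

    def start_stream(self, title):
        self.active[title] += 1
        if self.active[title] >= self.threshold:
            self.cached.add(title)

    def stop_stream(self, title):
        self.active[title] -= 1
```

A complete implementation would also demote cold titles and bound total cache memory; the point here is only that promotion is driven by observed stream demand.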
- Media streamer 10 delivers video streams to various outputs, such as TV sets and set top boxes, attached via a network such as a LAN, ATM, etc.
- A distributed architecture consisting of multiple storage and communication nodes is preferred.
- The data is stored on storage nodes 16, 17 and is delivered by communication nodes.
- A communication node 14 obtains the data from appropriate storage nodes 16, 17.
- The control node 18 provides a single system image to the external world.
- The nodes are connected by the cross-connect, low latency switch 12.
- The data rates and the data to be delivered are predictable for each stream.
- The invention makes use of this predictability to construct a data flow architecture that makes full use of resources and which ensures that the data for each stream is available at every stage when it is needed.
- Data flow between the storage nodes 16, 17 and the communication nodes 14 can be set up in a number of different ways.
- A communication node 14 is generally responsible for delivering multiple streams. It may have requests outstanding for data for each of these streams, and the required data may come from different storage nodes 16, 17. If different storage nodes were to attempt, simultaneously, to send data to the same communication node, only one storage node would be able to send the data, and the other storage nodes would be blocked. The blockage would cause these storage nodes to retry sending the data, degrading switch utilization and introducing a large variance in the time required to send data from a storage node to the communication node. In this invention, there is no contention for an input port of a communication node 14 among different storage nodes 16, 17.
- The amount of required buffering can be determined as follows: the communication node 14 determines the mean time required to send a request to the storage node 16, 17 and receive the data. This time is determined by adding the time to send the request to the storage node, the time needed by the storage node to process the request, and the time to receive the response. The storage node in turn determines the mean time required to process the request by adding the mean time required to read the data from disk and any delays involved in processing the request. This is the latency in processing the request. The amount of buffering required is the memory needed to hold data, at the stream data rate, for the duration of this latency. The solution described below takes advantage of special conditions in the media streamer environment to reduce latency and hence to reduce the resources required. The latency is reduced by using a just-in-time scheduling algorithm at every stage of the data flow (e.g., within storage nodes and communication nodes), in conjunction with anticipating requests for data from the previous stage.
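The buffer-sizing rule above is simple enough to state directly. A minimal sketch, assuming latency components are known in seconds and the stream rate in bytes per second (the parameter names are illustrative, not from the patent):

```python
def required_buffer_bytes(stream_rate_bytes_per_s, request_round_trip_s,
                          disk_read_s, queueing_delay_s=0.0):
    # Buffering must hold enough data, at the stream rate, to ride out
    # the full request latency: request/response transit plus the
    # storage node's disk read and any queueing delay.
    latency_s = request_round_trip_s + disk_read_s + queueing_delay_s
    return int(stream_rate_bytes_per_s * latency_s)
```

For a 250 KB/sec stream with a 100 ms round trip and a 500 ms disk read, roughly 150 KB of buffering covers the latency; shrinking the latency (as the just-in-time scheme below does) shrinks the buffer proportionally.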
- The reduction of latency relies on a just-in-time scheduling algorithm at every stage.
- The basic principle is that at every stage in the data flow for a stream, the data is available when the request for that data arrives. This reduces latency to the time needed for sending the request and performing the data transfer.
- When the communication node 14 sends a request to the storage node 16 for data for a specific stream, the storage node 16 can respond to the request almost immediately. This characteristic is important to the solution of the contention problem described above.
- A storage node 16 can anticipate when a next request for data for a specific stream can be expected. The identity of the data to be supplied in response to the request is also known. The storage node 16 also knows where the data is stored and the expected requests for the other streams. Given this information and the expected time to process a read request from a disk, the storage node 16 schedules a read operation so that the data is available just before the request from the communication node 14 arrives. For example, if the stream data rate is 250 KB/sec, and a storage node 16 contains every 4th segment of a video, requests for data for that stream will arrive every 4 seconds. If the time to process a read request is 500 msec (with the requisite degree of confidence that the read request will complete in 500 msec), then the read is scheduled for at least 500 msec before the anticipated receipt of the request from the communication node 14.
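The worked example above can be expressed as a small helper. This is a sketch of the anticipation arithmetic only, assuming fixed-size segments (here a hypothetical 1-second, 250,000-byte segment reproduces the patent's numbers) and ignoring the confidence margin:

```python
def jit_read_time(stream_rate_bytes_per_s, segment_bytes, node_stride,
                  read_time_s, last_request_s=0.0):
    # With every `node_stride`-th segment on this node, requests for the
    # stream arrive every node_stride * segment_bytes / rate seconds;
    # issue the disk read at least `read_time_s` before the next one so
    # the data is buffered when the communication node asks for it.
    inter_request_s = node_stride * segment_bytes / stream_rate_bytes_per_s
    next_request_s = last_request_s + inter_request_s
    return next_request_s - read_time_s
```

Plugging in the text's figures (250 KB/sec stream, every 4th segment on this node, 500 msec read) gives a read scheduled 3.5 seconds after the previous request, i.e. 500 msec before the next one is anticipated.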
- The control node 18 function is to provide an interface between media streamer 10 and the external world for control flow. It also presents a single system image to the external world even if the media streamer 10 is itself implemented as a distributed system.
- The control node functions are implemented by a defined Application Program Interface (API).
- The API provides functions for creating the video content in media streamer 10 as well as for real-time functions such as playing/recording of video data.
- The control node 18 forwards real-time requests to play or stop the video to the communication nodes 14.
- a communication node 14 has the following threads (in the same process) dedicated to handle a real time video interface: a thread to handle connect/disconnect requests, a thread to handle play/stop and pause/resume requests, and a thread to handle a jump request (seek forward or seek backward). In addition it has an input thread that reads data for a stream from the storage nodes 16 and an output thread that writes data to the output ports.
- a data flow structure in a communication node 14 for handling data during the playing of a video is depicted in Fig. 3.
- the data flow structure includes an input thread 100 that obtains data from a storage node 16.
- the input thread 100 serializes receipt of data from storage nodes so that only one storage node is sending data at any one time.
- the input thread 100 ensures that when an output thread 102 needs to write out of a buffer for a stream, the buffer is already filled with data.
- there is a scheduler function 104 that schedules both the input and output operations for the streams. This function is used by both the input and output threads 100 and 102.
- the request queue 106 for the output thread 102 contains requests that identify the stream and point to an associated buffer that needs to be emptied. These requests are arranged in order by the time at which they need to be written to the video output interface.
- when the output thread 102 empties a buffer, it marks it as empty and invokes the scheduler function 104 to queue the request in an input queue 108 for the stream to the input thread (for the buffer to be filled).
- the queue 108 for the input thread 100 is also arranged in order by the time at which buffers need to be filled.
- Input thread 100 also works off the request queue 108 arranged by request time. Its task is to fill the buffer from a storage node 16. For each request in its queue, the input thread 100 takes the following actions. The input thread 100 determines the storage node 16 that has the next segment of data for the stream (the data for a video stream is preferably striped across a number of storage nodes). The input thread 100 then sends a request to the determined storage node (using messages through switch 12) requesting data for the stream, and then waits for the data to arrive.
- This protocol ensures that only one storage node 16 will be sending data to a particular communications node 14 at any time, i.e., it removes the conflict that may arise if the storage nodes were to send data asynchronously to a communications node 14.
- when the data arrives, the input thread 100 marks the buffer as full and invokes the scheduler 104 to queue a request (based on the stream's data rate) to the output thread 102 to empty the buffer.
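The buffer cycle between the output thread 102, scheduler 104, and input thread 100 can be sketched as follows. This is an illustrative Python model with assumed data structures and names; the patent describes the design, not an implementation:

```python
import heapq
import itertools

_seq = itertools.count()  # tie-breaker so the heap never compares buffers

class Scheduler:
    """Model of scheduler function 104: time-ordered request queues for
    the input thread (buffers to fill) and output thread (buffers to empty)."""

    def __init__(self):
        self.input_queue = []    # queue 108: (time, seq, stream, buffer)
        self.output_queue = []   # queue 106: (time, seq, stream, buffer)

    def schedule_fill(self, when, stream, buf):
        heapq.heappush(self.input_queue, (when, next(_seq), stream, buf))

    def schedule_empty(self, when, stream, buf):
        heapq.heappush(self.output_queue, (when, next(_seq), stream, buf))

sched = Scheduler()
interval = 1.0  # a 250,000-byte buffer lasts 1 second at 250KB/sec

# Output thread empties a buffer at t=5.0, marks it empty, and queues a
# request for it to be refilled one interval later.
buf = {"state": "empty"}
sched.schedule_fill(5.0 + interval, "stream-1", buf)

# Input thread takes the earliest fill request, fills the buffer from a
# storage node, and queues it back to the output thread to be emptied.
t, _, stream, buf = heapq.heappop(sched.input_queue)
buf["state"] = "full"
sched.schedule_empty(t, stream, buf)
assert sched.output_queue[0][0] == 6.0 and buf["state"] == "full"
```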
- the structure of the storage node 16 for data flow to support the playing of a stream is depicted in Fig. 4.
- the storage node 16 has a pool of buffers that contain video data. It has an input thread 110 for each of the logical disk drives and an output thread 112 that writes data out to the communications nodes 14 via the switch matrix 12. It also has a scheduler function 114 that is used by the input and output threads 110, 112 to schedule operations. It also has a message thread 116 that processes requests from communications nodes 14 requesting data.
- when a message is received from a communications node 14 requesting data, the message thread 116 will normally find the requested data already buffered, and queues the request (queue 118) to the output thread. The requests are queued in time order. The output thread 112 will empty the buffer and add it to the list of free buffers. Each of the input threads 110 has its own request queue. For each of the active streams that have video data on the associated disk drive, a queue 120 ordered by request time (based on the data rate, level of striping, etc.) to fill the next buffer is maintained. The input thread takes the first request in queue 120, associates a free buffer with it, and issues an I/O request to fill the buffer with the data from the disk drive.
- when the buffer is filled, it is added to the list of full buffers. This is the list that is checked by the message thread 116 when the request for data for the stream is received. When a message for data is received from a communication node 14 and the required buffer is not full, it is considered to be a missed deadline.
- a just-in-time scheduling technique is used in both the communications nodes 14 and the storage nodes 16.
- the technique employs the following parameters:
- the requests are scheduled at a frequency determined by the expressions given above, and are scheduled so that they complete in advance of when the data is needed. This is accomplished by "priming" the data pipe with data at the start of playing a video stream.
- Calculations of sfc and dfc are made at connect time, in both the communication node 14 playing the stream and the storage nodes 16 containing the video data.
- the frequency (or its inverse, the interval) is used in scheduling input from disk in the storage node 16 (see Fig. 4) and in scheduling the output to the port (and input from the storage nodes) in the communication node 14 (see Fig. 3).
- the communication node 14 responsible for playing the stream will schedule input and output requests at the frequency of 1/sec. or at intervals of 1.0 seconds. Assuming that the communication node 14 has two buffers dedicated for the stream, the communication node 14 ensures that it has both buffers filled before it starts outputting the video stream.
- the communication node 14 will have sent messages to all four storage nodes 16 containing a stripe of the video data.
- the first two of the storage nodes will anticipate the requests for the first segment from the stripes and will schedule disk requests to fill the buffers.
- the communication node 14 will schedule input requests (see Fig. 3) to read the first two segments into two buffers, each of size 250,000 bytes.
- the communication node 14 will first ensure that the two buffers are full, and then informs all storage nodes 16 that play is about to commence. It then starts playing the stream.
- the communication node 14 requests data from a storage node 16.
- the communication node 14 then requests data from each of the storage nodes, in sequence, at intervals of one second, i.e. it will request data from a specific storage node at intervals of four seconds. It always requests 250,000 bytes of data at a time.
- the calculation of the frequency at which a communication node 14 requests data from the storage nodes 16 is done by the communication node 14 at connect time.
- the storage nodes 16 anticipate the requests for the stream data as follows.
- the storage node 16 containing stripe 3 (see section H below) can expect a request for the next 250,000 byte segment one second after the play has commenced, and every four seconds thereafter.
- the storage node 16 containing stripe 4 can expect a request two seconds after the play has commenced and every four seconds thereafter.
- the storage node 16 containing stripe 2 can expect a request four seconds after play has commenced and every four seconds thereafter. That is, each storage node 16 schedules the input from disk at a frequency of 250,000 bytes every four seconds from some starting time (as described above). The scheduling is accomplished in the storage node 16 after receipt of the play command and after a buffer for the stream has been output. The calculation of the request frequency is done at the time the connect request is received.
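The per-stripe anticipation times above follow a simple pattern: with four stripes, one-second intervals, and the first two stripes pre-read before play commences, the node holding stripe k sees its first request ((k - 3) mod 4) + 1 seconds into play. A hedged Python sketch (the function is illustrative, and the time for stripe 1 is inferred from the pattern, not stated in the text):

```python
def first_request_time(stripe: int, num_stripes: int = 4,
                       interval: float = 1.0) -> float:
    """Seconds after play starts when the storage node holding `stripe`
    can expect its first request; the first two stripes are pre-read
    before play commences (see the example above)."""
    return ((stripe - 3) % num_stripes + 1) * interval

# Matches the text: stripe 3 at 1 s, stripe 4 at 2 s, stripe 2 at 4 s,
# and every four seconds thereafter.
assert [first_request_time(s) for s in (3, 4, 1, 2)] == [1.0, 2.0, 3.0, 4.0]
```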
- the buffer size at the communication node 14 may be 50,000 bytes and the buffer size at the storage node 16 may be 250,000 bytes.
- the frequency of requests at the communication node 14 will be (250,000/50,000 =) 5/sec., or every 0.2 seconds, while the frequency at the storage node 16 will remain at 1/sec.
- the communication node 14 reads the first two buffers (100,000 bytes) from the storage node containing the first stripe (note that the segment size is 250,000 bytes and the storage node 16 containing the first segment will schedule the input from disk at connect time).
- the communication node 14 informs the storage nodes 16 of same and outputs the first buffer.
- the communication node 14 schedules the next input.
- the buffers will empty every 0.2 seconds and the communication node 14 requests input from the storage nodes 16 at that frequency, and also schedules output at the same frequency.
- storage nodes 16 can anticipate five requests to arrive at intervals of 0.2 seconds (except for the first segment, where 100,000 bytes have already been read, so initially only three requests will come after commencement of play); each subsequent sequence of five requests (each for 50,000 bytes) will arrive four seconds after the last request of the previous sequence. Since the buffer size at the storage node is 250,000 bytes, the storage nodes 16 will schedule the input from disk every four seconds (just as in the example above).
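The interval arithmetic in both examples reduces to buffer size divided by stream data rate at the communication node, and segment size times stripe width divided by data rate at each storage node. A sketch, assuming these correspond to the sfc/dfc calculations mentioned above (the function names are illustrative):

```python
RATE = 250_000  # stream data rate in bytes/sec (example value from the text)

def comm_node_interval(buffer_bytes, rate=RATE):
    """Seconds between output (and input) operations at a communication node."""
    return buffer_bytes / rate

def storage_node_interval(segment_bytes, stripe_width, rate=RATE):
    """Seconds between disk reads at one storage node that holds every
    `stripe_width`-th segment of the video."""
    return stripe_width * segment_bytes / rate

assert comm_node_interval(250_000) == 1.0        # first example: 1/sec
assert comm_node_interval(50_000) == 0.2         # second example: 5/sec
assert storage_node_interval(250_000, 4) == 4.0  # disk input every 4 seconds
```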
- the following steps trace the control and data flow for the playing action of a stream.
- the steps are depicted in Figure 5 for setting up a video for play.
- the steps are in time order.
- the input and output threads continue to deliver the video presentation to the specified port until a stop/pause command is received or the video completes.
- Media streamer 10 is a passive server, which performs video server operations when it receives control commands from an external control system.
- Figure 7 shows a system configuration for media streamer 10 applications and illustrates the interfaces present in the system.
- Media streamer 10 provides two levels of interfaces for users and application programs to control its operations: a user interface ((A) in Fig. 7); and an application program interface ((B) in Fig. 7).
- Both levels of interface are provided on client control systems, which communicate with the media streamer 10 through a remote procedure call (RPC) mechanism.
- RPC remote procedure call
- Media streamer 10 provides two types of user interfaces: a command line interface; and a graphical user interface.
- the command line interface displays a prompt on the user console or interface (65,66 of Fig. 1). After the command prompt, the user enters a command, starting with a command keyword followed by parameters. After the command is executed, the interface displays a prompt again and waits for the next command input.
- the media streamer command line interface is especially suitable for the following two types of operations:
- Batch control involves starting execution of a command script that contains a series of video control commands.
- a command script can be prepared in advance to include pre-recorded, scheduled programs for an extended period of time. At the scheduled start time, the command script is executed by a single batch command to start broadcasting without further operator intervention.
- Automatic control involves executing a list of commands generated by a program to update/play materials stored on media streamer 10. For example, a news agency may load new materials into the media streamer 10 every day.
- An application control program that manages the new materials can generate media streamer commands (for example, Load, Delete, Unload) to update the media streamer 10 with the new materials.
- the generated commands may be piped to the command line interface for execution.
- Fig. 8 is an example of the media streamer graphical user interface.
- the interface resembles the control panel of a video cassette recorder, which has control buttons such as Play, Pause, Rewind, and Stop.
- it also provides selection panels when an operation involves a selection by the user (for example, load requires the user to select a video presentation to be loaded.)
- the graphical user interface is especially useful for direct user interactions.
- a "Batch” button 130 and an “Import/Export” button 132 are included in the graphical user interface. Their functions are described below.
- Media streamer 10 provides three general types of user functions: Import/Export; VCR-like play controls; and Advanced user controls.
- Import/Export functions are used to move video data into and out of the media streamer 10.
- for the import function, the source of the video data is specified as a file or a device of the client control system, and the target is specified with a unique name within media streamer 10.
- for the export function, the source of the video data is specified by its name within media streamer 10, and the target is specified as a file or a device of the client control system.
- media streamer 10 also provides a "delete” function to remove a video and a “get attributes” function to obtain information about stored videos (such as name, data rate).
- Import/Export functions through the graphical user interface
- Media streamer 10 provides a set of VCR-like play controls.
- the media streamer graphical user interface in Fig. 8 shows that the following functions are available: Load, Eject, Play, Slow, Pause, Stop, Rewind, Fast Forward and Mute. These functions are activated by clicking on the corresponding soft buttons on the graphical user interface.
- the media streamer command line interface provides a similar set of functions:
- Setup - sets up a video for a specific output port. Analogous to loading a video cassette into a VCR.
- Status - displays the status of ports, such as which video is playing, elapsed playing time, etc.
- Play list - sets up multiple videos and their sequence to be played on a port.
- Play length - limits the time a video will be played.
- Batch operation - performs a list of operations stored in a command file.
- the Play list and Play length controls are accomplished with a "Load” button 134 on the graphical user interface.
- Each "setup" command will specify a video to be added to the Play list for a specific port. It also specifies a time limit for which the video will be played.
- Fig. 9 shows the panel which appears in response to clicking on the "load” soft button 134 on the graphical user interface to select a video to be added to the play list and to specify the time limit for playing the video.
- when the user clicks on a file name in the "Files" box 136, the name is entered into the "File Name" box 138.
- when the user clicks on the "Add" button 140, the file name in the "File Name" box 138 is appended to the "Play List" box 142 with its time limit, and the current play list (with the time limit of each video on the play list) is displayed.
- the batch operation is accomplished by using a "Batch" soft button 130 on the graphical user interface (see Fig. 8).
- when the "Batch" soft button is clicked, a batch selection panel is displayed for the user to select or enter the command file name (see Fig. 10). Pressing an "Execute" button 144 on the batch selection panel starts the execution of the commands in the selected command file.
- Fig. 10 is an example of the "Batch” and "Execute” operation on the graphical user interface.
- the user has first created a command script in a file “batch2" in the c:/batchcmd directory. The user then clicks on "Batch” button 130 on the graphical user interface shown in Fig. 8 to bring up the Batch Selection panel. Next, the user clicks on "c:/batchcmd" in "Directory" box 146 of the Batch Selection panel.
- Media streamer 10 provides the above-mentioned Application Program Interface (API) so that application control programs can interact with media streamer 10 and control its operations (reference may be made again to Fig. 7).
- the API consists of remote procedure call (RPC)-based procedures.
- Application control programs invoke the API functions by making procedure calls.
- the parameters of the procedure call specify the functions to be performed.
- the application control programs invoke the API functions without regard to the logical and physical location of media streamer 10.
- the identity of a media streamer 10 to provide the video services is established at either the client control system startup time or, optionally, at the application control program initiation time. Once the identity of media streamer 10 is established, the procedure calls are directed to the correct media streamer 10 for servicing.
- API functions are processed synchronously, i.e., once a function call is returned to the caller, the function is completed and no additional processing at media streamer 10 is needed.
- because the API functions are configured as synchronous operations, the additional processing overhead of context switching, asynchronous signalling, and feedback is avoided. This performance benefit is important in video server applications due to the stringent real-time requirements.
- the processing of API functions is performed in the order that requests are received. This ensures that user operations are processed in the correct order. For example, a video must be connected (setup) before it can be played. As another example, switching the order of a "Play" request followed by a "Pause" request would have a completely different result for the user.
- a VS-PLAY function initiates the playing of the video and returns control to the caller immediately (without waiting for the completion of the video play).
- the rationale for this architecture is that, since the time for playing a video is typically long (minutes to hours) and unpredictable (there may be pause or stop commands), making the VS-PLAY function asynchronous frees up the resources that would otherwise be allocated for an unpredictably long period of time.
- at completion of video play, media streamer 10 generates an asynchronous call to a system/port address specified by the application control program to notify the application control program of the video completion event.
- the system/port address is specified by the application control program when it calls the API VS-CONNECT function to connect the video. It should be noted that the callback system/port address for VS-PLAY is specified at the individual video level. That means the application control programs have the freedom of directing video completion messages to any control point. For example, one application may desire the use of one central system/port to process the video completion messages for many or all of the client control systems. In another application, several different system/port addresses may be employed to process the video completion messages for one client control system.
- media streamer 10 is enabled to support multiple concurrent client control systems with heterogeneous hardware and software platforms, with efficient processing of both synchronous and asynchronous types of operations, while ensuring the correct sequencing of the operation requests.
- the media streamer 10 may use an IBM OS/2 operating system running on a PS/2 system, while a client control system may use an IBM AIX operating system running on an RS/6000 system (IBM, OS/2, PS/2, AIX, and RS/6000 are all trademarks of the International Business Machines Corporation).
- Fig. 11 shows the RPC structure for the communications between a client control system 11 and the media streamer 10.
- the client control system 11 functions as the RPC client and the media streamer 10 functions as the RPC server. This is indicated at (A) in Fig. 11.
- for the asynchronous function (i.e., VS-PLAY), the media streamer 10 is the RPC client. This is indicated at (B) in Fig. 11.
- the user command line interface is comprised of three internal parallel processes (threads).
- a first process parses a user command line input and performs the requested operation by invoking the API functions, which result in RPC calls to the media streamer 10 ((A) in Figure 11). This process also keeps track of the status of videos being set up and played for various output ports.
- a second process periodically checks the elapsed playing time of each video against its specified time limit. If a video has reached its time limit, the video is stopped and disconnected, and the next video in the wait queue (if any) for the same output port is started.
- a third process in the client control system 11 functions as an RPC server to receive the VS-PLAY asynchronous termination notification from the media streamer 10 ((B) in Fig. 11).
- a first process functions as an RPC server for the API function calls coming from the client control system 11 ((A) in Fig. 11).
- the first process receives the RPC calls and dispatches the appropriate procedures to perform the requested functions (such as VS-CONNECT, VS-PLAY, VS-DISCONNECT).
- a second process functions as an RPC client for calling the appropriate client control system addresses to notify the application control programs with asynchronous termination events. The process blocks itself waiting on an internal pipe, which is written by other processes that handle the playing of videos.
- An aspect of this invention provides integrated mechanisms for tailoring cache management and related I/O operations to the video delivery environment. This aspect of the invention is now described in detail.
- Prior art mechanisms for cache management are built into cache controllers and the file subsystems of operating systems. They are designed for general purpose use, and are not specialized to meet the needs of video delivery.
- Fig. 12 illustrates one possible way in which a conventional cache management mechanism may be configured for video delivery.
- This technique employs a video split between two disk files 160, 162 (because it is too large for one file), and a processor containing a file system 164, a media server 168, and a video driver 170. Also illustrated are two video adapter ports 172, 174 for two video streams. Also illustrated is the data flow to read a segment of disk file 160 into main storage, and to subsequently write the data to a first video port 172, and also the data flow to read the same segment and write it to a second video port 174.
- Fig. 12 is used to illustrate problems incurred by the prior art which are addressed and overcome by the media streamer 10 of this invention.
- Steps A2 and A3 are repeated multiple times.
- Steps A5 and A6 are repeated multiple times.
- Steps A7-A12 function in a similar manner, except that port 1 is changed to port 2. If a part of Sk is in the cache in file system 166 when needed for port 2, then step A8 may be skipped.
- video delivery involves massive amounts of data being transferred over multiple data streams.
- the overall usage pattern fits neither of the two traditional patterns used to optimize caching: random and sequential. If the random option is selected, most cache buffers will probably contain data from video segments which have been recently read, but will have no video stream in line to read them before they have expired. If the sequential option is chosen, the most recently used cache buffers are re-used first, so there is even less chance of finding the needed segment part in the file system cache.
- an important element of video delivery is that the data stream be delivered isochronously, that is without breaks and interruptions that a viewer or user would find objectionable.
- Prior art caching mechanisms cannot ensure the isochronous delivery of a video data stream to a user.
- videos are stored and managed in fixed size segments.
- the segments are sequentially numbered so that, for example, segment 5 would store a portion of a video presentation that is nearer to the beginning of the presentation than would a segment numbered 6.
- the segment size is chosen to optimize disk I/O, video I/O, bus usage and processor usage.
- a segment of a video has a fixed content, which depends only on the video name, and the segment number. All I/O to disk and to the video output, and all caching operations, are done aligned on segment boundaries.
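Because a segment's content depends only on the video name and the segment number, segment-aligned addressing is straightforward. An illustrative sketch (the 250,000-byte size is the example value used earlier in the text; the function names are assumptions):

```python
SEGMENT_BYTES = 250_000  # example segment size used earlier in the text

def segment_number(byte_offset):
    """Segment (numbered from 1) that holds a given byte of the video."""
    return byte_offset // SEGMENT_BYTES + 1

def segment_bounds(seg):
    """Aligned [start, end) byte range of a segment; all disk and video
    I/O and all caching operations are done on these boundaries."""
    start = (seg - 1) * SEGMENT_BYTES
    return start, start + SEGMENT_BYTES

assert segment_number(0) == 1
assert segment_number(250_000) == 2  # first byte of the second segment
assert segment_bounds(2) == (250_000, 500_000)
```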
- This aspect of the invention takes two forms, depending on whether the underlying hardware supports peer-to-peer operations with data flow directly between disk and video output card in a communications node 14, without passing through cache memory in the communications node.
- when peer-to-peer operations are supported, caching is done at the disk storage unit 16.
- otherwise, data is read directly into page-aligned, contiguous cache memory (in a communications node 14) in segment-sized blocks to minimize I/O operations and data movement. (See F. Video Optimized Digital Memory Allocation, below).
- the data remains in the same location and is written directly from this location until the video segment is no longer needed. While the video segment is cached, all video streams needing to output the video segment access the same cache buffer. Thus, a single copy of the video segment is used by many users, and the additional I/O, processor, and buffer memory usage to read additional copies of the same video segment is avoided. For peer to peer operations, half of the remaining I/O and almost all of the processor and main memory usage are avoided at the communication nodes 14.
- Fig. 13 illustrates an embodiment of the invention for the case of a system without peer-to-peer operations.
- the video data is striped on the disk storage nodes 16 so that odd numbered segments are on first disk storage node 180 and even numbered segments are on second disk storage node 182 (see Section H below).
- segment Sk is to be read from disk 182 into a cache buffer 184 in communication node 186, and is then to be written to video output ports 1 and 2.
- the Sk video data segment is read directly into cache buffer 184 with one I/O operation, and is then written to port 1.
- the Sk video data segment is written from cache buffer 184 to port 2 with one I/O operation.
- Fig. 14 illustrates the data flow for a configuration containing support for peer-to-peer operations between a disk storage node and a video output card.
- a pair of disk drives 190, 192 contain a striped video presentation which is fed directly to a pair of video ports 194, 196 without passing through the main memory of an intervening communication node 14.
- the data flow for this configuration is to read segment Sk from disk 192 directly to port 1 (with one I/O operation) via disk cache buffer 198.
- segment Sk is read directly from disk cache buffer 198 into port 2 (with one I/O operation).
- a combination of the peer-to-peer and main memory caching mechanisms may also be employed, e.g., using peer-to-peer operations for video presentations which are playing to only one port of a communication node 14, and caching in the communications node 14 for video presentations which are playing to multiple ports of the communication node 14.
- a policy for dividing the caching responsibility between the disk storage nodes and the communication node is chosen to maximize the number of video streams which can be supported with a given hardware configuration. If the number of streams to be supported is known, then the amount and placement of caching storage can be determined.
- a predictive caching mechanism meets the need for a caching policy well suited to video delivery.
- Video presentations are in general very predictable. Typically, they start playing at the beginning, play at a fixed rate for a fairly lengthy predetermined period, and stop only when the end is reached.
- the caching approach of the media streamer 10 takes advantage of this predictability to optimize the set of video segments which are cached at any one time.
- the predictability is used both to schedule a read operation to fill a cache buffer, and to drive the algorithm for reclaiming of cache buffers. Buffers whose contents are not predicted to be used before they would expire are reclaimed immediately, freeing the space for higher priority use. Buffers whose contents are in line for use within a reasonable time are not reclaimed, even if their last use was long ago.
- a cache buffer containing a video segment which is not predicted to be played by any of the currently playing video streams is re-used before re-using any buffers which are predicted to be played.
- the frequency of playing the video and the segment number are used as weights to determine a priority for keeping the video segment cached.
- the highest retention priority within this group is assigned to video segments that occur early in a frequently played video.
- the next predicted play time and the number of streams left to play the video segment are used as weights to determine the priority for keeping the video segment cached.
- the weights essentially allow the retention priority of a cache buffer to be set to the difference between the predicted number of I/Os (for any video segment) with the cache buffer reclaimed, and the predicted number with it retained.
- buffers containing v5 data already used by s7 are reclaimed first, followed by buffers containing v8 data already used by s2, followed by buffers containing v4 data already used by s12, followed by remaining buffers with the lowest retention priority.
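The patent does not give an exact weighting formula, but the reclamation order above can be modeled with a simple retention-priority function that favors segments with imminent predicted use by many remaining streams. This is one plausible policy, not the patent's method; all names below are illustrative:

```python
def retention_priority(streams_left, seconds_to_next_play):
    """Illustrative weighting: zero means no predicted use (reclaim first);
    otherwise sooner predicted use and more remaining streams both raise
    the retention priority of the cached segment."""
    if streams_left <= 0 or seconds_to_next_play is None:
        return 0.0
    return streams_left / max(seconds_to_next_play, 1e-9)

# Buffers whose segments no stream is predicted to play are reclaimed
# before any buffer that is predicted to be played.
cached = [
    ("v5-seg-already-used-by-s7", retention_priority(0, None)),
    ("seg-due-in-2s-by-3-streams", retention_priority(3, 2.0)),
    ("seg-due-in-60s-by-1-stream", retention_priority(1, 60.0)),
]
reclaim_order = [name for name, prio in sorted(cached, key=lambda c: c[1])]
assert reclaim_order[0] == "v5-seg-already-used-by-s7"
```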
- connection operations where it is possible to predict that a video segment will be played in the near future, but not exactly when
- stop operations when previous predictions must be revised
- the clustering of streams using a same video presentation is also taken into account during connection and play operations.
- VS-PLAY-AT-SIGNAL can be used to start playing a video on multiple streams at the same time. This improves clustering, leaving more system resources for other video streams, enhancing the effective capacity of the system. More specifically, clustering, by delaying one stream for a short period so that it coincides in time with a second stream, enables one copy of segments in cache to be used for both streams and thus conserves processing assets.
- Digital video data has attributes unlike those of normal data processing data in that it is non-random, that is sequential, large, and time critical rather than content critical. Multiple streams of data must be delivered at high bit rates, requiring all nonessential overhead to be minimized in the data path. Careful buffer management is required to maximize the efficiency and capacity of the media streamer 10. Memory allocation, deallocation, and access are key elements in this process, and improper usage can result in memory fragmentation, decreased efficiency, and delayed or corrupted video data.
- the media streamer 10 of this invention employs a memory allocation procedure which allows high level applications to allocate and deallocate non-swappable, page aligned, contiguous memory segments (blocks) for digital video data.
- the procedure provides a simple, high level interface to video transmission applications and utilizes low level operating system modules and code segments to allocate memory blocks in the requested size.
- the memory blocks are contiguous and fixed in physical memory, eliminating the delays or corruption possible from virtual memory swapping or paging, and the complexity of having to implement gather/scatter routines in the data transmission software.
- the high level interface also returns a variety of addressing mode values for the requested memory block, eliminating the need to do costly dynamic address conversion to fit the various memory models that can be operating concurrently in a media streamer environment.
- the physical address is available for direct access by other device drivers, such as a fixed disk device, as well as the process linear and process segmented addresses that are used by various applications.
- a deallocation routine is also provided that returns a memory block to the system, eliminating fragmentation problems since the memory is all returned as a single block.
- a control block is returned with the various memory model addresses of the memory area, along with the length of the block.
- a device driver is defined in the system configuration files and is automatically initialized as the system starts.
- An application then opens the device driver as a pseudo device to obtain its label, then uses the interface to pass the commands and parameters.
- the supported commands are Allocate Memory and Deallocate Memory, the parameters are memory size and pointers to the logical memory addresses. These addresses are set by the device driver once the physical block of memory has been allocated and the physical address is converted to logical addresses. A null is returned if the allocation fails.
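The allocate/deallocate pseudo-device described above can be sketched as a small model. This is a toy illustration only: the class and field names (`VideoBufferAllocator`, `MemControlBlock`) and the address-conversion arithmetic are assumptions, not the patent's implementation; a real driver would perform physical allocation and track a proper free list.

```python
from dataclasses import dataclass

PAGE = 4096  # assumed page size

@dataclass
class MemControlBlock:
    # Control block returned to the caller: one block, several address views,
    # plus the length -- mirroring the interface described in the text.
    physical_addr: int   # for direct access by other device drivers (e.g., disk)
    linear_addr: int     # process linear (32-bit flat) address
    segmented_addr: int  # process segmented (16-bit) address, toy encoding
    length: int

class VideoBufferAllocator:
    """Toy model of the Allocate Memory / Deallocate Memory pseudo-device.

    Memory is handed out as page-aligned, contiguous, non-swappable blocks
    and always returned whole, so the pool never fragments.  A simple bump
    pointer stands in for real physical allocation.
    """
    def __init__(self, pool_base: int, pool_size: int):
        self.base = pool_base
        self.top = pool_base + pool_size
        self.next_free = pool_base
        self.blocks = {}  # physical_addr -> length

    def allocate(self, size: int):
        size = -(-size // PAGE) * PAGE           # round up to a page multiple
        if self.next_free + size > self.top:
            return None                          # allocation failed: null returned
        phys = self.next_free
        self.next_free += size
        self.blocks[phys] = size
        # Address conversions are fixed once at allocation time, so callers
        # never pay for dynamic address conversion later.
        return MemControlBlock(
            physical_addr=phys,
            linear_addr=phys + 0x1000_0000,      # assumed linear mapping offset
            segmented_addr=(phys >> 4) << 16,    # assumed seg:off style view
            length=size,
        )

    def deallocate(self, cb: MemControlBlock):
        # The whole block comes back in one piece -> no fragmentation.
        del self.blocks[cb.physical_addr]
        if cb.physical_addr + cb.length == self.next_free:
            self.next_free = cb.physical_addr    # trivially reclaim the tail
```

A null return on failure and the single-block deallocation match the behavior the text describes; everything else is sketch-level detail.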
- Fig. 15 shows a typical set of applications that would use this procedure.
- Buffer 1 is requested by a 32-bit application for data that is modified and then placed into buffer 2.
- This buffer can then be directly manipulated by a 16 bit application using a segmented address, or by a physical device such as a fixed disk drive.
- a video application may use this approach to minimize data movement by placing the digital video data in the buffer directly from the physical disk, then transferring it directly to the output device without moving it several times in the process.
- Video streams must be delivered to their destination isochronously, that is, without delays that can be perceived by the human eye as discontinuities in movement or by the ear as interruptions in sound.
- Current disk technology may involve periodic internal actions, such as predictive failure analysis, that can cause significant delays in data access. While most I/O operations complete within 100 ms, periodic delays of 100 ms are common and delays of as much as three full seconds can occur.
- the media streamer 10 must also be capable of efficiently sustaining high data transfer rates.
- a disk drive configured for general purpose data storage and retrieval will suffer inefficiencies in the use of memory, disk buffers, SCSI bus and disk capacity if not optimized for the video server application.
- disk drives employed herewith are tailored for the role of smooth and timely delivery of large amounts of data by optimizing disk parameters.
- the parameters may be incorporated into the manufacture of disk drives specialized for video servers, or they may be variables that can be set through a command mechanism.
- Parameters controlling periodic actions are set to minimize or eliminate delays.
- Parameters affecting buffer usage are set to allow for transfer of very large amounts of data in a single read or write operation.
- Parameters affecting speed matching between a SCSI bus and a processor bus are tuned so that data transfer starts neither too soon nor too late.
- the disk media itself is formatted with a sector size that maximizes effective capacity and band-width.
- the physical disk media is formatted with a maximum allowable physical sector size. This formatting option minimizes the amount of space wasted in gaps between sectors, maximizes device capacity, and maximizes the burst data rate.
- a preferred implementation is 744 byte sectors.
- Disks may have an associated buffer.
- This buffer is used for reading data from the disk media asynchronously from availability of the bus for the transfer of the data. Likewise the buffer is used to hold data arriving from the bus asynchronously from the transfer of that data to the disk media.
- the buffer may be divided into a number of segments and the number is controlled by a parameter. If there are too many segments, each may be too small to hold the amount of data requested in a single transfer.
- When the buffer is full, the device must initiate reconnection and begin the transfer; if the bus or device is not available at this time, a rotational delay will ensue. In the preferred implementation, the segment-count parameter is set so that any buffer segment is at least as large as the data transfer size, e.g., set to one.
- the disk attempts to reconnect to the bus to effect a data transfer to the host.
- the point in time that the disk attempts this reconnection affects the efficiency of bus utilization.
- the relative speeds of the bus and the disk determine the best point in time during the fill operation to begin data transfer to the host.
- the buffer will fill as data arrives from the host and, at a certain point in the fill process, the disk should attempt a reconnection to the bus. Accurate speed matching results in fewer disconnect/reselect cycles on the SCSI bus with resulting higher maximum throughput.
- the parameters that control when the reconnection is attempted are called "read buffer full ratio” and "write buffer empty ratio”.
- Presently preferred values for buffer-full and buffer-empty ratios are approximately 204.
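Per the SCSI-2 disconnect/reconnect mode page convention, these ratio parameters are the numerator of a fraction whose denominator is 256, so a value of 204 corresponds to reconnecting when the buffer is roughly 80% full (read) or 80% empty (write). A small sketch of that arithmetic (the function names are illustrative, not from the patent):

```python
def ratio_fraction(ratio_param: int) -> float:
    """SCSI buffer-full / buffer-empty ratios are expressed as the
    numerator of a fraction whose denominator is 256 (SCSI-2 convention)."""
    return ratio_param / 256

def reconnect_threshold_bytes(buffer_bytes: int, ratio_param: int) -> int:
    """Bytes that must be in (read) or freed from (write) the drive buffer
    before the drive attempts to reconnect to the bus."""
    return int(buffer_bytes * ratio_fraction(ratio_param))
```

For an assumed 512 KB drive buffer, a ratio of 204 means the drive waits for about 408 KB before attempting reconnection, which is the speed-matching point tuned in the text.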
- Some disk drive designs require periodic recalibration of head position with changes in temperature. Some of these disk drive types further allow control over whether thermal compensation is done for all heads in an assembly at the same time, or whether thermal compensation is done one head at a time. If all heads are done at once, delays of hundreds of milliseconds during a read operation for video data may ensue. Longer delays in read times result in the need for larger main memory buffers to smooth data flow and prevent artifacts in the multimedia presentation.
- the preferred approach is to program the Thermal Compensation Head Control function to allow compensation of one head at a time.
- Limit Idle Time Function parameters can be used to inhibit the saving of error logs and performing idle time functions. The preferred implementation sets a parameter to limit these functions.
- the media streamer 10 of this invention uses a technique for serving many simultaneous streams from a single copy of the data. The technique takes into account the data rate for an individual stream and the number of streams that may be simultaneously accessing the data.
- the above-mentioned data striping involves the concept of a logical file whose data is partitioned to reside in multiple file components, called stripes. Each stripe is allowed to exist on a different disk volume, thereby allowing the logical file to span multiple physical disks.
- the disks may be either local or remote.
- a logical file for a video, video 1, is segmented into M segments or blocks, each of a specific size, e.g., 256 KB. The last segment may be only partially filled with data.
- a segment of data is placed in the first stripe, followed by a next segment that is placed in the second stripe, etc.
- After a segment is placed in the Nth (last) stripe, the next segment is written to the first stripe.
- stripe 1 will contain the segments 1, N+1, 2*N+1, etc.
- stripe 2 will contain the segments 2, N+2, 2*N+2, etc., and so on.
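The segment-to-stripe rule above reduces to simple modular arithmetic: 1-based segment k of an N-stripe video lands on stripe ((k - 1) mod N) + 1. A minimal sketch (function names are illustrative):

```python
def stripe_of(segment: int, n_stripes: int) -> int:
    """Segment k (1-based) of the logical video file lands on stripe
    ((k - 1) mod N) + 1, wrapping back to stripe 1 after stripe N."""
    return (segment - 1) % n_stripes + 1

def segments_on_stripe(stripe: int, n_stripes: int, m_segments: int):
    """All segments of an M-segment video held by one stripe:
    stripe s contains s, N+s, 2N+s, ..."""
    return [k for k in range(1, m_segments + 1)
            if stripe_of(k, n_stripes) == stripe]
```

With N = 4 and M = 12 this reproduces the stripe contents listed later in the text (stripe 1 holds segments 1, 5, 9; stripe 2 holds 2, 6, 10; and so on).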
- a similar striping of data is known to be used in data processing RAID arrangements, where the purpose of striping is to assure data integrity in case a disk is lost.
- Such a RAID storage system dedicates one of N disks to the storage of parity data that is used when data recovery is required.
- the disk storage nodes 16 of the media streamer 10 are organized as a RAID-like structure, but parity data is not required (as a copy of the video data is available from a tape store).
- Fig. 17 illustrates a first important aspect of this data arrangement, i.e., the separation of each video presentation into data blocks or segments that are spread across the available disk drives to enable each video presentation to be accessed simultaneously from multiple drives without requiring multiple copies.
- the concept is one of striping, not for data integrity reasons or performance reasons, per se, but for concurrency or bandwidth reasons.
- the media streamer 10 stripes video presentations by play segments, rather than by byte blocks, etc.
- stripe 1 is a file containing segments 1, 5, 9, etc. of video file 1;
- stripe 2 is a file containing segments 2, 6, 10, etc., of video file 1;
- stripe 3 is a file containing segments 3, 7, 11, etc. of the video file and
- stripe 4 is a file containing the segments 4, 8, 12, etc., of video file 1, until all M segments of video file 1 are contained in one of the four stripe files.
- parameters are computed as follows to customize the striping of each individual video.
- the segment size is selected so as to obtain a reasonably effective data rate from the disk. However, it cannot be so large as to adversely affect the latency. Further it should be small enough to buffer/cache in memory.
- a preferred segment size is 256KB, and is constant for video presentations of data rates in ranges from 128KB/sec. to 512KB/sec. If the video data rate is higher, then it may be preferable to use a larger segment size.
- the segment size depends on the basic unit of I/O operation for the range of video presentations stored on the same media. The principle employed is to use a segment size that contains approximately 0.5 to 2 seconds of video data.
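The 0.5-to-2-second sizing principle can be checked arithmetically: seconds of video per segment is simply segment size divided by stream data rate. A one-line sketch:

```python
def segment_seconds(segment_bytes: int, stream_rate_bps: int) -> float:
    """How many seconds of video one segment holds at a given stream rate."""
    return segment_bytes / stream_rate_bps
```

A 256 KB segment spans 2.0 s at 128 KB/s and 0.5 s at 512 KB/s, which is exactly why the preferred 256 KB segment is held constant across the 128-512 KB/s range cited above.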
- each disk has a logical volume associated with it.
- Each video presentation is divided into component files, as many components as the number of stripes needed.
- Each component file is stored on a different logical volume. For example, if video data has to be delivered at 250 KB/sec per stream and 30 simultaneous streams are supported from the same video, started at say 15 second intervals, a total data rate of at least 7.5 MB/sec is obtained. If a disk drive can support on the average 3 MB/sec., at least 3 stripes are required for the video presentation.
- the effective rate at which data can be read from a disk is influenced by the size of the read operation. For example, if data is read from the disk in 4KB blocks (from random positions on the disk), the effective data rate may be 1MB/sec. whereas if the data is read in 256KB blocks the rate may be 3 MB/sec.
- the memory required for buffers also increases and the latency, the delay in using the data read, also increases because the operation has to complete before the data can be accessed.
- a size is selected based on the characteristics of the devices and the memory configuration.
- the size of the data transfer is the selected segment size.
- the effective data rate from a device is determined. For example, for some disk drives, a 256KB segment size provides a good balance for the effective use of the disk drives (effective data rate of 3 MB/sec.) and buffer size (256 KB).
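The block-size effect on effective data rate follows from a simple model: each read pays a fixed positioning overhead (seek plus rotational latency) before transferring at the media rate, so larger blocks amortize the overhead better. The 12 ms overhead and 4 MB/s media rate below are illustrative assumptions, not the document's measured figures:

```python
def effective_rate(block_bytes: int, media_rate: float, overhead_s: float) -> float:
    """Effective read rate when every operation pays a fixed positioning
    overhead before transferring at the media rate (bytes/sec)."""
    transfer_s = block_bytes / media_rate
    return block_bytes / (overhead_s + transfer_s)
```

Under these assumptions a 4 KB read achieves only a small fraction of the media rate, while a 256 KB read approaches it, mirroring the 1 MB/s versus 3 MB/s contrast described above.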
- the maximum number of streams that can be supported is limited by the effective data rate of the disk, e.g. if the effective data rate is 3MB/s and a stream data rate is 200KB/s, then no more than 15 streams can be supplied from the disk. If, for instance, 60 streams of the same video are needed then the data has to be duplicated on 4 disks. However, if striping is used in accordance with this invention, 4 disks of 1/4 the capacity can be used. Fifteen streams can be simultaneously played from each of the 4 stripes for a total of 60 simultaneous streams from a single copy of the video data. The start times of the streams are skewed to ensure that the requests for the 60 streams are evenly spaced among the stripes. Note also that if the streams are started close to each other, the need for I/O can be reduced by using video data that is cached.
- the number of stripes for a given video is influenced by two factors, the first is the maximum number of streams that are to be supplied at any time from the video and the other is the total number of streams that need to be supplied at any time from all the videos stored on the same disks as the video.
- s = maximum (r*n/d, r*m/d), where "r" is the data rate of a stream, "n" is the maximum number of simultaneous streams to be delivered from the video, "m" is the total number of simultaneous streams to be delivered from all videos stored on the same disks, and "d" is the effective data rate of a disk.
- the number of disks over which data for a video presentation is striped are managed as a set, and can be thought of as a very large physical disk. Striping allows a video file to exceed the size limit of the largest file that a system's physical file system will allow. The video data, in general, will not always require the same amount of storage on all the disks in the set. To balance the usage of the disk, when a video is striped, the striping is begun from the disk that has the most free space.
- In this example, n is 30, i.e., the maximum number of simultaneous streams to be delivered from the video.
- m is also 30, i.e., the total number of streams to be delivered from all disks.
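The stripe-count formula s = maximum(r*n/d, r*m/d), rounded up to whole stripes, can be sketched directly (parameter names follow the text's r, n, m, d):

```python
from math import ceil

def stripes_needed(r: float, n: int, m: int, d: float) -> int:
    """s = maximum(r*n/d, r*m/d), rounded up to whole stripes.
    r = per-stream data rate, n = max simultaneous streams from this video,
    m = total simultaneous streams from all videos on the same disks,
    d = effective data rate of one disk (all rates in the same units)."""
    return max(ceil(r * n / d), ceil(r * m / d))
```

Plugging in the earlier example (r = 250 KB/s, n = m = 30, d = 3 MB/s, all in KB/s) gives 3 stripes, matching the "at least 3 stripes" conclusion above.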
- the manner in which the algorithm is used in the media streamer 10 is as follows.
- the storage (number of disk drives) is divided into groups of disks. Each group has a certain capacity and capability to deliver a given number of simultaneous streams (at an effective data rate per disk based on a predetermined segment size).
- the segment size for each group is constant. Different groups may choose different segments sizes (and hence have different effective data rates).
- a group is first chosen by the following criteria.
- the segment size is consistent with the data rate of the video, i.e., if the stream data rate is 250,000 bytes/sec., the segment size is in the range of 125K to 500 KB.
- the next criterion is to ensure that the number of disks in the group is sufficient to support the maximum number of simultaneous streams, i.e., at least r*n/d disks, where "r" is the stream data rate, "n" the maximum number of simultaneous streams, and "d" the effective data rate of a disk in the group.
- the sum total of simultaneous streams that need to be supported from all of the videos in the disk group must not exceed its capacity. That is, if "m" is the capacity of the group, then "m - n" should be greater than or equal to the sum of all the streams that can be played simultaneously from the videos already stored in the group.
- the calculation is done in control node 18 at the time the video data is loaded into the media streamer 10.
- all disks will be in a single pool which defines the total capacity of the media streamer 10, both for storage and the number of supportable streams.
- the number of disks (or stripes) necessary to support a given number of simultaneous streams is calculated from the formula m*r/d, where m is the number of streams, r is the data rate for a stream, and d is the effective data rate for a disk. Note that if the streams can be of different rates, then m*r, in the above formula, should be replaced by: Max (sum of the data rates of all simultaneous streams).
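The m*r/d rule, including the mixed-rate substitution (replace m*r by the sum of the simultaneous stream rates), can be sketched as:

```python
from math import ceil

def stripes_for_streams(stream_rates, d: float) -> int:
    """Disks (stripes) needed to carry a set of simultaneous streams:
    m*r/d when all m streams share rate r; with mixed rates, the sum of
    all simultaneous stream rates replaces m*r.  d is the effective
    per-disk data rate, in the same units as the stream rates."""
    return ceil(sum(stream_rates) / d)
```

For 60 streams at 200 KB/s against 3 MB/s disks this yields 4 stripes, agreeing with the earlier 60-stream example.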
- the result of using this technique for writing the data is that the data can be read for delivering many streams at a specified rate without the need for multiple copies of the digital representation of the video presentation.
- By striping the data across multiple disk volumes the reading of one part of the file for delivering one stream does not interfere with the reading of another part of the file for delivering another stream.
- video servers generally fit one of two profiles. Either they use PC technology to build a low cost (but also low bandwidth) video server or they use super-computing technology to build a high bandwidth (also expensive) video server.
- An object of this invention, then, is to deliver high bandwidth video without the high cost of super-computer technology.
- a preferred approach to achieving high bandwidth at low cost is to use the low latency switch (crossbar circuit switch matrix) 18 to interconnect low cost PC based "nodes" into a video server (as shown in Fig. 1).
- An important aspect of the media streamer architecture is efficient use of the video stream bandwidth that is available in each of the storage nodes 16 and communication nodes 14. The bandwidth is maximized by exploiting the time-based bandwidth allocation capability of a low-cost switch technology.
- Fig. 18 shows a conventional logical connection between a switch interface and a storage node.
- the switch interface must be full duplex (i.e., information can be sent in either direction simultaneously) to allow the transfer of video (and control information) both into and out of the storage node. Because video content is written to the storage node once and then read many times, most of the bandwidth requirements for the storage node are in the direction towards the switch. In the case of a typical switch interface, the bandwidth of the storage node is under-utilized because that half of the bandwidth devoted to write capability is so infrequently used.
- Fig. 19 shows a switch interface in accordance with this invention.
- This interface dynamically allocates its total bandwidth in real time either into or out of the switch 18 to meet the current demands of the node.
- the storage node 16 is used as an example.
- the communication nodes 14 have similar requirements, but most of their bandwidth is in the direction from the switch 18.
- the dynamic allocation is achieved by grouping two or more of the physical switch interfaces, using appropriate routing headers for the switch 12, into one logical switch interface 18a.
- the video data (on a read, for example) is then split between the two physical interfaces. This is facilitated by striping the data across multiple storage units as described previously.
- the receiving node combines the video data back into a single logical stream.
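The split-and-recombine step can be sketched as round-robin distribution of striped segments over the grouped physical interfaces, with the receiver merging by segment index. The function names and two-link default are illustrative assumptions:

```python
def split_across_links(segments, n_links: int = 2):
    """Round-robin the striped segments over the grouped physical switch
    interfaces, tagging each with its index so order can be restored."""
    links = [[] for _ in range(n_links)]
    for i, seg in enumerate(segments):
        links[i % n_links].append((i, seg))
    return links

def recombine(links):
    """Receiving node merges the per-link sequences back into one
    logical stream, ordered by segment index."""
    merged = sorted((i, seg) for link in links for i, seg in link)
    return [seg for _, seg in merged]
```

Because the data is already striped across storage units, the per-link sequences arrive naturally and the merge is a simple ordered interleave.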
- the switch interface is rated at 2X MB/sec. full duplex i.e., X MB/sec. in each direction. But video data is usually sent only in one direction (from the storage node into the switch). Therefore only X MB/sec. of video bandwidth is delivered from the storage node, even though the node has twice that capability (2X).
- the storage node is under utilized.
- the switch interface of Fig. 19 dynamically allocates the entire 2X MB/sec. bandwidth to transmitting video from the storage node into the switch. The result is increased bandwidth from the node, higher bandwidth from the video server, and a lower cost per video stream.
- Digital video data is sequential, continuous, large, and time critical, rather than content critical. Streams of video data must be delivered isochronously at high bit rates, requiring all nonessential overhead to be minimized in the data path.
- the receiving hardware is a video set top box or some other suitable video data receiver.
- Standard serial communication protocols insert additional bits and bytes of data into the stream for synchronization and data verification, often at the hardware level. This corrupts the video data stream if the receiver is not able to transparently remove the additional data. The additional overhead introduced by these bits and bytes also decreases the effective data rate which creates video decompression and conversion errors.
- a serial communications chip 200 in a communications node 14 disables data formatting and integrity information, such as the parity, start and stop bits, cyclic redundancy check codes and sync bytes, and prevents idle characters from being generated.
- Input FIFO buffers 202, 204, 206, etc. are employed to insure a constant (isochronous) output video data stream while allowing bus cycles for loading of the data blocks.
- a 1000 byte FIFO buffer 208 simplifies the CPU and bus loading logic.
- If the communications output chip 200 does not allow disabling of initial synchronization (sync) byte generation, then the value of the sync byte is programmed to the value of the first byte of each data block (and the data block pointer is incremented to the second byte).
- Byte alignment must also be managed with real data, since any padding bytes will corrupt the data stream if they are not part of the actual compressed video data.
- a circular buffer or a plurality of large buffers (e.g. 202, 204, 206) must be used. This is necessary to allow sufficient time to fill an input buffer while outputting data from a previously filled buffer. Unless buffer packing is done earlier in the video data stream path, the end of video condition can result in a very small buffer that will be output before the next buffer transfer can complete resulting in a data underrun. This necessitates a minimum of three large, independent buffers.
- a circular buffer in dual mode memory is also a suitable embodiment.
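The three-buffer minimum described above can be modeled as a small ring: one buffer drains to the output while another fills, with a third in reserve so a short final buffer cannot underrun the stream. A toy sketch (class name and interface are assumptions):

```python
from collections import deque

class TripleBufferRing:
    """Three large independent buffers in a ring, giving the filler time
    to load one buffer while a previously filled buffer is being output."""
    def __init__(self, n_buffers: int = 3):
        self.free = deque(range(n_buffers))   # buffer indices ready to fill
        self.full = deque()                   # filled, waiting to play
        self.data = {}

    def fill(self, payload) -> bool:
        if not self.free:
            return False                      # all buffers already in flight
        idx = self.free.popleft()
        self.data[idx] = payload
        self.full.append(idx)
        return True

    def drain(self):
        if not self.full:
            return None                       # underrun: nothing left to play
        idx = self.full.popleft()
        payload = self.data.pop(idx)
        self.free.append(idx)
        return payload
```

A single circular buffer in dual-port memory achieves the same overlap of filling and draining; the ring of discrete buffers simply makes the minimum-of-three requirement explicit.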
- digital video data is moved from disk to buffer memory. Once enough data is in buffer memory, it is moved from memory to an interface adapter in a communications node 14.
- the interfaces used are the SCSI 20 MB/sec., fast/wide interface or the SSA serial SCSI interface.
- the SCSI interface is expanded to handle 15 addresses and the SSA architecture supports up to 256.
- Other suitable interfaces include, but are not limited to, RS422, V.35, V.36, etc.
- video data from the interface is passed from a communication node 14 across a communications bus 210 to NTSC adapter 212 (see also Fig. 20) where the data is buffered.
- Adapter 212 pulls the data from a local buffer 214, where multiple blocks of data are stored to maximize the performance of the bus.
- the key goal of adapter 212 is to maintain an isochronous flow of data from the memory 214 to MPEG chips 216, 218 and thus to NTSC chip 220 and D/A 222, to insure that there are no interruptions in the delivery of video and/or audio.
- MPEG logic modules 216, 218 convert the digital (compressed) video data into component level video and audio.
- An NTSC encoder 220 converts the signal into NTSC baseband analog signals.
- MPEG audio decoder 216 converts the digital audio into parallel digital data which is then passed through a Digital to Analog converter 222 and filtered to generate audio Left and Right outputs.
- the goal in creating a solution to the speed matching and Isochronous delivery problem is an approach that not only maximizes the bandwidth delivery of the system but also imposes the fewest performance constraints.
- bus structures, such as SSA and SCSI, interconnect processors and mechanical storage devices, such as disk files, tape files, optical storage units, etc.
- Both of these buses contain attributes that make them suitable for high bandwidth delivery of video data, provided that means are taken to control the speed and isochronous delivery of video data.
- the SCSI bus allows for the bursting of data at 20 Mbytes/sec. which minimizes the amount of time that any one video signal is being moved from buffer memory to a specific NTSC adapter.
- the adapter card 212 contains a large buffer 214 with a performance capability to burst data into memory from bus 210 at high peak rates and to remove data from buffer 214 at much lower rates for delivery to NTSC decoder chips 216, 218.
- Buffer 214 is further segmented into smaller buffers and connected via software controls to act as multiple buffers connected in a circular manner.
- An advantage of this approach is that it frees the system software to deliver blocks of video data well in advance of any requirement for the video data, and at very high delivery rates.
- This provides the media streamer 10 with the ability to manage many video streams according to dynamic throughput requirements.
- When a processor in a communications node has time, it can cause delivery of several large blocks of data that will be played in sequence. Once this is done, the processor is free to control other streams without an immediate need to deliver a slow, continuous, isochronous data flow to each port.
- a small FIFO memory 224 is inserted between the larger decoder buffer 214 and MPEG decoders 216, 218.
- the FIFO memory 224 allows controller 226 to move smaller blocks, typically 512 bytes of data, from buffer 214 to FIFO 224 which, in turn, converts the data into serial bit streams for delivery to MPEG decoders 216, 218.
- Both the audio and the video decoder chips 216, 218 can take their input from the same serial data stream, and internally separate and decode the data required.
- the transmission of data from the output of the FIFO memory 224 occurs in an isochronous manner, or substantially isochronous manner, to ensure the delivery of an uninterrupted video presentation to a user or consumer of the video presentation.
- compressed digital video data and command streams from buffer memory are converted by device level software into SCSI commands and data streams, and are transmitted over SCSI bus 210 to a target adapter 212 at SCSI II fast data rates.
- the data is then buffered and fed at the required content output rate to MPEG logic for decompression and conversion to analog video and audio data. Feedback is provided across SCSI bus 210 to pace the data flow and insure proper buffer management.
- the SCSI NTSC/PAL adapter 212 provides a high level interface to SCSI bus 210, supporting a subset of the standard SCSI protocol.
- the normal mode of operation is to open the adapter 212, write data (video and audio) streams to it, and close the adapter 212 only when completed.
- Adapter 212 pulls data as fast as necessary to keep its buffers full, with the communication nodes 14 and storage nodes 16 providing blocks of data, that are sized to optimize the bus data transfer and minimize bus overhead.
- System parameters can be overwritten via control packets using a Mode Select SCSI command if necessary.
- Video/Audio synchronization is internal to the adapter 212 and no external controls are required. Errors are minimized, with automatic resynchronization and continued audio/video output.
- a mix of direct access device and sequential device commands are used as well as standard common commands to fit the functionality of the SCSI video output adapter. As with all SCSI commands, a valid status byte is returned after every command, and the sense data area is loaded with the error conditions if a check condition is returned.
- the standard SCSI commands used include RESET, INQUIRY, REQUEST SENSE, MODE SELECT, MODE SENSE, READ, WRITE, RESERVE, RELEASE, TEST UNIT READY.
- the video control commands are user-level video output control commands, and are extensions to the standard commands listed above. They provide a simplified user level front end to the low level operating system or SCSI commands that directly interface to the SCSI video output adapter 212.
- the implementation of each command employs microcode to emulate the necessary video device function and avoid video and audio anomalies caused by invalid control states.
- a single SCSI command, the SCSI START/STOP UNIT command, is used to convey video control commands to the target SCSI video output adapter 212, with any necessary parameters passed along with the command. This simplifies both the user application interface and the adapter card 212 microcode. The following commands are employed.
- the data input into the MPEG chip set (216, 218) is halted, the audio is muted, and the video is blanked.
- the parameter field selects the stop mode.
- the normal mode is for the buffer and position pointer to remain current, so that PLAY continues at the same location in the video stream.
- a second (end of movie or abort) mode is to set the buffer pointers to the start of the next buffer and release the current buffer.
- a third mode is also for end of movie conditions, but the stop (mute and blank) is delayed until the data buffer runs empty.
- a fourth mode may be employed with certain MPEG decoder implementations to provide for a delayed stop with audio, but freeze frame for the last valid frame when the data runs out. In each of these cases, the video adapter 212 microcode determines the stopping point so that the video and audio output is halted on the proper boundary to allow a clean restart.
- the data input into the MPEG chip set (216, 218) is halted and the audio is muted, but the video is not blanked. This causes the MPEG video chip set (216, 218) to hold a freeze frame of the last good frame. The duration of the freeze frame is limited to avoid burn-in of the video tube.
- a Stop command is preferably issued by the control node 18 but the video output will automatically go to blank if no commands are received within 5 minutes.
- the adapter 212 microcode maintains the buffer positions and decoder states to allow for a smooth transition back to play.
- This command blanks the video output without impacting the audio output, mutes the audio output without impacting the video, or both. Both muting and blanking can be turned off with a single command using a Mode parameter, which allows a smoother transition and reduced command overhead. These are implemented on the video adapter 212 after decompression and conversion to analog, with hardware controls to ensure a positive, smooth transition.
- This command slows the data input rate into the MPEG chip set (216, 218), causing it to intermittently freeze frame, simulating a slow play function on a VCR.
- the audio is muted to avoid digital error noise.
- the parameter field specifies a relative speed from 0 to 100.
- An alternative implementation disables the decoder chip set (216, 218) error handling, and then modifies the data clocking speed into the decoder chip set to the desired playing speed. This is dependent on the flexibility of the video adapter's clock architecture.
- This command starts the data feed process into the MPEG chip set (216, 218), enabling the audio and video outputs.
- a buffer selection number is passed to determine which buffer to begin the playing sequence from, and a zero value indicates that the current play buffer should be used (typical operation).
- a non-zero value is only accepted if the adapter 212 is in STOPPED mode, if in PAUSED mode the buffer selection parameter is ignored and playing is resumed using the current buffer selection and position.
- the controller 226 rotates through the buffers sequentially maintaining a steady stream of data into the MPEG chip set (216, 218). Data is read from the buffer at the appropriate rate into the MPEG bus starting at address zero until N bytes are read, then the controller 226 switches to the next buffer and continues reading data.
- the adapter bus and microcode provides sufficient bandwidth for both the SCSI Fast data transfer into the adapter buffers 214, and the steady loading of the data onto the output FIFO 224 that feeds the MPEG decompression chips (216, 218).
- This command is used to scan through data in a manner that emulates fast forward on a VCR.
- There are two modes of operation that are determined by the rate parameter.
- a rate of 0 means that it is a rapid fast forward where the video and audio should be blanked and muted, the buffers flushed, and an implicit play is executed when data is received from a new position forward in the video stream.
- An integer value between 1 and 10 indicates the rate that the input stream is being forwarded.
- the video is 'sampled' by skipping over blocks of data to achieve the specified average data rate.
- the adapter 212 plays a portion of data at nearly the normal rate, jumps ahead, then plays the next portion to emulate the fast forward action.
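The sampling behavior for scan modes can be sketched as playing one block and skipping ahead so the stream advances `rate` times faster on average. This is an illustrative reading of the 1-10 rate parameter, not the adapter's microcode:

```python
def sample_for_scan(blocks, rate: int):
    """Emulate VCR fast-forward scanning: keep one block out of every
    `rate` blocks so the stream advances `rate` times faster on average."""
    if not 1 <= rate <= 10:
        raise ValueError("rate must be an integer from 1 to 10")
    return blocks[::rate]
```

Rewind scanning works the same way on the reversed block sequence, assembling samples from progressively earlier positions.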
- This command is used to scan backwards through data in a manner that emulates rewind on a VCR.
- There are two modes of operation that are determined by the rate parameter.
- a rate of 0 means that it is a rapid rewind where the video and audio should be blanked and muted, the buffers flushed, and an implicit play executed when data is received from a new position back in the video stream.
- An integer value between 1 and 10 indicates the rate that the input stream is being rewound.
- the video is 'sampled' by skipping over blocks of data to achieve the specified average data rate.
- the rewind data stream is built by assembling small blocks of data that are 'sampled' from progressively earlier positions in the video stream.
- the adapter card 212 smoothly handles the transitions and synchronization to play at the normal rate, skipping back to the next sampled portion to emulate rewind scanning.
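Both scan commands can be viewed as the same sampling computation run in opposite directions: play one block at nearly normal speed, then jump ahead (or back) far enough that the average traversal matches the requested rate. The sketch below is an illustration under stated assumptions — the block size, stride arithmetic, and direction parameter are invented for clarity and are not the adapter's actual algorithm:

```python
def sample_positions(start, block, rate, count, direction=1):
    """Return the stream offsets of the sampled portions.

    Playing one block out of every 'rate' blocks at normal speed
    traverses the material at roughly 'rate' times normal.
    direction=+1 emulates fast forward, direction=-1 emulates rewind.
    """
    positions = []
    pos = start
    for _ in range(count):
        positions.append(pos)
        pos += direction * rate * block   # jump to the next sampled portion
    return positions
```

For example, with 100-byte blocks at rate 3, fast forward plays the blocks at offsets 0, 300, 600, ... while rewind walks the same stride toward earlier positions.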
- Digital video servers provide data to many concurrent output devices, but digital video data decompression and conversion requires a constant data stream.
- Data buffering techniques are used to take advantage of the SCSI data burst mode transmission, while still avoiding data underrun or buffer overrun, allowing media streamer 10 to transmit data to many streams with minimal intervention.
- SCSI video adapter card 212 (Figs. 21, 22) includes a large buffer 214 for video data to allow full utilization of the SCSI burst mode data transfer process.
- An exemplary configuration would be one buffer 214 of 768K, handled by local logic as a wrap-around circular buffer. Circular buffers are preferred to dynamically handle varying data block sizes, rather than fixed length buffers that are inefficient in terms of both storage and management overhead when transferring digital video data.
- the video adapter card 212 microcode supports several buffer pointers, keeping the previous top of data as well as the current length and top of data. This allows a retry to overwrite a failed transmission, or a pointer to be positioned to a specific byte position within the current buffer if necessary.
- the data block length is maintained exactly as transmitted (e.g., byte or word specific even if long word alignment is used by the intermediate logic) to ensure valid data delivery to the decode chip set (216, 218). This approach minimizes the steady state operation overhead, while still allowing flexible control of the data buffers.
- multiple pointers are available for all buffer related operations. For example, one set may be used to select the PLAY buffer and current position within that buffer, and a second set to select the write buffer and a position within that buffer (typically zero) for a data preload operation. A current length and maximum length value are maintained for each block of data received since variable length data blocks are also supported.
- the buffer operation is managed by the video adapter's controller 226, placing the N bytes of data in the next available buffer space starting at address zero of that buffer. Controller 226 keeps track of the length of data in each buffer and whether that data has been "played" or not. Whenever sufficient buffer space is free, the card accepts the next WRITE command and DMAs the data into that buffer. If not enough buffer space is free to accept the full data block (typically a Slow Play or Pause condition), the WRITE is not accepted and a buffer-full return code is returned.
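A toy model of this accept/reject behaviour, tracking variable-length blocks and their played state, might be written as follows; the class, method names, and buffer-full code are assumptions for illustration only:

```python
class AdapterBuffer:
    """Toy model of the adapter's buffer management: WRITEs of
    variable-length blocks are accepted only when enough space is free,
    otherwise a buffer-full code is returned (as under Pause or Slow
    Play)."""
    BUFFER_FULL = 0x08   # illustrative return code, not from the patent

    def __init__(self, size):
        self.size = size
        self.blocks = []           # [data, played] pairs, oldest first
        self.used = 0

    def write(self, data):
        free = self.size - self.used
        if len(data) > free:       # not enough space: reject the WRITE
            return self.BUFFER_FULL
        self.blocks.append([data, False])
        self.used += len(data)
        return 0                   # GOOD

    def play_next(self):
        for blk in self.blocks:
            if not blk[1]:
                blk[1] = True      # mark this block as played
                return blk[0]
        return None

    def reclaim(self):
        """Free the space held by blocks that have already been played."""
        kept = [b for b in self.blocks if not b[1]]
        self.used = sum(len(b[0]) for b in kept)
        self.blocks = kept
```

The host simply retries a rejected WRITE later; because played blocks are reclaimed lazily, the steady-state path stays cheap.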
- a LOCATE command is used to select a 'current' write buffer and position within that buffer (typically zero) for each buffer access command (Write, Erase, etc.).
- the buffer position is relative to the start of data for the last block of data that was successfully transmitted. This is preferably done for video stream transition management, with the automatic mode reactivated as soon as possible to minimize command overhead in the system.
- Digital video data transmission has different error management requirements than the random data access usage that SCSI is normally used for in data processing applications. Minor data loss is less critical than transmission interruption, so the conventional retries and data validation schemes are modified or disabled.
- the normal SCSI error handling procedures are followed with the status byte being returned during the status phase at the completion of each command.
- the status byte indicates either a GOOD (00h) condition, a BUSY (08h) if the target SCSI chip 227 is unable to accept a command, or a CHECK CONDITION (02h) if an error has occurred.
- the controller 226 of the SCSI video adapter 212 automatically generates a Request Sense command on a Check Condition response to load the error and status information, and determines if a recovery procedure is possible.
- the normal recovery procedure is to clear the error state, discard any corrupted data, and resume normal play as quickly as possible.
- the adapter 212 may have to be reset and the data reloaded before the play can resume. Error conditions are logged and reported back to the host system with the next INQUIRY or REQUEST SENSE SCSI operation.
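The status handling described above can be sketched as a small decision routine; the recovery action names below are placeholders for illustration, not the adapter's actual microcode:

```python
# Status byte values as described above (SCSI status phase).
GOOD, CHECK_CONDITION, BUSY = 0x00, 0x02, 0x08

def handle_status(status, request_sense):
    """Map a returned status byte to a recovery action.

    On CHECK CONDITION, a Request Sense is issued to load the error
    information, and the sense data decides whether normal play can be
    resumed quickly or the adapter must be reset and data reloaded.
    """
    if status == GOOD:
        return "continue"
    if status == BUSY:
        return "retry"                    # target could not accept the command
    if status == CHECK_CONDITION:
        sense = request_sense()           # load error and status information
        if sense.get("recoverable", False):
            return "clear-and-resume"     # discard corrupted data, resume play
        return "reset-and-reload"         # adapter reset, data reloaded
    return "log-error"
```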
- retries are automated up to X retries, where X is dependent on the stream data rate. This is allowed only up to the point in time that the next data buffer arrives. At that point, an error is logged if the condition is unexpected (i.e., buffer full but not PAUSED or in SLOW PLAY mode) and a device reset or clear may be necessary to recover and continue video play.
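One plausible way to size X, assuming a retry is only worthwhile until the next data buffer arrives, is to divide the play time of one buffer by the cost of a single retry. All figures and names here are illustrative assumptions, not values from the patent:

```python
def retry_budget(stream_rate_bps, buffer_bytes, retry_cost_s=0.002):
    """Hypothetical sizing of the retry limit X.

    A buffer of 'buffer_bytes' lasts (buffer_bytes * 8) / rate seconds
    on the output stream; retries only make sense within that window,
    so the budget shrinks as the stream data rate rises.
    """
    buffer_play_time = (buffer_bytes * 8) / stream_rate_bps
    return max(1, int(buffer_play_time / retry_cost_s))
```

Under this model a faster stream gets a smaller retry budget, matching the rate dependence stated above.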
- bidirectional video adapters can be employed to receive a video presentation, to digitize the video presentation as a data representation thereof, and to transmit the data representation over the bus 210 to a communication node 14 for storage, via low latency switch 12, within a storage node or nodes 16, 17 as specified by the control node 18.
Abstract
A data storage system includes a mass storage unit storing a data entity, such as a digital representation of a video presentation, that is partitioned into a plurality N of temporally-ordered segments. A data buffer is bidirectionally coupled to the mass storage unit for storing up to M of the temporally-ordered segments, wherein M is less than N. The data buffer has an output for outputting stored ones of the temporally-ordered segments. The data storage system further includes a data buffer manager for scheduling transfers of individual ones of the temporally-ordered segments between the mass storage unit and the data buffer. The data buffer manager schedules the transfers in accordance with at least a predicted time that an individual one of the temporally-ordered segments will be required to be output from the data buffer. When employed with a media streamer (10), distributed data buffer management techniques are used for selecting blocks to be retained in a buffer memory, either in a storage node (16, 17) or in a communication node (14). These techniques rely on the predictable nature of the video data stream, and thus are enabled to predict the future requirements for a given one of the data blocks.
Description
- This invention relates to a system for the delivery of multimedia data and, more particularly, to an interactive video server system that provides video simultaneously to a plurality of terminals with minimal buffering.
- The playing of movies and video is today accomplished with rather old technology. The primary storage medium is analog tape, ranging from consumer VHS recorders/players up to the very high quality and very expensive D1 VTRs used by television studios and broadcasters. There are many problems with this technology. A few such problems include: the manual labour required to load the tapes; the wear and tear on the mechanical units, tape heads, and the tape itself; and the expense. One significant limitation that troubles Broadcast Stations is that the VTRs can only perform one function at a time, sequentially. Each tape unit costs from $75,000 to $150,000.
- TV stations want to increase their revenues from commercials, which are nothing more than short movies, by inserting special commercials into their standard programs and thereby targeting each city as a separate market. This is a difficult task with tape technology, even with the very expensive Digital D1 tape systems or tape robots.
- Traditional methods of delivery of multimedia data to end users fall into two categories: 1) broadcast industry methods and 2) computer industry methods. Broadcast methods (including motion picture, cable, television network, and record industries) generally provide storage in the form of analog or digitally recorded tape. The playing of tapes causes isochronous data streams to be generated which are then moved through broadcast industry equipment to the end user. Computer methods generally provide storage in the form of disks, or disks augmented with tape, and record data in compressed digital formats such as DVI, JPEG and MPEG. On request, computers deliver non-isochronous data streams to the end user, where hardware buffers and special application code smooths the data streams to enable continuous viewing or listening.
- Video tape subsystems have traditionally exhibited a cost advantage over computer disk subsystems due to the cost of the storage media. However, video tape subsystems have the disadvantages of tape management, access latency, and relatively low reliability. These disadvantages are increasingly significant as computer storage costs have dropped, in combination with the advent of the real-time digital compression/decompression techniques.
- Though computer subsystems have exhibited compounding cost/performance improvements, they are not generally considered to be "video friendly". Computers interface primarily to workstations and other computer terminals with interfaces and protocols that are termed "non-isochronous". To assure smooth (isochronous) delivery of multimedia data to the end user, computer systems require special application code and large buffers to overcome inherent weaknesses in their traditional communication methods. Also, computers are not video friendly in that they lack compatible interfaces to equipment in the multimedia industry which handle isochronous data streams and switch among them with a high degree of accuracy.
- With the introduction of the use of computers to compress and store video material in digital format, a revolution has begun in several major industries such as television broadcasting, movie studio production, "Video on Demand" over telephone lines, pay-per-view movies in hotels, etc. Compression technology has progressed to the point where acceptable results can be achieved with compression ratios of 100x to 180x. Such compression ratios make random access disk technology an attractive alternative to prior art tape systems.
- With an ability to randomly access digital disk data and the very high bandwidth of disk systems, the required system function and performance is within the performance, hardware cost, and expandability of disk technology. In the past, the use of disk files to store video or movies was never really a consideration because of the cost of storage. That cost has seen significant reductions in the recent past.
- For the many new emerging markets that utilize compressed video data, using MPEG standards, there are several ways in which video data can be stored in a cost effective manner. This invention provides a hierarchical solution to many different performance requirements and results in a modular systems approach that can be customized to meet market requirements.
- The invention provides a "video friendly" computer subsystem which enables isochronous data stream delivery in a multimedia environment over traditional interfaces for that industry. A media streamer in accordance with the invention is optimized for the delivery of isochronous data streams and can stream data into new computer networks with ATM (Asynchronous Transfer Mode) technology. This invention eliminates the disadvantages of video tape while providing a VTR (video tape recorder) metaphor for system control. The system of this invention provides the following features: scalability to deliver from 1 to 1000's of independently controlled data streams to end users; an ability to deliver many isochronous data streams from a single copy of data; mixed output interfaces; mixed data rates; a simple "open system" control interface; automation control support; storage hierarchy support; and low cost per delivered stream.
- In accordance with an aspect of this invention a data storage system includes a mass storage unit storing a data entity, such as a digital representation of a video presentation, that is partitioned into a plurality N of temporally-ordered segments. A data buffer is bidirectionally coupled to the mass storage unit for storing up to M of the temporally-ordered segments, wherein M is less than N. The data buffer has an output for outputting stored ones of the temporally-ordered segments. The data storage system further includes a data buffer manager for scheduling transfers of individual ones of the temporally-ordered segments between the mass storage unit and the data buffer. The data buffer manager schedules the transfers in accordance with at least a predicted time that an individual one of the temporally-ordered segments will be required to be output from the data buffer.
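A minimal sketch of such predictive scheduling, assuming each pending request carries a predicted output time, is to order fetches by deadline and start each transfer a fixed lead time before its segment is needed. The lead time and all names below are invented for illustration:

```python
import heapq

def schedule_transfers(requests, lead_time=2.0):
    """Toy 'predicted time of use' scheduler.

    requests: list of (predicted_output_time, segment_id) pairs.
    Returns (fetch_time, segment_id) pairs in earliest-deadline-first
    order, where each segment is fetched 'lead_time' seconds before
    it must be output from the data buffer.
    """
    heap = [(t - lead_time, seg) for t, seg in requests]
    heapq.heapify(heap)
    return [heapq.heappop(heap) for _ in range(len(heap))]

plan = schedule_transfers([(10.0, "seg3"), (4.0, "seg1"), (7.0, "seg2")])
```

Because video playback is sequential and predictable, these deadlines can be computed far in advance, which is what allows the buffer to hold only M of the N segments.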
- Further in accordance with this invention there is provided a media streamer having at least one storage node for storing a digital representation of at least one video presentation. The at least one video presentation requires a time T to present in its entirety, and is stored as a plurality of N data blocks. Each data block is a T/N portion of the at least one video presentation. The at least one storage node includes a first data buffer for buffering at least one of the N data blocks. The media streamer further includes a plurality of communication nodes each having an input port that is coupled via a circuit switch to an output of the first data buffer for sequentially receiving a plurality of the N data blocks therefrom. The sequentially received N data blocks are associated with a same video presentation or with different video presentations. Each of the plurality of communication nodes further have a plurality of output ports, wherein individual ones of the plurality of output ports output a digital representation of one video presentation. Individual ones of the plurality of communication nodes further include a second data buffer for buffering at least one of the N data blocks prior to outputting the at least one of the N data blocks. The media streamer further includes at least one control node responsive to a first operating condition for causing transfer of one of the N data blocks from the first data buffer to an output port of a first communication node and also to an output port of a second communication node, the at least one control node being further responsive to a second operating condition for causing transfer of one of the N data blocks from the first data buffer to the second data buffer of one of the communication nodes, and for causing transfer of the one of the N data blocks from the second data buffer to a plurality of the output ports of the one of the communication nodes.
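The T/N segmentation lends itself to a simple round-robin placement of data blocks across storage nodes, so that any one stream draws only a fraction of each node's bandwidth. The following sketch is illustrative only; the node names and the mapping function are assumptions, not the patent's placement algorithm:

```python
def stripe_map(num_segments, storage_nodes):
    """Illustrative round-robin placement of the N temporally-ordered
    segments across the available storage nodes. Sequential playback
    then cycles through the nodes, spreading the load."""
    return {seg: storage_nodes[seg % len(storage_nodes)]
            for seg in range(num_segments)}

placement = stripe_map(6, ["node16a", "node16b", "node16c"])
```

With this layout, many communication nodes can multiplex reads of the same single copy, since consecutive segment requests land on different storage nodes.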
- Embodiments are disclosed of presently preferred distributed data buffer management techniques for selecting blocks to be retained in a buffer memory, either in a storage node or in a communication node. These techniques rely on the predictable nature of the video data stream, and thus are enabled to predict the future requirements for a given one of the data blocks.
- The invention will now be described, by way of example only, with reference to the accompanying drawings, in which:
- Fig. 1 is a block diagram of a media streamer incorporating the invention hereof;
- Fig. 1A is a block diagram which illustrates further details of a circuit switch shown in Fig. 1;
- Fig. 1B is a block diagram which illustrates further details of a tape storage node shown in Fig. 1;
- Fig. 1C is a block diagram which illustrates further details of a disk storage node shown in Fig. 1;
- Fig. 1D is a block diagram which illustrates further details of a communication node shown in Fig. 1;
- Fig. 2 illustrates a list of video stream output control commands which are executed at high priority and a further list of data management commands which are executed at lower priority;
- Fig. 3 is a block diagram illustrating communication node data flow;
- Fig. 4 is a block diagram illustrating disk storage node data flow;
- Fig. 5 illustrates control message flow to enable a connect to be accomplished;
- Fig. 6 illustrates control message flow to enable a play to occur;
- Fig. 7 illustrates interfaces which exist between the media streamer and client control systems;
- Fig. 8 illustrates a display panel showing a plurality of "soft" keys used to operate the media streamer;
- Fig. 9 illustrates a load selection panel that is displayed upon selection of the load soft key on Fig. 8;
- Fig. 10 illustrates a batch selection panel that is displayed when the batch key in Fig. 8 is selected;
- Fig. 11 illustrates several client/server relationships which exist between a client control system and the media streamer;
- Fig. 12 illustrates a prior art technique for accessing video data and feeding it to one or more output ports;
- Fig. 13 is a block diagram indicating how plural video ports can access a single video segment contained in a communications node cache memory;
- Fig. 14 is a block diagram illustrating how plural video ports have direct access to a video segment contained in cache memory on the disk storage node;
- Fig. 15 illustrates a memory allocation scheme employed by the invention hereof;
- Fig. 16 illustrates a segmented logical file for a video 1;
- Fig. 17 illustrates how the various segments of video 1 are striped across a plurality of disk drives;
- Fig. 18 illustrates a prior art switch interface between a storage node and a cross bar switch;
- Fig. 19 illustrates how the prior art switch interface shown in Fig. 18 is modified to provide extended output bandwidth for a storage node;
- Fig. 20 is a block diagram illustrating a procedure for assuring constant video output to a video output bus;
- Fig. 21 illustrates a block diagram of a video adapter used in converting digital video data to analog video data; and
- Fig. 22 is a block diagram showing control modules that enable SCSI bus commands to be employed to control the video adapter card of Fig. 21.
- In the following description, a number of terms are used that are described below:
- AAL-5
- ATM ADAPTATION LAYER-5: Refers to a class of ATM service suitable for data transmission.
- ATM
- ASYNCHRONOUS TRANSFER MODE: A high speed switching and transport technology that can be used in a local or wide area network, or both. It is designed to carry both data and video/audio.
- Betacam
- A professional quality analog video format.
- CCIR 601
- A standard resolution for digital television. 720 x 486 (for NTSC) or 720 x 576 (for PAL) luminance, with chrominance subsampled 2:1 horizontally.
- CPU
- CENTRAL PROCESSING UNIT: In computer architecture, the main entity that processes computer instructions.
- CRC
- CYCLIC REDUNDANCY CHECK. A data error detection scheme.
- D1
- Digital Video recording format conforming to CCIR 601. Records on 19mm video tape.
- D2
- Digital video recording format conforming to SMPTE 244M. Records on 19mm video tape.
- D3
- Digital Video recording format conforming to SMPTE 244M. Records on 1/2'' video tape.
- DASD
- DIRECT ACCESS STORAGE DEVICE: Any on-line data storage device or CD-ROM player that can be addressed is a DASD. Used synonymously with magnetic disk drive.
- DMA
- DIRECT MEMORY ACCESS: A method of moving data in a computer architecture that does not require the CPU to move the data.
- DVI
- A relatively low quality digital video compression format usually used to play video from CD-ROM disks to computer screens.
- E1
- European equivalent of T1, with a bit rate of 2.048 Mb/sec.
- FIFO
- FIRST IN FIRST OUT: Queue handling method that operates on a first-come, first-served basis.
- GenLock
- Refers to a process of synchronization to another video signal. It is required in computer capture of video to synchronize the digitizing process with the scanning parameters of the video signal.
- I/O
- INPUT/OUTPUT
- Isochronous
- Used to describe information that is time sensitive and that is sent (preferably) without interruptions. Video and audio data sent in real time are isochronous.
- JPEG
- JOINT PHOTOGRAPHIC EXPERT GROUP: A working committee under the auspices of the International Standards Organization that is defining a proposed universal standard for digital compression of still images for use in computer systems.
- KB
- KILO BYTES: 1024 bytes.
- LAN
- LOCAL AREA NETWORK: High-speed transmission over twisted pair, coax, or fibre optic cables that connect terminals, computers and peripherals together at distances of about a mile or less.
- LRU
- LEAST RECENTLY USED
- MPEG
- MOVING PICTURE EXPERTS GROUP: A working committee under the auspices of the International Standards Organization that is defining standards for the digital compression/decompression of motion video/audio. MPEG-1 is the initial standard and is in use. MPEG-2 will be the next standard and will support digital, flexible, scalable video transport. It will cover multiple resolutions, bit rates and delivery mechanisms.
- MPEG-1, MPEG-2
- See MPEG
- MRU
- MOST RECENTLY USED
- MTNU
- MOST TIME TO NEXT USE
- NTSC format
- NATIONAL TELEVISION STANDARDS COMMITTEE: The colour television format that is the standard in the United States and Japan.
- PAL format
- PHASE ALTERNATION LINE: The colour television format that is the standard for Europe except for France.
- PC
- PERSONAL COMPUTER: A relatively low cost computer that can be used for home or business.
- RAID
- REDUNDANT ARRAY of INEXPENSIVE DISKS: A storage arrangement that uses several magnetic or optical disks working in tandem to increase bandwidth output and to provide redundant backup.
- SCSI
- SMALL COMPUTER SYSTEM INTERFACE: An industry standard for connecting peripheral devices and their controllers to a computer.
- SIF
- SOURCE INPUT FORMAT: One quarter the CCIR 601 resolution.
- SMPTE
- SOCIETY OF MOTION PICTURE & TELEVISION ENGINEERS.
- SSA
- SERIAL STORAGE ARCHITECTURE: A standard for connecting peripheral devices and their controllers to computers. A possible replacement for SCSI.
- T1
- Digital interface into the telephone network with a bit rate of 1.544 Mb/sec.
- TCP/IP
- TRANSMISSION CONTROL PROTOCOL/INTERNET PROGRAM: A set of protocols developed by the Department of Defense to link dissimilar computers across networks.
- VHS
- VIDEO HOME SYSTEM: A common format for recording analog video on magnetic tape.
- VTR
- VIDEO TAPE RECORDER: A device for recording video on magnetic tape.
- VCR
- VIDEO CASSETTE RECORDER: Same as VTR.
- A video optimized stream server system 10 (hereafter referred to as media streamer) is shown in Fig. 1 and includes four architecturally distinct components to provide scalability, high availability and configuration flexibility. The major components follow:
- 1) Low Latency Switch 12: a hardware/microcode component with a primary task of delivering data and control information between Communication Nodes 14, one or more Storage Nodes 16, 17, and one or more Control Nodes 18.
- 2) Communication Node 14: a hardware/microcode component with the primary task of enabling the "playing" (delivering data isochronously) or "recording" (receiving data isochronously) over an externally defined interface usually familiar to the broadcast industry: NTSC, PAL, D1, D2, etc. The digital-to-video interface is embodied in a video card contained in a plurality of video ports 15 connected at the output of each communication node 14.
- 3) Storage Node 16, 17: a hardware/microcode component with the primary task of managing a storage medium such as disk and associated storage availability options.
- 4) Control Node 18: a hardware/microcode component with the primary task of receiving and executing control commands from an externally defined subsystem interface familiar to the computer industry.
- A typical 64-node media streamer implementation might contain 31 communication nodes, 31 storage nodes, and 2 control nodes interconnected with the low latency switch 12. A smaller system might contain no switch and a single hardware node that supports communications, storage and control functions. The design of media streamer 10 allows a small system to grow to a large system in the customer installation. In all configurations, the functional capability of media streamer 10 can remain the same except for the number of streams delivered and the number of multimedia hours stored.
- In Fig. 1A, further details of low latency switch 12 are shown. A plurality of circuit switch chips (not shown) are interconnected on crossbar switch cards 20, which are interconnected via a planar board (schematically shown). The planar board and a single card 20 constitute a low latency crossbar switch with 16 node ports. Additional cards 20 may be added to configure additional node ports and, if desired, active redundant node ports for high availability. Each port of the low latency switch 12 provides, for example, a 25 megabyte per second, full duplex communication channel.
- Information is transferred through the switch 12 in packets. Each packet contains a header portion that controls the switching state of individual crossbar switch points in each of the switch chips. The control node 18 provides the other nodes (storage nodes 16, 17 and communication nodes 14) with the header information needed to route packets through the low latency switch 12.
- In Fig. 1B, internal details of a tape storage node 17 are illustrated. As will be hereafter understood, tape storage node 17 provides a high capacity storage facility for storage of digital representations of video presentations.
- Referring again to Fig. 1B a
tape storage node 17 includes a tapelibrary controller interface 24 which enables access to multiple tape records contained in atape library 26. Afurther interface 28 enables access to other tape libraries via an SCSI bus interconnection. Aninternal system memory 30 enables a buffering of video data received from either ofinterfaces path 32.System memory block 30 may be a portion of aPC 34 which includessoftware 36 for tape library and file management actions. A switch interface and buffer module 38 (used also indisk storage nodes 16,communication nodes 14, and control nodes 18) enables interconnection between thetape storage node 17 andlow latency switch 12. That is, themodule 38 is responsible for partitioning a data transfer into packets and adding the header portion to each packet that theswitch 12 employs to route the packet. When receiving a packet from theswitch 12 themodule 38 is responsible for stripping off the header portion before locally buffering or otherwise handling the received data. - Video data from
tape library 26 is entered intosystem memory 30 in a first buffering action. Next, in response to initial direction fromcontrol node 18, the video data is routed throughlow latency switch 12 to adisk storage node 16 to be made ready for substantially immediate access when needed. - In Fig. 1C, internal details of a
disk storage node 16 are shown. Eachdisk storage node 16 includes a switch interface andbuffer module 40 which enables data to be transferred from/to a RAID buffer video cache andstorage interface module 42.Interface 42 passes received video data onto a plurality ofdisks 45, spreading the data across the disks in a quasi-RAID fashion. Details of RAID memory storage are known in the prior art and are described in "A Case for Redundant Arrays of Inexpensive Disks (RAID)", Patterson et al., ACM SIGMOD Conference, Chicago, IL, June 1-3, 1988 pages 109-116. - A
disk storage node 16 further has aninternal PC 44 which includessoftware modules disks 45. In essence, eachdisk storage node 16 provides a more immediate level of video data availability than atape storage node 17. Eachdisk storage node 16 further is enabled to buffer (in a cache manner) video data in a semiconductor memory of switch interface andbuffer module 40 so as to provide even faster availability of video data, upon receiving a request therefor. - In general, a storage node includes a mass storage unit (or an interface to a mass storage unit) and a capability to locally buffer data read from or to be written to the mass storage unit. The storage node may include sequential access mass storage in the form of one or more tape drives and/or disk drives, and may include random access storage, such as one or more disk drives accessed in a random access fashion and/or semiconductor memory.
- In Fig. 1D, a block diagram is shown of internal components of a
communications node 14. Similar to each of the above noted nodes,communication node 14 includes a switch interface andbuffer module 50 which enables communications withlow latency switch 12 as described previously. Video data is directly transferred between switch interface andbuffer module 50 to a stream buffer and communication interface 52 for transfer to a user terminal (not shown). APC 54 includessoftware modules additional input 60 to stream buffer and communication interface 52 enables frame synchronization of output data. That data is received fromautomation control equipment 62 which is, in turn, controlled by asystem controller 64 that exerts overall operational control of the stream server 10 (see Fig. 1).System controller 64 responds to inputs from user control settop boxes 65 to cause commands to be generated that enablemedia streamer 10 to access a requested video presentation.System controller 64 is further provided with a user interface anddisplay facility 66 which enables a user to input commands, such as by hard or soft buttons, and other data to enable an identification of video presentations, the scheduling of video presentations, and control over the playing of a video presentation. - Each
control node 18 is configured as a PC and includes a switch interface module for interfacing withlow latency switch 12. Eachcontrol node 18 responds to inputs fromsystem controller 64 to provide information to thecommunication nodes 14 andstorage nodes low latency switch 12. Furthermore,control node 18 includes software for enabling staging of requested video data from one or more ofdisk storage nodes 16 and the delivery of the video data, via a stream delivery interface, to a user display terminal.Control node 18 further controls the operation of both tape anddisk storage nodes low latency switch 12. - The media streamer has three architected external interfaces, shown in Fig. 1. The external interfaces are:
- 1) Control Interface: an open system interface executing TCP/IP protocol (Ethernet LAN, TokenRing LAN, serial port, modem, etc.)
- 2) Stream Delivery Interface: one of several industry standard interfaces designed for the delivery of data streams (NTSC, D1, etc.).
- 3) Automation Control Interface: a collection of industry standard control interfaces for precise synchronization of stream outputs (GenLock, BlackBurst, SMPTE clock, etc.)
- Application commands are issued to
media streamer 10 over the control interface. When data load commands are issued, the control node breaks the incoming data file into segments (i.e. data blocks) and spreads it across one or more storage nodes. Material density and the number of simultaneous users of the data affect the placement of the data onstorage nodes - When commands are issued over the control interface to start the streaming of data to an end user,
control node 18 selects and activates anappropriate communication node 14 and passes control information indicating to it the location of the data file segments on thestorage nodes communications node 14 activates thestorage nodes low latency switch 12, to begin the movement of data. - Data is moved between
disk storage nodes 16 and communication nodes 14 via low latency switch 12 and "just in time" scheduling algorithms. The technique used for scheduling and data flow control is more fully described below. The data stream that is emitted from a communication node 14 interface is multiplexed to/from disk storage nodes 16 so that a single communication node stream uses a fraction of the capacity and bandwidth of each disk storage node 16. In this way, many communication nodes 14 may multiplex access to the same or different data on the disk storage nodes 16. For example, media streamer 10 can provide 1500 individually controlled end user streams from the pool of communication nodes 14, each of which is multiplexing accesses to a single multimedia file spread across the disk storage nodes 16. This capability is termed "single copy multiple stream".
- The commands that are received over the control interface are executed in two distinct categories. Those which manage data and do not relate directly to stream control are executed at "low priority". This enables an application to load new data into the
media streamer 10 without interfering with the delivery of data streams to end users. The commands that affect stream delivery (i.e. output) are executed at "high priority". - The control interface commands are shown in Fig. 2. The low priority data management commands for loading and managing data in
media streamer 10 include VS-CREATE, VS-OPEN, VS-READ, VS-WRITE, VS-GET_POSITION, VS-SET_POSITION, VS-CLOSE, VS-RENAME, VS-DELETE, VS-GET_ATTRIBUTES, and VS-GET_NAMES.
- The high priority stream control commands for starting and managing stream outputs include VS-CONNECT, VS-PLAY, VS-RECORD, VS-SEEK, VS-PAUSE, VS-STOP and VS-DISCONNECT.
Control node 18 monitors stream control commands to assure that requests can be executed. This "admission control" facility in control node 18 may reject requests to start streams when the capabilities of media streamer 10 are exceeded. This may occur in several circumstances:
- 1) when some component in the system fails, preventing maximal operation;
- 2) when a specified number of simultaneous streams to a data file (as specified by parameters of a VS-CREATE command) is exceeded; and
- 3) when a specified number of simultaneous streams from the system, as specified by an installation configuration, is exceeded.
- The
communication nodes 14 are managed as a heterogeneous group, each with a potentially different bandwidth (stream) capability and physical definition. The VS-CONNECT command directs media streamer 10 to allocate a communication node 14 and some or all of its associated bandwidth enabling isochronous data stream delivery. For example, media streamer 10 can play uncompressed data stream(s) through communication node(s) 14 at 270 MBits/Sec while simultaneously playing compressed data stream(s) at much lower data rates (usually 1-16 Mbits/Sec) on other communication nodes 14.
-
Storage nodes are similarly managed; commands direct media streamer 10 to allocate storage in one or more storage nodes.
- Three additional commands support automation control systems in the broadcast industry: VS-CONNECT-LIST, VS-PLAY-AT-SIGNAL and VS-RECORD-AT-SIGNAL. VS-CONNECT-LIST allows applications to specify a sequence of play commands in a single command to the subsystem.
Media streamer 10 will execute each play command as if it were issued over the control interface but will transition between the delivery of one stream and the next seamlessly. An example sequence follows: - 1)
Control node 18 receives a VS-CONNECT-LIST command with play subcommands indicating that all or part of FILE1, FILE2 and FILE3 are to be played in sequence. Control node 18 determines the maximum data rate of the files and allocates that resource on a communication node 14. The allocated communication node 14 is given the detailed play list and initiates the delivery of the isochronous stream.
- 2) Near the end of the delivery of FILE1, the
communication node 14 initiates the delivery of FILE2 but does not yet enable it at the output port of the node. When FILE1 completes or a signal from the Automation Control Interface occurs, the communication node 14 switches the output port from the first stream to the second. This is done within 1/30th of a second, or within one standard video frame time.
- 3) The
communication node 14 deallocates resources associated with FILE1.
- VS-PLAY-AT-SIGNAL and VS-RECORD-AT-SIGNAL allow signals from the external Automation Control Interface to enable data transfer for play and record operations with accuracy to a video frame boundary. In the previous example, the VS-CONNECT-LIST includes a VS-PLAY-AT-SIGNAL subcommand to enable the transition from FILE1 to FILE2 based on the external automation control interface signal. If the subcommand were VS-PLAY instead, the transition would occur only when the FILE1 transfer was completed.
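The switching rule just described can be reduced to a small decision function. The sketch below is our own illustration (the function name and parameters are invented), not code from the patent:

```python
# Sketch of the output-port switching rule. With a VS-PLAY-AT-SIGNAL
# subcommand, the pre-started stream is switched in when an automation-control
# signal arrives or the current file completes; with a plain VS-PLAY
# subcommand, the switch happens only when the current file completes.

def should_switch_output(current_file_done: bool,
                         signal_received: bool,
                         play_at_signal: bool) -> bool:
    """True when the output port should switch to the pre-started stream."""
    if play_at_signal:
        return signal_received or current_file_done
    return current_file_done
```

In either case the next stream is already primed, so the switch itself can complete within one video frame time.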
- Other commands that
media streamer 10 executes provide the ability to manage storage hierarchies. These commands are: VS-DUMP, VS-RESTORE, VS-SEND, VS-RECEIVE and VS-RECEIVE_AND_PLAY. Each causes one or more multimedia files to move between storage nodes 16 and two externally defined hierarchical entities.
- 1) VS-DUMP and VS-RESTORE enable movement of data between
disk storage nodes 16, and a tape storage unit 17 accessible to control node 18. Data movement may be initiated by the controlling application or automatically by control node 18.
- 2) VS-SEND and VS-RECEIVE provide a method for transmitting a multimedia file to another media streamer. Optionally, the receiving media streamer can play the incoming file immediately to a preallocated communication node without waiting for the entire file.
- In addition to the modular design and function set defined in the media streamer architecture, data flow is optimized for isochronous data transfer to significantly reduce cost. In particular:
- 1) the bandwidth of the low latency switch exceeds that of the attached nodes, so communications between nodes are nearly non-blocking;
- 2) data movement into processor memory is avoided, so more bandwidth is available;
- 3) processing of data is avoided, so expensive processing units are eliminated; and
- 4) data movement is carefully scheduled, so that large data caches are avoided.
- In traditional computer terms,
media streamer 10 functions as a system of interconnected adapters with an ability to perform peer-to-peer data movement between themselves through the low latency switch 12. The low latency switch 12 has access to data storage and moves data segments from one adapter's memory to that of another without "host computer" intervention.
-
Media streamer 10 provides hierarchical storage elements. It exhibits a design that allows scalability from a very small video system to a very large system. It also provides flexibility for storage management to adapt to the varied requirements necessary to satisfy functions of Video on Demand, Near Video on Demand, commercial insertion, and high quality uncompressed video storage, capture and playback.
- In
media streamer 10, video presentations are moved from high performance digital tape to disk, to be played out at the much lower data rate required by the end user. In this way, only a minimum amount of video time is stored on the disk subsystem. If the system is "Near Video on Demand", then only, for example, 5 minutes of each movie need be in disk storage at any one time. This requires only 24 segments of 5 minutes each for a typical 2 hour movie. The result is that the total disk storage requirement for a video presentation is reduced, since not all of the video presentation is kept on the disk file at any one time. Only that portion of the presentation that is being played need be present in the disk file.
- In other words, if a video presentation requires a time T to present in its entirety, and is stored as a digital representation having N data blocks, then each data block stores a portion of the video presentation that corresponds to approximately a T/N period of the video presentation. A last data block of the N data blocks may store less than a T/N period.
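The storage arithmetic above can be checked with a short sketch. This is our own illustration; the 2-hour duration and 5-minute segment size are the example figures from the text:

```python
import math

def segments_for(duration_sec: float, segment_sec: float) -> int:
    """Number of fixed-length segments needed to hold a whole presentation."""
    return math.ceil(duration_sec / segment_sec)

def period_per_block(duration_sec: float, n_blocks: int) -> float:
    """Playback time covered by each of N equal data blocks (the T/N period)."""
    return duration_sec / n_blocks

# A typical 2-hour movie divided into 5-minute segments:
print(segments_for(2 * 3600, 5 * 60))   # 24 segments
print(period_per_block(2 * 3600, 24))   # 300.0 seconds of playback per block
```

Only the handful of blocks around the current play point need be resident on disk at any moment; the rest can remain on tape.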
- As demand on the system grows and the number of streams increases, the statistical average is that about 25% of video stream requests will be for the same movie, but at different sub-second time intervals, and the distribution of viewers will be such that more than 50% of those sub-second demands will fall within a group of 15 movie segments.
- An aspect of this invention is the utilization of the most appropriate technology that will satisfy this demand. A random access cartridge loader (such as produced by the IBM Corporation) is a digital tape system that has high storage capacity per tape, mechanical robotic loading of 100 tapes per drawer, and up to 2 tape drives per drawer. The result is an effective tape library for movie-on-demand systems. However, the invention also enables very low cost digital tape storage library systems to provide the mass storage of the movies, and further enables low demand movies to be played directly from tape to speed-matching buffers and then on to video decompression and distribution channels.
- A second advantage of combining hierarchical tape storage to any video system is that it provides rapid backup to any movie that is stored on disk, in the event that a disk becomes inoperative. A typical system will maintain a "spare" disk such that if one disk unit fails, then movies can be reloaded from tape. This would typically be combined with a RAID or a RAID-like system.
- When demand for video streams increases to a higher level, it becomes more efficient to store an entire movie on disk and save the system performance overhead required to continually move video data from tape to disk. A typical system will still contain a library of movies that are stored on tape, since the usual number of movies in the library is 10x to 100x greater than the number that will be playing at any one time. When a user requests a specific movie, segments of it are loaded to a
disk storage node 16 and started from there. - When there are large numbers of users wanting to see the same movie, it is beneficial to keep the movie on disk. These movies are typically the "Hot" movies of the current week and are pre-loaded from tape to disk prior to peak viewing hours. This tends to reduce the work load on the system during peak hours.
- As demand for "hot" movies grows,
media streamer 10, through an MRU-based algorithm, decides to move key movies up into cache. This requires substantial cache memory, but in terms of the ratio of cost to the number of active streams, the high volume that can be supported out of cache lowers the total cost of the media streamer 10.
- Because of the nature of video data, and the fact that the system always knows in advance what videos are playing and what data will be required next, and for how long, methods are employed to optimize the use of cache, internal buffers, disk storage, the tape loader, bus performance, etc.
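A much-simplified sketch of such a demand-driven promotion decision follows. This is entirely our own illustration; the patent does not specify this algorithm, and the threshold rule is an assumption:

```python
# Promote a title from disk to cache when enough simultaneous streams are
# playing it that serving those streams from cache memory is cheaper per
# stream than repeatedly reading the same blocks from disk.

def titles_to_cache(active_streams: dict[str, int], threshold: int) -> list[str]:
    """Titles whose current demand justifies holding them in cache."""
    return sorted(title for title, n in active_streams.items() if n >= threshold)

print(titles_to_cache({"hot_movie": 120, "old_movie": 2}, 20))  # ['hot_movie']
```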
- Algorithms that control the placement and distribution of the content across all of the storage media enable delivery of isochronous data to a wide spectrum of bandwidth requirements. Because the delivery of isochronous data is substantially 100% predictable, the algorithms are very much different from the traditional ones used for other segments of the computer industry where caching of user-accessed data is not always predictable.
- As indicated above,
media streamer 10 delivers video streams to various outputs such as TV sets and set top boxes attached via a network, such as a LAN, ATM, etc. To meet the requirements for storage capacity and the number of simultaneous streams, a distributed architecture consisting of multiple storage and communication nodes is preferred. The data is stored on storage nodes. A communication node 14 obtains the data from appropriate storage nodes, and control node 18 provides a single system image to the external world. The nodes are connected by the cross-connect, low latency switch 12.
- Data flow between the
storage nodes and communication nodes 14 can be set up in a number of different ways.
- A
communication node 14 is generally responsible for delivering multiple streams. It may have requests outstanding for data for each of these streams, and the required data may come from different storage nodes. Buffering is required to manage contention for the communication node 14 among different storage nodes.
- The amount of required buffering can be determined as follows: the
communication node 14 determines the mean time required to send a request to the storage node and to receive the requested data in return.
- Contention by the
storage nodes for a communication node 14 is eliminated by employing the following two criteria:
- 1) A
storage node sends data to a communication node 14 only on receipt of a specific request.
- 2) A given
communication node 14 serializes all requests for data to be read from storage nodes so that only one request from the communication node 14 for receiving data is outstanding at any time, independent of the number of streams the communication node 14 is delivering.
- As was noted above, the reduction of latency relies on a just-in-time scheduling algorithm at every stage. The basic principle is that at every stage in the data flow for a stream, the data is available when the request for that data arrives. This reduces latency to the time needed for sending the request and performing any data transfer. Thus, when the
communication node 14 sends a request to the storage node 16 for data for a specific stream, the storage node 16 can respond to the request almost immediately. This characteristic is important to the solution to the contention problem described above.
- Since, in the media streamer environment, access to data is sequential and the data rate for a stream is predictable, a
storage node 16 can anticipate when a next request for data for a specific stream can be expected. The identity of the data to be supplied in response to the request is also known. The storage node 16 also knows where the data is stored and the expected requests for the other streams. Given this information and the expected time to process a read request from a disk, the storage node 16 schedules a read operation so that the data is available just before the request from the communication node 14 arrives. For example, if the stream data rate is 250KB/sec, and a storage node 16 contains every 4th segment of a video, requests for data for that stream will arrive every 4 seconds. If the time to process a read request is 500 msec (with the requisite degree of confidence that the read request will complete in 500 msec) then the request is scheduled for at least 500 msec before the anticipated receipt of the request from the communication node 14.
- The
control node 18 function is to provide an interface between media streamer 10 and the external world for control flow. It also presents a single system image to the external world even if the media streamer 10 is itself implemented as a distributed system. The control node functions are implemented by a defined Application Program Interface (API). The API provides functions for creating the video content in media streamer 10 as well as for real-time functions such as playing/recording of video data. The control node 18 forwards real-time requests to play or stop the video to the communication nodes 14.
- A
communication node 14 has the following threads (in the same process) dedicated to handle a real time video interface: a thread to handle connect/disconnect requests, a thread to handle play/stop and pause/resume requests, and a thread to handle a jump request (seek forward or seek backward). In addition it has an input thread that reads data for a stream from the storage nodes 16 and an output thread that writes data to the output ports.
- A data flow structure in a
communication node 14 for handling data during the playing of a video is depicted in Fig. 3. The data flow structure includes an input thread 100 that obtains data from a storage node 16. The input thread 100 serializes receipt of data from storage nodes so that only one storage node is sending data at any one time. The input thread 100 ensures that when an output thread 102 needs to write out a buffer for a stream, the buffer is already filled with data. In addition, there is a scheduler function 104 that schedules both the input and output operations for the streams. This function is used by both the input and output threads.
- Each thread works off a queue of requests. The
request queue 106 for the output thread 102 contains requests that identify the stream and that point to an associated buffer that needs to be emptied. These requests are arranged in order by the time at which they need to be written to the video output interface. When the output thread 102 empties a buffer, it marks it as empty and invokes the scheduler function 104 to queue the request in an input queue 108 for the stream to the input thread (for the buffer to be filled). The queue 108 for the input thread 100 is also arranged in order by the time at which buffers need to be filled.
-
Input thread 100 also works off the request queue 108 arranged by request time. Its task is to fill the buffer from a storage node 16. For each request in its queue, the input thread 100 takes the following actions. The input thread 100 determines the storage node 16 that has the next segment of data for the stream (the data for a video stream is preferably striped across a number of storage nodes). The input thread 100 then sends a request to the determined storage node (using messages through switch 12) requesting data for the stream, and then waits for the data to arrive.
- This protocol ensures that only one
storage node 16 will be sending data to a particular communications node 14 at any time, i.e., it removes the conflict that may arise if the storage nodes were to send data asynchronously to a communications node 14. When the requested data is received from the storage node 16, the input thread 100 marks the buffer as full and invokes the scheduler 104 to queue a request (based on the stream's data rate) to the output thread 102 to empty the buffer.
- The structure of the
storage node 16 for data flow to support the playing of a stream is depicted in Fig. 4. The storage node 16 has a pool of buffers that contain video data. It has an input thread 110 for each of the logical disk drives and an output thread 112 that writes data out to the communications nodes 14 via the switch matrix 12. It also has a scheduler function 114 that is used by the input and output threads, and a message thread 116 that processes requests from communications nodes 14 requesting data.
- When a message is received from a
communications node 14 requesting data, the message thread 116 will normally find the requested data already buffered, and queues the request (queue 118) to the output thread. The requests are queued in time order. The output thread 112 will empty the buffer and add it to the list of free buffers. Each of the input threads 110 has its own request queue. For each of the active streams that have video data on the associated disk drive, a queue 120 ordered by request time (based on the data rate, level of striping, etc.) to fill the next buffer is maintained. The thread takes the first request in queue 120, associates a free buffer with it and issues an I/O request to fill the buffer with the data from the disk drive. When the buffer is filled, it is added to the list of full buffers. This is the list that is checked by the message thread 116 when the request for data for the stream is received. When a message for data is received from a communication node 14 and the required buffer is not full, it is considered to be a missed deadline.
- A just-in-time scheduling technique is used in both the
communications nodes 14 and thestorage nodes 16. The technique employs the following parameters: - bc
- = buffer size at the
communications node 14; - bs
- = buffer size at the
storage node 16; - r
- = video stream data rate;
- n
- = number of stripes of video containing the data for the video stream;
- sr
- = stripe data rate; and
- sr
- = r/n.
- The algorithm used is as follows:
- (1) sfc = frequency of requests at the communications node for a stream = r/bc; and
- (2) dfc = frequency of disk read requests at the Storage Node = sr/bs.
- The "striping" of video data is described in detail below in section H.
- The requests are scheduled at a frequency determined by the expressions given above, and are scheduled so that they complete in advance of when the data is needed. This is accomplished by "priming" the data pipe with data at the start of playing a video stream.
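The two frequencies can be computed directly from the definitions above. This sketch is our own; it also evaluates the formulas for the worked example given below (a 250,000 bytes/sec stream striped over four storage nodes, with 250,000-byte buffers):

```python
def sfc(r: float, bc: float) -> float:
    """Frequency of buffer requests at the communication node: r / bc."""
    return r / bc

def dfc(r: float, n: int, bs: float) -> float:
    """Frequency of disk reads at each storage node: sr / bs, with sr = r / n."""
    return (r / n) / bs

r = 250_000                      # stream data rate, bytes/sec
print(sfc(r, bc=250_000))        # 1.0 request/sec at the communication node
print(dfc(r, n=4, bs=250_000))   # 0.25 reads/sec, i.e. one disk read per 4 s
print(sfc(r, bc=50_000))         # 5.0 requests/sec with a 50,000-byte buffer
```

Note that dfc follows definition (2) above (sr/bs, not r/bs): with the stream striped four ways, each storage node performs one 250,000-byte read every four seconds.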
- Calculations of sfc and dfc are made at connect time, in both the
communication node 14 playing the stream and the storage nodes 16 containing the video data. The frequency (or its inverse, the interval) is used in scheduling input from disk in the storage node 16 (see Fig. 4) and in scheduling the output to the port (and input from the storage nodes) in the communication node 14 (see Fig. 3).
- Play a stream at 2.0 Mbits/sec (250,000 bytes/sec.) from a video striped on four storage nodes. Also assume that the buffer size at the communication node is 250,000 bytes and the buffer size at the disk node is 250,000 bytes. Also, assume that the data is striped in segments of 250,000 bytes.
- The values for the various parameters in the Just-In-Time algorithm are as follows:
- bc = 250,000 bytes (buffer size at the communication node 14);
- bs = 250,000 bytes (buffer size at the storage node 16);
- r = 250,000 bytes/sec (stream data rate);
- n = 4 (number of stripes that video for the stream is striped over);
- sr = r/n = 62,500 bytes/sec., i.e. 250,000 bytes every four seconds;
- sfc = r/bc = 1/sec. (frequency of requests at the communication node 14); and
- dfc = sr/bs = 0.25/sec. (frequency of disk reads at each storage node 16, i.e. one 250,000-byte read every four seconds).
- The
communication node 14 responsible for playing the stream will schedule input and output requests at the frequency of 1/sec., or at intervals of 1.0 seconds. Assuming that the communication node 14 has two buffers dedicated for the stream, the communication node 14 ensures that it has both buffers filled before it starts outputting the video stream.
- At connect time the
communication node 14 will have sent messages to all four storage nodes 16 containing a stripe of the video data. The first two of the storage nodes will anticipate the requests for the first segment from the stripes and will schedule disk requests to fill the buffers. The communication node 14 will schedule input requests (see Fig. 3) to read the first two segments into two buffers, each of size 250,000 bytes. When a play request comes, the communication node 14 will first ensure that the two buffers are full, and then informs all storage nodes 16 that play is about to commence. It then starts playing the stream. When the first buffer has been output (which at 2 Mbits/sec. (or 250,000 bytes/sec.) will take one second), the communication node 14 requests data from a storage node 16. The communication node 14 then requests data from each of the storage nodes, in sequence, at intervals of one second, i.e. it will request data from a specific storage node at intervals of four seconds. It always requests 250,000 bytes of data at a time. The calculations for the frequency at which a communication node requests data from the storage nodes 16 are done by the communication node 14 at connect time.
- The
storage nodes 16 anticipate the requests for the stream data as follows. The storage node 16 containing stripe 3 (see section H below) can expect a request for the next 250,000 byte segment one second after the play has commenced, and every four seconds thereafter. The storage node 16 containing stripe 4 can expect a request two seconds after the play has commenced and every four seconds thereafter. The storage node 16 containing stripe 2 can expect a request four seconds after play has commenced and every four seconds thereafter. That is, each storage node 16 schedules the input from disk at a frequency of 250,000 bytes every four seconds from some starting time (as described above). The scheduling is accomplished in the storage node 16 after receipt of the play command and after a buffer for the stream has been output. The calculation of the request frequency is done at the time the connect request is received.
- It is also possible to use different buffer sizes at the
communication node 14 and the storage node 16. For example, the buffer size at the communication node 14 may be 50,000 bytes and the buffer size at the storage node 16 may be 250,000 bytes. In this case, the frequency of requests at the communication node 14 will be (250,000/50,000) 5/sec., or every 0.2 seconds, while the frequency of disk reads at the storage node 16 will remain unchanged (one 250,000-byte read every four seconds). The communication node 14 reads the first two buffers (100,000 bytes) from the storage node containing the first stripe (note that the segment size is 250,000 bytes and the storage node 16 containing the first segment will schedule the input from disk at connect time). When play commences, the communication node 14 informs the storage nodes 16 of same and outputs the first buffer. When the buffer empties, the communication node 14 schedules the next input. The buffers will empty every 0.2 seconds and the communication node 14 requests input from the storage nodes 16 at that frequency, and also schedules output at the same frequency.
- In this example,
storage nodes 16 can anticipate five requests to arrive at intervals of 0.2 seconds (except for the first segment, where 100,000 bytes have already been read, so initially only three requests will come after commencement of play; i.e., the next sequence of five requests (each for 50,000 bytes) will arrive four seconds after the last request of the previous sequence). Since the buffer size at the storage node is 250,000 bytes, the storage nodes 16 will schedule the input from disk every four seconds (just as in the example above).
- The following steps trace the control and data flow for the playing action of a stream. The steps are depicted in Figure 5 for setting up a video for play. The steps are in time order.
- 1. The user invokes a command to setup a port with a specific video that has been previously loaded. The request is sent to the
control node 18. - 2. A thread in the
control node 18 receives the request and invokes a VS-CONNECT function.
- 3. The control node thread opens a catalog entry for the video, and sets up a memory descriptor for the video with the striped file information.
- 4. The
control node 18 allocates a communication node 14 and an output port on that node for the request.
- 5. Then control
node 18 sends a message to the allocated communication node 14.
- 6. A thread in the
communication node 14 receives the message from the control node 18.
- 7. The communication node thread sends an open request to the
storage nodes 16 containing the stripe files.
- 8,9. A thread in each
storage node 16 that the open request is sent to receives the request, opens the requested stripe file, and allocates any needed resources, as well as scheduling input from disk (if the stripe file contains the first few segments).
- 10. The storage node thread sends a response back to the
communication node 14 with the handle (identifier) for the stripe file. - 11. The thread in the
communication node 14 waits on responses from all of the storage nodes involved and, on receiving successful responses, allocates resources for the stream, including setting up the output port.
- 12. The
communication node 14 then schedules input to prime the video data pipeline. - 13. The
communication node 14 then sends a response back to the control node 18.
- 14. The control node thread on receipt of a successful response from the
communication node 14 returns a handle for the stream to the user to be used in subsequent requests related to this instance of the stream.
- The following are the steps in time order for the actions that are taken on receipt of the play request after a video stream has been successfully set up. The steps are depicted in Fig. 6.
- 1. The user invokes the play command.
- 2. A thread in the
control node 18 receives the request. - 3. The thread in the
control node 18 verifies that the request is for a stream that is set up, and then sends a play request to the allocated communication node 14.
- 4. A thread in the
communication node 14 receives the play request. - 5. The
communication node 14 sends the play request to all of the involved storage nodes 16 so that they can schedule their own operations in anticipation of subsequent requests for this stream. An "involved" storage node is one that stores at least one stripe of the video presentation of interest.
- 6. A thread in each
involved storage node 16 receives the request and sets up schedules for servicing future requests for the stream. Each involved storage node 16 sends a response back to the communication node 14.
- 7. The communication node thread ensures that the pipeline is primed (preloaded with video data) and enables the stream for output.
- 8. The
communication node 14 then sends a response back to the control node 18.
- 9. The
control node 18 sends a response back to the user that the stream is playing. - The input and output threads continue to deliver the video presentation to the specified port until a stop/pause command is received or the video completes.
-
Media streamer 10 is a passive server, which performs video server operations when it receives control commands from an external control system. Figure 7 shows a system configuration for media streamer 10 applications and illustrates the interfaces present in the system.
-
Media streamer 10 provides two levels of interfaces for users and application programs to control its operations:
a user interface ((A) in Fig. 7); and
an application program interface ((B) in Fig. 7). - Both levels of interface are provided on client control systems, which communicate with the
media streamer 10 through a remote procedure call (RPC) mechanism. By providing the interfaces on the client control systems, instead of on the media streamer 10, the separation of application software from media streamer 10 is achieved. This facilitates upgrading or replacing the media streamer 10, since it does not require changing or replacing the application software on the client control system.
-
Media streamer 10 provides two types of user interfaces:
a command line interface; and
a graphical user interface. - The command line interface displays a prompt on the user console or interface (65,66 of Fig. 1). After the command prompt, the user enters a command, starting with a command keyword followed by parameters. After the command is executed, the interface displays a prompt again and waits for the next command input. The media streamer command line interface is especially suitable for the following two types of operations:
- Batch control involves starting execution of a command script that contains a series of video control commands. For example, in the broadcast industry, a command script can be prepared in advance to include pre-recorded, scheduled programs for an extended period of time. At the scheduled start time, the command script is executed by a single batch command to start broadcasting without further operator intervention.
- Automatic control involves executing a list of commands generated by a program to update/play materials stored on
media streamer 10. For example, a news agency may load new materials into the media streamer 10 every day. An application control program that manages the new materials can generate media streamer commands (for example, Load, Delete, Unload) to update the media streamer 10 with the new materials. The generated commands may be piped to the command line interface for execution.
- Fig. 8 is an example of the media streamer graphical user interface. The interface resembles the control panel of a video cassette recorder, which has control buttons such as Play, Pause, Rewind, and Stop. In addition, it also provides selection panels when an operation involves a selection by the user (for example, load requires the user to select a video presentation to be loaded). The graphical user interface is especially useful for direct user interactions.
- A "Batch"
button 130 and an "Import/Export" button 132 are included in the graphical user interface. Their functions are described below. -
Media streamer 10 provides three general types of user functions:
Import/Export;
VCR-like play controls; and
Advanced user controls. - Import/Export functions are used to move video data into and out of the
media streamer 10. When a video is moved into media streamer 10 (Import) from the client control system, the source of the video data is specified as a file or a device of the client control system. The target of the video data is specified with a unique name within media streamer 10. When a video is moved out of media streamer 10 (Export) to the client control system, the source of the video data is specified by its name within media streamer 10, and the target of the video data is specified as a file or a device of the client control system. - In the Import/Export category of user functions,
media streamer 10 also provides a "delete" function to remove a video and a "get attributes" function to obtain information about stored videos (such as name, data rate). - To invoke Import/Export functions through the graphical user interface, the user clicks on the "Import/Export" soft button 132 (Fig. 8). This brings up a new panel (not shown) that contains "Import", "Export", "Delete", "Get Attribute" buttons to invoke the individual functions.
-
Media streamer 10 provides a set of VCR-like play controls. The media streamer graphical user interface in Fig. 8 shows that the following functions are available: Load, Eject, Play, Slow, Pause, Stop, Rewind, Fast Forward and Mute. These functions are activated by clicking on the corresponding soft buttons on the graphical user interface. The media streamer command line interface provides a similar set of functions: - Setup - sets up a video for a specific output port. Analogous to loading a video cassette into a VCR.
- Play - initiates playing a video that has been set up or resumes playing a video that has been paused.
- Pause - pauses playing a video.
- Detach - analogous to ejecting a video cassette from a VCR.
- Status - displays the status of ports, such as which video is playing, elapsed playing time, etc.
- In order to support specific application requirements, such as the broadcasting industry, the present invention provides several advanced user controls:
Play list - set up multiple videos and their sequence to be played on a port
Play length - limit the time a video will be played
Batch operation - perform a list of operations stored in a command file. - The Play list and Play length controls are accomplished with a "Load"
button 134 on the graphical user interface. Each "setup" command will specify a video to be added to the Play list for a specific port. It also specifies a time limit for which the video will be played. Fig. 9 shows the panel which appears in response to clicking on the "load" soft button 134 on the graphical user interface to select a video to be added to the play list and to specify the time limit for playing the video. When the user clicks on a file name in the "Files" box 136, the name is entered into "File Name" box 138. When the user clicks on the "Add" button 140, the file name in "File Name" box 138 is appended, with its time limit, to the "Play List" box 142, which displays the current play list (with the time limit of each video on the play list). - The batch operation is accomplished by using a "Batch"
soft button 130 on the graphical user interface (see Fig. 8). - When the "Batch"
button 130 is activated, a batch selection panel is displayed for the user to select or enter the command file name (see Fig. 10). Pressing an "Execute" button 144 on the batch selection panel starts the execution of the commands in the selected command file. Fig. 10 is an example of the "Batch" and "Execute" operation on the graphical user interface. For example, the user has first created a command script in a file "batch2" in the c:/batchcmd directory. The user then clicks on "Batch" button 130 on the graphical user interface shown in Fig. 8 to bring up the Batch Selection panel. Next, the user clicks on "c:/batchcmd" in "Directory" box 146 of the Batch Selection panel. This results in the display of a list of files in "Files" box 148. Clicking on the "batch2" line in "Files" box 148 enters it into the "File Name" box 150. Finally, the user clicks on the "Execute" button 144 to execute in sequence the commands stored in the "batch2" file. -
Media streamer 10 provides the above-mentioned Application Program Interface (API) so that application control programs can interact with media streamer 10 and control its operations (reference may be made again to Fig. 7). - The API consists of remote procedure call (RPC)-based procedures. Application control programs invoke the API functions by making procedure calls. The parameters of the procedure call specify the functions to be performed. The application control programs invoke the API functions without regard to the logical and physical location of
media streamer 10. The identity of a media streamer 10 to provide the video services is established at either the client control system startup time or, optionally, at the application control program initiation time. Once the identity of media streamer 10 is established, the procedure calls are directed to the correct media streamer 10 for servicing. - Except as indicated below, API functions are processed synchronously, i.e., once a function call is returned to the caller, the function is completed and no additional processing at
media streamer 10 is needed. By configuring the API functions as synchronous operations, additional processing overheads for context switching, asynchronous signalling and feedbacks are avoided. This performance consideration is important in video server applications due to the stringent real-time requirements. - The processing of API functions is performed in the order that requests are received. This ensures that user operations are processed in the correct order. For example, a video must be connected (setup) before it can be played. As another example, switching the order of a "Play" request followed by a "Pause" request will have a completely different result for the user.
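The order-sensitivity described above can be illustrated with a small sketch. The operation names and the connection check are simplified assumptions, not the media streamer's actual dispatch code:

```python
# Illustrative sketch (not the patent's implementation) of strictly
# in-order request processing: a Play for a port is valid only after a
# Setup (connect) for that port, so reordering requests changes results.
def process(requests):
    connected = set()
    results = []
    for op, port in requests:            # handled strictly in arrival order
        if op == "setup":
            connected.add(port)
            results.append((port, "connected"))
        elif op == "play":
            if port in connected:
                results.append((port, "playing"))
            else:
                results.append((port, "error: not connected"))
    return results

print(process([("setup", 1), ("play", 1), ("play", 2)]))
# [(1, 'connected'), (1, 'playing'), (2, 'error: not connected')]
```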
- A VS-PLAY function initiates the playing of the video and returns the control to the caller immediately (without waiting until the completion of the video play). The rationale for this architecture is that the time for playing a video is typically long (minutes to hours) and unpredictable (there may be pause or stop commands); making the VS-PLAY function asynchronous frees up the resources that would otherwise be allocated for an unpredictably long period of time.
- At completion of video play,
media streamer 10 generates an asynchronous call to a system/port address specified by the application control program to notify the application control program of the video completion event. The system/port address is specified by the application control program when it calls the API VS-CONNECT function to connect the video. It should be noted that the callback system/port address for VS-PLAY is specified at the individual video level. That means the application control programs have the freedom of directing video completion messages to any control point. For example, one application may desire the use of one central system/port to process the video completion messages for many or all of the client control systems. In another application, several different system/port addresses may be employed to process the video completion messages for one client control system. - With the API architecture,
media streamer 10 is enabled to support multiple concurrent client control systems with heterogeneous hardware and software platforms, with efficient processing of both synchronous and asynchronous types of operations, while ensuring the correct sequencing of the operation requests. For example, the media streamer 10 may use an IBM OS/2 operating system running on a PS/2 system, while a client control system may use an IBM AIX operating system running on an RS/6000 system (IBM, OS/2, PS/2, AIX, and RS/6000 are all trademarks of the International Business Machines Corporation). - Communications between a client control system and the
media streamer 10 is accomplished through, by example, a known type of Remote Procedure Call (RPC) facility. Fig. 11 shows the RPC structure for the communications between a client control system 11 and the media streamer 10. In calling media streamer functions, the client control system 11 functions as the RPC client and the media streamer 10 functions as the RPC server. This is indicated at (A) in Fig. 11. However, for an asynchronous function, i.e., VS-PLAY, its completion causes media streamer 10 to generate a call to the client control system 11. In this case, the client control system 11 functions as the RPC server, while media streamer 10 is the RPC client. This is indicated at (B) in Fig. 11. - In the
client control system 11, the user command line interface is comprised of three internal parallel processes (threads). A first process parses a user command line input and performs the requested operation by invoking the API functions, which result in RPC calls to the media streamer 10 ((A) in Fig. 11). This process also keeps track of the status of videos being set up and played for various output ports. A second process periodically checks the elapsed playing time of each video against its specified time limit. If a video has reached its time limit, the video is stopped and disconnected and the next video in the wait queue (if any) for the same output port is started. A third process in the client control system 11 functions as an RPC server to receive the VS-PLAY asynchronous termination notification from the media streamer 10 ((B) in Fig. 11). - During startup of
media streamer 10, two parallel processes (threads) are invoked in order to support the RPCs between the client control system(s) 11 and media streamer 10. A first process functions as an RPC server for the API function calls coming from the client control system 11 ((A) in Fig. 11). The first process receives the RPC calls and dispatches the appropriate procedures to perform the requested functions (such as VS-CONNECT, VS-PLAY, VS-DISCONNECT). A second process functions as an RPC client for calling the appropriate client control system addresses to notify the application control programs with asynchronous termination events. The process blocks itself waiting on an internal pipe, which is written by other processes that handle the playing of videos. When the latter reaches the end of a video or an abnormal termination condition, it writes a message to the pipe. The blocked process reads the message and makes an RPC call ((B) in Fig. 11) to the appropriate client control system 11 port address so that the client control system can update its status and take actions accordingly. - An aspect of this invention provides integrated mechanisms for tailoring cache management and related I/O operations to the video delivery environment. This aspect of the invention is now described in detail.
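The startup pattern described above, one set of threads playing videos and writing completion messages to an internal pipe while a blocked thread forwards each message to the client, can be sketched as follows. A `queue.Queue` stands in for the pipe, and `notify_client` is a hypothetical stand-in for the RPC call at (B) in Fig. 11; none of the names are from the patent:

```python
# Minimal sketch of the notification pipe: play processes write
# end-of-video events to a pipe; a second thread blocks on the pipe and
# forwards each event as a callback to the client control system.
import queue
import threading

pipe = queue.Queue()
delivered = []

def notify_client(event):
    # stand-in for the RPC call to the client control system port address
    delivered.append(event)

def notifier():
    # blocks on the pipe, like the second media streamer process
    while True:
        event = pipe.get()
        if event is None:          # shutdown sentinel for this sketch
            break
        notify_client(event)

t = threading.Thread(target=notifier)
t.start()
pipe.put(("VS-PLAY complete", "video-1"))   # written by a play process
pipe.put(None)
t.join()
print(delivered)   # [('VS-PLAY complete', 'video-1')]
```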
- Prior art mechanisms for cache management are built into cache controllers and the file subsystems of operating systems. They are designed for general purpose use, and are not specialized to meet the needs of video delivery.
- Fig. 12 illustrates one possible way in which a conventional cache management mechanism may be configured for video delivery. This technique employs a video split between two
disk files 160, 162 (because it is too large for one file), and a processor containing a file system 164, a media server 168, and a video driver 170. Also illustrated are two video adapter ports, the data flow required to read a segment Sk of the video from disk file 160 into main storage and to subsequently write the data to a first video port 172, and also the data flow to read the same segment and write it to a second video port 174. Fig. 12 is used to illustrate problems incurred by the prior art which are addressed and overcome by the media streamer 10 of this invention. - Description of steps A1-A12 in Fig. 12.
- A1.
Media server 168 calls file system 166 to read segment Sk into a buffer in video driver 170. - A2.
File system 166 reads a part of Sk into a cache buffer in file system 166. - A3.
File system 166 copies the cache buffer into a buffer in video driver 170. - Steps A2 and A3 are repeated multiple times.
- A4.
File system 166 calls video driver 170 to write Sk to video port 1 (176). - A5.
Video driver 170 copies part of Sk to a buffer in video driver 170. - A6.
Video driver 170 writes the buffer to video port 1 (176). - Steps A5 and A6 are repeated multiple times.
- Steps A7-A12 function in a similar manner, except that
port 1 is changed to port 2. If a part of Sk is in the cache in file system 166 when needed for port 2, then step A8 may be skipped. - As can be realized, video delivery involves massive amounts of data being transferred over multiple data streams. The overall usage pattern fits neither of the two traditional patterns used to optimize caching: random and sequential. If the random option is selected, most cache buffers will probably contain data from video segments which have been recently read, but will have no video stream in line to read them before they have expired. If the sequential option is chosen, the most recently used cache buffers are re-used first, so there is even less chance of finding the needed segment part in the file system cache. As was described previously, an important element of video delivery is that the data stream be delivered isochronously, that is, without breaks and interruptions that a viewer or user would find objectionable. Prior art caching mechanisms, as just shown, cannot ensure the isochronous delivery of a video data stream to a user.
- Additional problems illustrated by Fig. 12 are:
- a. Disk and video port I/O is done in relatively small segments to satisfy general file system requirements. This requires more processing time, disk seek overhead, and bus overhead than would be required by video segment size segments.
- b. The processing time to copy data between the file system cache buffers and media server buffers, and between media server buffers and video driver buffers, is an undesirable overhead that should be eliminated.
- c. Using two video buffers (i.e. 172, 174) to contain copies of the same video segment at the same time is an inefficient use of main memory. There is even more waste when the same data is stored in the file system cache and also in the video driver buffers.
- There are three principal facets of the cache management operation in accordance with this aspect of the invention: sharing segment size cache buffers across streams; predictive caching; and synchronizing to optimize caching.
- Videos are stored and managed in fixed size segments. The segments are sequentially numbered so that, for example,
segment 5 would store a portion of a video presentation that is nearer to the beginning of the presentation than would a segment numbered 6. The segment size is chosen to optimize disk I/O, video I/O, bus usage and processor usage. A segment of a video has a fixed content, which depends only on the video name, and the segment number. All I/O to disk and to the video output, and all caching operations, are done aligned on segment boundaries. - This aspect of the invention takes two forms, depending on whether the underlying hardware supports peer-to-peer operations with data flow directly between disk and video output card in a
communications node 14, without passing through cache memory in the communications node. For peer-to-peer operations, caching is done at the disk storage unit 16. For hardware which does not support peer-to-peer operations, data is read directly into page-aligned, contiguous cache memory (in a communications node 14) in segment-sized blocks to minimize I/O operations and data movement. (See F. Video Optimized Digital Memory Allocation, below). - The data remains in the same location and is written directly from this location until the video segment is no longer needed. While the video segment is cached, all video streams needing to output the video segment access the same cache buffer. Thus, a single copy of the video segment is used by many users, and the additional I/O, processor, and buffer memory usage to read additional copies of the same video segment is avoided. For peer-to-peer operations, half of the remaining I/O and almost all of the processor and main memory usage are avoided at the
communication nodes 14. - Fig. 13 illustrates an embodiment of the invention for the case of a system without peer-to-peer operations. The video data is striped on the
disk storage nodes 16 so that odd numbered segments are on first disk storage node 180 and even numbered segments are on second disk storage node 182 (see Section H below). - The data flow for this configuration is also illustrated in Fig. 13. As can be seen, segment Sk is to be read from
disk 182 into a cache buffer 184 in communication node 186, and is then to be written to video output ports. The segment Sk is read into cache buffer 184 with one I/O operation, and is then written to port 1. Next the Sk video data segment is written from cache buffer 184 to port 2 with one I/O operation. - As can be realized, all of the problems described for the conventional approach of Fig. 12 are overcome by the system illustrated in Fig. 13. - Fig. 14 illustrates the data flow for a configuration containing support for peer-to-peer operations between a disk storage node and a video output card. A pair of disk drives are coupled to video ports of a communication node 14. - The data flow for this configuration is to read segment Sk from
disk 192 directly to port 1 (with one I/O operation) via disk cache buffer 198. - If a call follows to read segment Sk to
port 2, segment Sk is read directly from disk cache buffer 198 into port 2 (with one I/O operation). - When the data read into the
disk cache buffer 198 for port 1 is still resident for the write to port 2, a best possible use of memory, bus, and processor resources results in the transfer of the video segment to the ports. - It is possible to combine the peer-to-peer and main memory caching mechanisms, e.g., using peer-to-peer operations for video presentations which are playing to only one port of a
communication node 14, and caching in the communications node 14 for video presentations which are playing to multiple ports of the communication node 14. - A policy for dividing the caching responsibility between disk storage nodes and the communication node is chosen to maximize the number of video streams which can be supported with a given hardware configuration. If the number of streams to be supported is known, then the amount and placement of caching storage can be determined.
- A predictive caching mechanism meets the need for a caching policy well suited to video delivery. Video presentations are in general very predictable. Typically, they start playing at the beginning, play at a fixed rate for a fairly lengthy predetermined period, and stop only when the end is reached. The caching approach of the
media streamer 10 takes advantage of this predictability to optimize the set of video segments which are cached at any one time. - The predictability is used both to schedule a read operation to fill a cache buffer, and to drive the algorithm for reclaiming cache buffers. Buffers whose contents are not predicted to be used before they would expire are reclaimed immediately, freeing the space for higher priority use. Buffers whose contents are in line for use within a reasonable time are not reclaimed, even if their last use was long ago. - More particularly, given videos v1, v2,..., and streams s1, s2,... playing these videos, each stream sj plays one video, v(sj), and the time predicted for writing the k-th segment of v(sj) is a linear function: t(sj,k) = a(sj) + k x r(sj),
where a(sj) depends on the start time and starting segment number, r(sj) is the constant time it takes to play a segment, and t(sj,k) is the scheduled time to play the k-th segment of stream sj. - This information is used both to schedule a read operation to fill a cache buffer, and to drive the algorithm for re-using cache buffers. Some examples of the operation of the cache management algorithm follow:
- A cache buffer containing a video segment which is not predicted to be played by any of the currently playing video streams is re-used before re-using any buffers which are predicted to be played. After satisfying this constraint, the frequency of playing the video and the segment number are used as weights to determine a priority for keeping the video segment cached. The highest retention priority within this group is assigned to video segments that occur early in a frequently played video.
- For a cache buffer containing a video segment which is predicted to be played, the next predicted play time and the number of streams left to play the video segment are used as weights to determine the priority for keeping the video segment cached. The weights essentially allow the retention priority of a cache buffer to be set to the difference between the predicted number of I/Os (for any video segment) with the cache buffer reclaimed, and the predicted number with it retained. For example, if
v5 is playing on s7,
v8 is playing on s2 and s3, with s2 running 5 seconds behind s3, and
v4 is playing on streams s12 to s20 with each stream 30 seconds behind the next,
then:
buffers containing v5 data already used by s7 are reclaimed first, followed by buffers containing v8 data already used by s2, followed by buffers containing v4 data already used by s12, followed by remaining buffers with the lowest retention priority. - The cache management algorithm provides variations for special cases such as connection operations (where it is possible to predict that a video segment will be played in the near future, but not exactly when) and stop operations (when previous predictions must be revised).
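The predictive schedule and the reclaim ordering described above can be sketched as follows. This is a minimal illustration, not the patent's algorithm: the real retention priority also weights play frequency, segment number, and the number of streams still due to play a segment.

```python
# Sketch of the linear schedule: each stream plays one video at a
# constant per-segment time r, so segment k is due at t = a + k * r,
# where a depends on the start time and starting segment number.
def schedule_time(a, r, k):
    """Predicted play time of segment k for a stream with offset a, rate r."""
    return a + k * r

# Example: a stream that started at t=100 s, playing one segment per 2 s.
print(schedule_time(100, 2, 0))    # 100
print(schedule_time(100, 2, 30))   # 160

# Simplified reclaim ordering: buffers whose segments no stream will play
# again are reclaimed before buffers with a predicted future use; among
# the latter, the buffer whose next use is farthest away goes first.
def reclaim_order(buffers, next_use):
    """buffers: segment ids; next_use(seg) -> next play time, or None."""
    dead = [b for b in buffers if next_use(b) is None]
    live = sorted((b for b in buffers if next_use(b) is not None),
                  key=next_use, reverse=True)
    return dead + live
```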
- It is desirable to cluster all streams that require a given video segment, to minimize the time that the cache buffer containing that segment must remain in storage and thus leave more of the system capacity available for other video streams. For video playing, there is usually little flexibility in the rate at which segments are played. However, in some application of video delivery the rate of playing is flexible (that is, video and audio may be accelerated or decelerated slightly without evoking adverse human reactions). Moreover, videos may be delivered for purposes other than immediate human viewing. When a variation in rate is allowed, the streams out in front (timewise) are played at the minimum allowable rate and those in back (timewise) at a maximum allowable rate in order to close the gap between the streams and reduce the time that segments must remain buffered.
- The clustering of streams using a same video presentation is also taken into account during connection and play operations. For example, VS-PLAY-AT-SIGNAL can be used to start playing a video on multiple streams at the same time. This improves clustering, leaving more system resources for other video streams, enhancing the effective capacity of the system. More specifically, clustering, by delaying one stream for a short period so that it coincides in time with a second stream, enables one copy of segments in cache to be used for both streams and thus conserves processing assets.
- Digital video data has attributes unlike those of normal data processing data in that it is non-random, that is sequential, large, and time critical rather than content critical. Multiple streams of data must be delivered at high bit rates, requiring all nonessential overhead to be minimized in the data path. Careful buffer management is required to maximize the efficiency and capacity of the
media streamer 10. Memory allocation, deallocation, and access are key elements in this process, and improper usage can result in memory fragmentation, decreased efficiency, and delayed or corrupted video data. - The
media streamer 10 of this invention employs a memory allocation procedure which allows high level applications to allocate and deallocate non-swappable, page aligned, contiguous memory segments (blocks) for digital video data. The procedure provides a simple, high level interface to video transmission applications and utilizes low level operating system modules and code segments to allocate memory blocks in the requested size. The memory blocks are contiguous and fixed in physical memory, eliminating the delays or corruption possible from virtual memory swapping or paging, and the complexity of having to implement gather/scatter routines in the data transmission software. - The high level interface also returns a variety of addressing mode values for the requested memory block, eliminating the need to do costly dynamic address conversion to fit the various memory models that can be operating concurrently in a media streamer environment. The physical address is available for direct access by other device drivers, such as a fixed disk device, as well as the process linear and process segmented addresses that are used by various applications. A deallocation routine is also provided that returns a memory block to the system, eliminating fragmentation problems since the memory is all returned as a single block.
- Allocate the requested size memory block, a control block is returned with the various memory model addresses of the memory area, along with the length of the block.
- Return the memory block to the operating system and free the associated memory pointers.
- A device driver is defined in the system configuration files and is automatically initialized as the system starts. An application then opens the device driver as a pseudo device to obtain its label, then uses the interface to pass the commands and parameters. The supported commands are Allocate Memory and Deallocate Memory, the parameters are memory size and pointers to the logical memory addresses. These addresses are set by the device driver once the physical block of memory has been allocated and the physical address is converted to logical addresses. A null is returned if the allocation fails.
- Fig. 15 shows a typical set of applications that would use this procedure.
Buffer 1 is requested by a 32-bit application for data that is modified and then placed into buffer 2. This buffer can then be directly manipulated by a 16 bit application using a segmented address, or by a physical device such as a fixed disk drive. By using this allocation scheme to preallocate the fixed, physical, and contiguous buffers, each application is enabled to use its native direct addressing to access the data, eliminating the address translation and dynamic memory allocation delays. A video application may use this approach to minimize data movement by placing the digital video data in the buffer directly from the physical disk, then transferring it directly to the output device without moving it several times in the process.
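The allocate/deallocate interface described above can be sketched with a toy model. This is only an illustration of the whole-block contract: the control block fields, the flat "physical memory" model, and the bump-style allocation are assumptions, not the patent's OS/2 driver:

```python
# Toy sketch of the allocation interface: Allocate returns a control
# block describing one contiguous, fixed block, and Deallocate returns
# the whole block at once, so no fragmentation accumulates.
class VideoBufferPool:
    def __init__(self, size):
        self.size = size
        self.blocks = {}        # start address -> length
        self.next_free = 0      # simple bump allocation over one region

    def allocate(self, length):
        if self.next_free + length > self.size:
            return None         # mirrors the "null on failure" behaviour
        start = self.next_free
        self.next_free += length
        self.blocks[start] = length
        # control block; a real driver would carry one entry per
        # addressing mode (physical, process linear, process segmented)
        return {"physical": start, "length": length}

    def deallocate(self, ctl):
        # the whole block is returned as a single piece
        del self.blocks[ctl["physical"]]

pool = VideoBufferPool(1024)
ctl = pool.allocate(256)
print(ctl)        # {'physical': 0, 'length': 256}
pool.deallocate(ctl)
```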
- The
media streamer 10 must also be capable of efficiently sustaining high data transfer rates. A disk drive configured for general purpose data storage and retrieval will suffer inefficiencies in the use of memory, disk buffers, SCSI bus and disk capacity if not optimized for the video server application. - In accordance with an aspect of the invention, disk drives employed herewith are tailored for the role of smooth and timely delivery of large amounts of data by optimizing disk parameters. The parameters may be incorporated into the manufacture of disk drives specialized for video servers, or they may be variables that can be set through a command mechanism.
- Parameters controlling periodic actions are set to minimize or eliminate delays. Parameters affecting buffer usage are set to allow for transfer of very large amounts of data in a single read or write operation. Parameters affecting speed matching between a SCSI bus and a processor bus are tuned so that data transfer starts neither too soon nor too late. The disk media itself is formatted with a sector size that maximizes effective capacity and band-width.
- To accomplish optimization:
- The physical disk media is formatted with a maximum allowable physical sector size. This formatting option minimizes the amount of space wasted in gaps between sectors, maximizes device capacity, and maximizes the burst data rate. A preferred implementation is 744 byte sectors.
- Disks may have an associated buffer. This buffer is used for reading data from the disk media asynchronously from availability of the bus for the transfer of the data. Likewise the buffer is used to hold data arriving from the bus asynchronously from the transfer of that data to the disk media. The buffer may be divided into a number of segments and the number is controlled by a parameter. If there are too many segments, each may be too small to hold the amount of data requested in a single transfer. When the buffer is full, the device must initiate reconnection and begin transfer; if the bus/device is not available at this time, a rotational delay will ensue. In the preferred implementation, this value is set so that any buffer segment is at least as large as the data transfer size, e.g., set to one.
- As a buffer segment begins to fill on a read, the disk attempts to reconnect to the bus to effect a data transfer to the host. The point in time that the disk attempts this reconnection affects the efficiency of bus utilization. The relative speeds of the bus and the disk determine the best point in time during the fill operation to begin data transfer to the host. Likewise during write operations, the buffer will fill as data arrives from the host and, at a certain point in the fill process, the disk should attempt a reconnection to the bus. Accurate speed matching results in fewer disconnect/reselect cycles on the SCSI bus with resulting higher maximum throughput.
- The parameters that control when the reconnection is attempted are called "read buffer full ratio" and "write buffer empty ratio". For video data, the preferred algorithm for calculating these ratios in 256 x (Instantaneous SCSI Data Transfer Rate - Sustainable Disk Data Transfer Rate) / Instantaneous SCSI Data Transfer Rate. Presently preferred values for buffer-full and buffer-empty ratios are approximately 204.
- Some disk drive designs require periodic recalibration of head position with changes in temperature. Some of these disk drive types further allow control over whether thermal compensation is done for all heads in an assembly at the same time, or whether thermal compensation is done one head at a time. If all heads are done at once, delays of hundreds of milliseconds during a read operation for video data may ensue. Longer delays in read times results in the need for larger main memory buffers to smooth data flow and prevent artifacts in the multimedia presentation. The preferred approach is to program the Thermal Compensation Head Control function to allow compensation of one head at a time.
- The saving of error logs and the performance of predictive failure analysis can take several seconds to complete. These delays cannot be tolerated by video server applications without very large main memory buffers to smooth over the delays and prevent artifacts in the multimedia presentation. Limit Idle Time Function parameters can be used to inhibit the saving of error logs and performing idle time functions. The preferred implementation sets a parameter to limit these functions.
- In video applications, there is a need to deliver multiple streams from the same data (e.g., a movie). This requirement translates to a need to read data at a high data rate; that is, the data rate needed for delivering one stream multiplied by the number of streams simultaneously accessing the same data. Conventionally, this problem has generally been solved by maintaining multiple copies of the data, at additional expense. The
media streamer 10 of this invention uses a technique for serving many simultaneous streams from a single copy of the data. The technique takes into account the data rate for an individual stream and the number of streams that may be simultaneously accessing the data. - The above-mentioned data striping involves the concept of a logical file whose data is partitioned to reside in multiple file components, called stripes. Each stripe is allowed to exist on a different disk volume, thereby allowing the logical file to span multiple physical disks. The disks may be either local or remote.
- When the data is written to the logical file, it is separated into logical lengths (i.e. segments) that are placed sequentially into the stripes. As depicted in Fig. 16, a logical file for a video,
video 1, is segmented into M segments or blocks each of a specific size, e.g. 256 KB. The last segment may only be partially filled with data. A segment of data is placed in the first stripe, followed by a next segment that is placed in the second stripe, etc. When a segment has been written to each of the stripes, the next segment is written to the first stripe. Thus, if a file is being striped into N stripes, then stripe 1 will contain segments 1, N+1, 2*N+1, etc., and stripe 2 will contain segments 2, N+2, 2*N+2, etc., and so on. - A similar striping of data is known to be used in data processing RAID arrangements, where the purpose of striping is to assure data integrity in case a disk is lost. Such a RAID storage system dedicates one of N disks to the storage of parity data that is used when data recovery is required. The
disk storage nodes 16 of the media streamer 10 are organized as a RAID-like structure, but parity data is not required (as a copy of the video data is available from a tape store). - Fig. 17 illustrates a first important aspect of this data arrangement, i.e., the separation of each video presentation into data blocks or segments that are spread across the available disk drives to enable each video presentation to be accessed simultaneously from multiple drives without requiring multiple copies. Thus, the concept is one of striping, not for data integrity reasons or performance reasons, per se, but for concurrency or bandwidth reasons. Thus, the
media streamer 10 stripes video presentations by play segments, rather than by byte blocks, etc. - As is shown in Fig. 17, where a
video data file 1 is segmented into M segments and split into four stripes, stripe 1 is a file containing segments 1, 5, 9, etc. of video file 1; stripe 2 is a file containing segments 2, 6, 10, etc. of video file 1; stripe 3 is a file containing segments 3, 7, 11, etc.; and stripe 4 is a file containing segments 4, 8, 12, etc. of video file 1, until all M segments of video file 1 are contained in one of the four stripe files. - Given the described striping strategy, parameters are computed as follows to customize the striping of each individual video.
- First, the segment size is selected so as to obtain a reasonably effective data rate from the disk. However, it cannot be so large as to adversely affect the latency. Further, it should be small enough to buffer/cache in memory. A preferred segment size is 256KB, and is constant for video presentations with data rates in the range of 128 KB/sec. to 512 KB/sec. If the video data rate is higher, then it may be preferable to use a larger segment size. The segment size depends on the basic unit of I/O operation for the range of video presentations stored on the same media. The principle employed is to use a segment size that contains approximately 0.5 to 2 seconds of video data.
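The 0.5-to-2-second principle can be sketched as a simple check of a candidate segment size against a video's data rate (the sizes and rates below are the document's own examples; the function names are illustrative):

```python
def seconds_of_video(segment_bytes, rate_bytes_per_sec):
    """Playing time held by one segment."""
    return segment_bytes / rate_bytes_per_sec

def segment_size_ok(segment_bytes, rate_bytes_per_sec):
    """True if the segment holds roughly 0.5 to 2 seconds of video."""
    return 0.5 <= seconds_of_video(segment_bytes, rate_bytes_per_sec) <= 2.0

SEG = 256 * 1024  # the preferred 256 KB segment
```

A 256 KB segment satisfies the check across the full 128 KB/sec. to 512 KB/sec. range: it holds 2 seconds of video at the low end and 0.5 seconds at the high end.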
- Next, the number of stripes, i.e. the number of disks over which video data is distributed, is determined. This number must be large enough to sustain the total data rate required and is computed individually for each video presentation based on an anticipated usage rate. More specifically, each disk has a logical volume associated with it. Each video presentation is divided into component files, as many components as the number of stripes needed. Each component file is stored on a different logical volume. For example, if video data has to be delivered at 250 KB/sec per stream and 30 simultaneous streams are supported from the same video, started at
say 15 second intervals, a total data rate of at least 7.5 MB/sec is obtained. If a disk drive can support on the average 3 MB/sec., at least 3 stripes are required for the video presentation. - The effective rate at which data can be read from a disk is influenced by the size of the read operation. For example, if data is read from the disk in 4KB blocks (from random positions on the disk), the effective data rate may be 1MB/sec. whereas if the data is read in 256KB blocks the rate may be 3 MB/sec. However, if data is read in very large blocks, the memory required for buffers also increases and the latency, the delay in using the data read, also increases because the operation has to complete before the data can be accessed. Hence there is a trade-off in selecting a size for data transfer. A size is selected based on the characteristics of the devices and the memory configuration. Preferably, the size of the data transfer is the selected segment size. For a given segment size the effective data rate from a device is determined. For example, for some disk drives, a 256KB segment size provides a good balance for the effective use of the disk drives (effective data rate of 3 MB/sec.) and buffer size (256 KB).
- If striping is not used, the maximum number of streams that can be supported is limited by the effective data rate of the disk, e.g. if the effective data rate is 3MB/s and a stream data rate is 200KB/s, then no more than 15 streams can be supplied from the disk. If, for instance, 60 streams of the same video are needed then the data has to be duplicated on 4 disks. However, if striping is used in accordance with this invention, 4 disks of 1/4 the capacity can be used. Fifteen streams can be simultaneously played from each of the 4 stripes for a total of 60 simultaneous streams from a single copy of the video data. The start times of the streams are skewed to ensure that the requests for the 60 streams are evenly spaced among the stripes. Note also that if the streams are started close to each other, the need for I/O can be reduced by using video data that is cached.
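The skewing of start times can be sketched as follows: because consecutive segments live on consecutive stripes, streams started one segment-time apart read from different stripes at any given instant, so the 60 streams spread evenly over the 4 stripes (the times and indices below are illustrative):

```python
from collections import Counter

N_STRIPES = 4

def stripe_being_read(t, start_time):
    """Stripe holding the segment a stream reads at segment-time t,
    given the segment-time at which that stream started."""
    return (t - start_time) % N_STRIPES

# 60 streams started at consecutive segment-times 0..59: at any
# instant, each of the 4 stripes is serving exactly 15 streams.
loads = Counter(stripe_being_read(100, i) for i in range(60))
```

This even spacing is what lets 4 disks of 1/4 capacity replace 4 full copies of the video.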
- The number of stripes for a given video is influenced by two factors: the first is the maximum number of streams to be supplied at any time from the video, and the second is the total number of streams that must be supplied at any time from all the videos stored on the same disks as the video.
- r = nominal data rate at which the stream is to be played;
- n = maximum number of simultaneous streams from this video presentation at the nominal rate;
- d = effective data rate from a disk;
- m = maximum number of simultaneous streams at nominal rate from all disks that contain any part of this video presentation; and
- s = number of stripes for a video presentation.
- The disks over which data for a video presentation is striped are managed as a set, and can be thought of as a very large physical disk. Striping allows a video file to exceed the size limit of the largest file that a system's physical file system will allow. The video data, in general, will not always require the same amount of storage on all the disks in the set. To balance the usage of the disks, when a video is striped, the striping is begun from the disk that has the most free space.
- As an example, consider the case of a video presentation that needs to be played at 2 Mbits/sec. (250,000 bytes/sec.), i.e., r is equal to 250,000 bytes/sec., and assume that it is necessary to deliver up to 30 simultaneous streams from this video, i.e., n is 30. Assume in this example, that m is also 30, i.e., the total number of streams to be delivered from all disks is also 30. Further, assume that the data is striped in segments of 250,000 bytes and that the effective data rate from a disk for the given segment size (250,000 bytes) is 3,000,000 bytes/sec. Then s, the number of stripes needed, is (250,000 * 30 / 3,000,000) 2.5 which is rounded up to 3 (s = ceiling(r*n/d)).
- If the maximum number of streams from all disks that contain this data is, for
instance 45, then 250,000 * 45 / 3,000,000 or 3.75 stripes are needed, which is rounded up to 4 stripes. - Even though striping the video into 3 stripes is sufficient to meet the requirement for delivering the 30 streams from the single copy of the video, if disks containing the video also contain other content, and the total number of streams from that video to be supported is 45, then four disk drives are needed (striping level of 4).
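The worked examples above follow directly from the formula s = ceiling(r*n/d), which can be sketched as:

```python
import math

def stripes_needed(r, n, d):
    """s = ceiling(r * n / d): stripes needed to deliver n simultaneous
    streams of rate r bytes/sec from disks of effective rate d."""
    return math.ceil(r * n / d)

stripes_needed(250_000, 30, 3_000_000)  # -> 3 (the 30-stream example)
stripes_needed(250_000, 45, 3_000_000)  # -> 4 (the 45-stream example)
```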
- The manner in which the algorithm is used in the
media streamer 10 is as follows. The storage (number of disk drives) is divided into groups of disks. Each group has a certain capacity and capability to deliver a given number of simultaneous streams (at an effective data rate per disk based on a predetermined segment size). The segment size for each group is constant. Different groups may choose different segment sizes (and hence have different effective data rates). When a video presentation is to be striped, a group is first chosen by the following criteria. - The segment size must be consistent with the data rate of the video, i.e., if the stream data rate is 250,000 bytes/sec., the segment size is in the range of 125 KB to 500 KB. The next criterion is to ensure that the number of disks in the group is sufficient to support the maximum number of simultaneous streams, i.e., at least r*n/d disks, where "r" is the stream data rate, "n" the maximum number of simultaneous streams, and "d" the effective data rate of a disk in the group. Finally, it should be ensured that the sum total of simultaneous streams that need to be supported from all of the videos in the disk group does not exceed its capacity. That is, if "m" is the capacity of the group, then "m - n" should be greater than or equal to the sum of all the streams that can be played simultaneously from the videos already stored in the group.
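The three selection criteria can be sketched as a single admission check. The group field names below are hypothetical labels for the quantities the text describes (segment size, disk count, per-disk effective rate, group capacity, and streams already committed):

```python
import math

def group_accepts(group, r, n):
    """Can this disk group store a new video of stream rate r
    (bytes/sec) needing up to n simultaneous streams?"""
    # 1. Segment size holds roughly 0.5 to 2 seconds of video.
    if not 0.5 <= group["segment_size"] / r <= 2.0:
        return False
    # 2. Enough disks to sustain the video's own streams: ceil(r*n/d).
    if group["num_disks"] < math.ceil(r * n / group["disk_rate"]):
        return False
    # 3. Remaining capacity m - n covers streams already committed.
    return group["capacity"] - n >= group["committed_streams"]

group = {"segment_size": 256 * 1024, "num_disks": 8,
         "disk_rate": 3_000_000, "capacity": 90, "committed_streams": 40}
```

With these illustrative numbers, the group accepts a 250,000 bytes/sec. video needing 30 streams, but rejects one needing 60, since 90 - 60 falls below the 40 streams already committed.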
- The calculation is done in
control node 18 at the time the video data is loaded into themedia streamer 10. In the simplest case all disks will be in a single pool which defines the total capacity of themedia streamer 10, both for storage and the number of supportable streams. In this case the number of disks (or stripes) necessary to support a given number of simultaneous streams is calculated from the formula m*r/d, where m is the number of streams, r is the data rate for a stream, and d is the effective data rate for a disk. Note that if the streams can be of different rates, then m*r, in the above formula, should be replaced by: Max (sum of the data rates of all simultaneous streams). - The result of using this technique for writing the data is that the data can be read for delivering many streams at a specified rate without the need for multiple copies of the digital representation of the video presentation. By striping the data across multiple disk volumes the reading of one part of the file for delivering one stream does not interfere with the reading of another part of the file for delivering another stream.
- Conventionally, video servers fit one of two profiles. Either they use PC technology to build a low cost (but also low bandwidth) video server, or they use super-computing technology to build a high bandwidth (also expensive) video server. An object of this invention, then, is to deliver high bandwidth video, but without the high cost of super-computer technology.
- A preferred approach to achieving high bandwidth at low cost is to use the low latency switch (crossbar circuit switch matrix) 18 to interconnect low cost PC based "nodes" into a video server (as shown in Fig. 1). An important aspect of the media streamer architecture is efficient use of the video stream bandwidth that is available in each of the
storage nodes 16 and communication nodes 14. The bandwidth is maximized by exploiting the special time-bandwidth allocation capability of a low-cost switch technology.
- Fig. 19 shows a switch interface in accordance with this invention. This interface dynamically allocates its total bandwidth in real time either into or out of the
switch 18 to meet the current demands of the node. (The storage node 16 is used as an example.) The communication nodes 14 have similar requirements, but most of their bandwidth is in the direction from the switch 18.
switch 12, into one logical switch interface 18a. The video data (on a read, for example) is then split between the two physical interfaces. This is facilitated by striping the data across multiple storage units, as described previously. The receiving node combines the video data back into a single logical stream. - As an example, in Fig. 18 the switch interface is rated at 2X MB/sec. full duplex, i.e., X MB/sec. in each direction. But video data is usually sent only in one direction (from the storage node into the switch). Therefore only X MB/sec. of video bandwidth is delivered from the storage node, even though the node has twice that capability (2X). The storage node is under-utilized. The switch interface of Fig. 19 dynamically allocates the entire 2X MB/sec. bandwidth to transmitting video from the storage node into the switch. The result is increased bandwidth from the node, higher bandwidth from the video server, and a lower cost per video stream.
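The split-and-recombine behavior of the grouped interfaces can be sketched as a round-robin distribution of segments over the physical links, with the receiving node interleaving them back into one stream. This is a simplification (the real switch uses routing headers, and the split need not be strictly round-robin):

```python
from itertools import chain, zip_longest

def split_segments(segments, n_links=2):
    """Sender: distribute consecutive segments across physical links."""
    return [segments[i::n_links] for i in range(n_links)]

def merge_segments(links):
    """Receiver: interleave the per-link lists back into one stream."""
    _GAP = object()  # filler for links that carried one fewer segment
    return [s for s in chain.from_iterable(zip_longest(*links, fillvalue=_GAP))
            if s is not _GAP]
```

Because the stream is already striped into segments, splitting it over two links doubles the usable outbound bandwidth without reordering logic beyond this interleave.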
- Digital video data is sequential, continuous, large, and time critical, rather than content critical. Streams of video data must be delivered isochronously at high bit rates, requiring all nonessential overhead to be minimized in the data path. Typically, the receiving hardware is a video set top box or some other suitable video data receiver. Standard serial communication protocols insert additional bits and bytes of data into the stream for synchronization and data verification, often at the hardware level. This corrupts the video data stream if the receiver is not able to transparently remove the additional data. The additional overhead introduced by these bits and bytes also decreases the effective data rate, which creates video decompression and conversion errors.
- It has been determined that the transmission of video data over standard communications adapters, to ensure isochronous delivery to a user, requires disabling most of the standard serial communications protocol attributes. The methods for achieving this vary depending on the communications adapters used, but the following describes the underlying concepts. In Fig. 20, a
serial communications chip 200 in a communications node 14 disables data formatting and integrity information, such as the parity, start and stop bits, cyclic redundancy check codes and sync bytes, and prevents idle characters from being generated. Input FIFO buffers 202, 204, 206, etc. are employed to ensure a constant (isochronous) output video data stream while allowing bus cycles for loading of the data blocks. A 1000-byte FIFO buffer 208 simplifies the CPU and bus loading logic. - If
communications output chip 200 does not allow the disabling of initial synchronization (sync) byte generation, then the value of the sync byte is programmed to the value of the first byte of each data block (and the data block pointer is incremented to the second byte). Byte alignment must also be managed with real data, since any padding bytes will corrupt the data stream if they are not part of the actual compressed video data. - To achieve the constant, high speed serial data outputs required for the high quality levels of compressed video data, either a circular buffer or a plurality of large buffers (e.g. 202, 204, 206) must be used. This is necessary to allow sufficient time to fill an input buffer while outputting data from a previously filled buffer. Unless buffer packing is done earlier in the video data stream path, the end-of-video condition can result in a very small buffer that will be output before the next buffer transfer can complete, resulting in a data underrun. This necessitates a minimum of three large, independent buffers. A circular buffer in dual mode memory (writable while reading) is also a suitable embodiment.
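A minimal sketch of the three-buffer scheme: one buffer drains to the serial output while another fills from the bus, with the third in reserve to cover small end-of-video remainders. The class and method names are illustrative, not from the document:

```python
class TripleBuffer:
    """Ring of three independent buffers: fill one while another drains."""
    def __init__(self):
        self.buffers = [bytearray(), bytearray(), bytearray()]
        self.fill_idx = 0   # next buffer to fill
        self.play_idx = 0   # oldest filled buffer, next to play out

    def load(self, block):
        """Fill the next buffer; False means all buffers hold unplayed data."""
        if self.buffers[self.fill_idx]:
            return False                      # would overwrite unplayed data
        self.buffers[self.fill_idx].extend(block)
        self.fill_idx = (self.fill_idx + 1) % 3
        return True

    def play(self):
        """Drain the oldest filled buffer; None signals a data underrun."""
        buf = self.buffers[self.play_idx]
        if not buf:
            return None                       # underrun: nothing to output
        out = bytes(buf)
        buf.clear()
        self.play_idx = (self.play_idx + 1) % 3
        return out
```

As long as loading stays at least one buffer ahead of playout, the output never sees an empty buffer, which is the isochrony guarantee the text requires.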
- As described above, digital video data is moved from disk to buffer memory. Once enough data is in buffer memory, it is moved from memory to an interface adapter in a
communications node 14. The interfaces used are the SCSI 20 MB/sec. fast/wide interface or the SSA serial SCSI interface. The SCSI interface is expanded to handle 15 addresses, and the SSA architecture supports up to 256. Other suitable interfaces include, but are not limited to, RS422, V.35, V.36, etc.
communication node 14 across a communications bus 210 to NTSC adapter 212 (see also Fig. 20), where the data is buffered. Adapter 212 pulls the data from a local buffer 214, where multiple blocks of data are stored to maximize the performance of the bus. The key goal of adapter 212 is to maintain an isochronous flow of data from the memory 214 to MPEG chips 216, 218, NTSC chip 220 and D/A converter 222, to ensure that there are no interruptions in the delivery of video and/or audio.
MPEG logic modules 216, 218 decompress the digital audio and video data, and NTSC encoder 220 converts the decoded video signal into NTSC baseband analog signals. MPEG audio decoder 216 converts the digital audio into parallel digital data, which is then passed through a Digital to Analog converter 222 and filtered to generate audio Left and Right outputs. - The goal in creating a solution to the speed matching and isochronous delivery problem is an approach that not only maximizes the bandwidth delivery of the system but also imposes the fewest performance constraints.
- Typically, application developers have used a bus structure, such as SSA and SCSI, for control and delivery of data between processors and mechanical storage devices such as disk files, tape files, optical storage units, etc. Both of these buses contain attributes that make them suitable for high bandwidth delivery of video data, provided that means are taken to control the speed and isochronous delivery of video data.
- The SCSI bus allows for the bursting of data at 20 Mbytes/sec. which minimizes the amount of time that any one video signal is being moved from buffer memory to a specific NTSC adapter. The
adapter card 212 contains a large buffer 214 with a performance capability to burst data into memory from bus 210 at high peak rates and to remove data from buffer 214 at much lower rates for delivery to the NTSC decoder chips. Buffer 214 is further segmented into smaller buffers and connected via software controls to act as multiple buffers connected in a circular manner. - This allows the system to deliver varying block sizes of data to separate buffers and controls the sequence of playout. An advantage of this approach is that it frees the system software to deliver blocks of video data well in advance of any requirement for the video data, and at very high delivery rates. This provides the
media streamer 10 with the ability to manage many video streams with dynamically varying throughput requirements. When a processor in a communications node has time, it can cause delivery of several large blocks of data that will be played in sequence. Once this is done, the processor is free to control other streams without an immediate need to deliver slow continuous isochronous data to each port.
small FIFO memory 224 is inserted between the larger decoder buffer 214 and MPEG decoders 216, 218. FIFO memory 224 allows controller 226 to move smaller blocks, typically 512 bytes of data, from buffer 214 to FIFO 224 which, in turn, converts the data into serial bit streams for delivery to MPEG decoders 216, 218. The delivery of video data to the video decoder chips 216, 218 through FIFO memory 224 occurs in an isochronous manner, or substantially isochronous manner, to ensure the delivery of an uninterrupted video presentation to a user or consumer of the video presentation. - As shown in Fig. 22, compressed digital video data and command streams from buffer memory are converted by device level software into SCSI commands and data streams, and are transmitted over
SCSI bus 210 to a target adapter 212 at SCSI II fast data rates. The data is then buffered and fed at the required content output rate to MPEG logic for decompression and conversion to analog video and audio data. Feedback is provided across SCSI bus 210 to pace the data flow and ensure proper buffer management. - The SCSI NTSC/
PAL adapter 212 provides a high level interface to SCSI bus 210, supporting a subset of the standard SCSI protocol. The normal mode of operation is to open the adapter 212, write data (video and audio) streams to it, and close the adapter 212 only when completed. Adapter 212 pulls data as fast as necessary to keep its buffers full, with the communication nodes 14 and storage nodes 16 providing blocks of data that are sized to optimize the bus data transfer and minimize bus overhead.
adapter 212 and no external controls are required. Errors are minimized, with automatic resynchronization and continued audio/video output. - A mix of direct access device and sequential device commands are used as well as standard common commands to fit the functionality of the SCSI video output adapter. As with all SCSI commands, a valid status byte is returned after every command, and the sense data area is loaded with the error conditions if a check condition is returned. The standard SCSI commands used include RESET, INQUIRY, REQUEST SENSE, MODE SELECT, MODE SENSE, READ, WRITE, RESERVE, RELEASE, TEST UNIT READY.
- The video control commands are user-level video output control commands, and are extensions to the standard commands listed above. They provide a simplified user level front end to the low level operating system or SCSI commands that directly interface to the SCSI
video output adapter 212. The implementation of each command employs microcode to emulate the necessary video device function and avoid video and audio anomalies caused by invalid control states. A single SCSI command, the SCSI START/STOP UNIT command, is used to translate video control commands to the target SCSI video output adapter 212, with any necessary parameters moved along with the command. This simplifies both the user application interface and the adapter card 212 microcode. The following commands are employed. - STOP: The data input into the MPEG chip set (216, 218) is halted, the audio is muted, and the video is blanked. The parameter field selects the stop mode. The normal mode is for the buffer and position pointer to remain current, so that PLAY continues at the same location in the video stream. A second (end of movie or abort) mode is to set the buffer pointers to the start of the next buffer and release the current buffer. A third mode is also for end of movie conditions, but the stop (mute and blank) is delayed until the data buffer runs empty. A fourth mode may be employed with certain MPEG decoder implementations to provide for a delayed stop with audio, but freeze frame for the last valid frame when the data runs out. In each of these cases, the
video adapter 212 microcode determines the stopping point so that the video and audio output is halted on the proper boundary to allow a clean restart. - PAUSE: The data input into the MPEG chip set (216, 218) is halted and the audio is muted, but the video is not blanked. This causes the MPEG video chip set (216, 218) to hold a freeze frame of the last good frame. This is limited to avoid burn-in of the video tube. A Stop command is preferably issued by the
control node 18, but the video output will automatically go to blank if no commands are received within 5 minutes. The adapter 212 microcode maintains the buffer positions and decoder states to allow for a smooth transition back to play. - MUTE/BLANK: This command blanks the video output without impacting the audio output, mutes the audio output without impacting the video, or both. Both muting and blanking can be turned off with a single command using a Mode parameter, which allows a smoother transition and reduced command overhead. These are implemented on the
video adapter 212 after decompression and conversion to analog, with hardware controls to ensure a positive, smooth transition. - SLOW PLAY: This command slows the data input rate into the MPEG chip set (216, 218), causing it to intermittently freeze frame, simulating a slow play function on a VCR. The audio is muted to avoid digital error noise. The parameter field specifies a relative speed from 0 to 100. An alternative implementation disables the decoder chip set (216, 218) error handling, and then modifies the data clocking speed into the decoder chip set to the desired playing speed. This is dependent on the flexibility of the video adapter's clock architecture.
- PLAY: This command starts the data feed process into the MPEG chip set (216, 218), enabling the audio and video outputs. A buffer selection number is passed to determine which buffer to begin the playing sequence from, and a zero value indicates that the current play buffer should be used (typical operation). A non-zero value is only accepted if the
adapter 212 is in STOPPED mode; if in PAUSED mode, the buffer selection parameter is ignored and playing is resumed using the current buffer selection and position. - When 'PLAYING', the
controller 226 rotates through the buffers sequentially, maintaining a steady stream of data into the MPEG chip set (216, 218). Data is read from the buffer at the appropriate rate into the MPEG bus starting at address zero until N bytes are read, then the controller 226 switches to the next buffer and continues reading data. The adapter bus and microcode provide sufficient bandwidth for both the SCSI Fast data transfer into the adapter buffers 214, and the steady loading of the data onto the output FIFO 224 that feeds the MPEG decompression chips (216, 218). - FAST FORWARD: This command is used to scan through data in a manner that emulates fast forward on a VCR. There are two modes of operation that are determined by the rate parameter. A rate of 0 means that it is a rapid fast forward where the video and audio should be blanked and muted, the buffers flushed, and an implicit play is executed when data is received from a new position forward in the video stream. An integer value between 1 and 10 indicates the rate at which the input stream is being forwarded. The video is 'sampled' by skipping over blocks of data to achieve the specified average data rate. The
adapter 212 plays a portion of data at nearly the normal rate, jumps ahead, then plays the next portion to emulate the fast forward action. - REWIND: This command is used to scan backwards through data in a manner that emulates rewind on a VCR. There are two modes of operation that are determined by the rate parameter. A rate of 0 means that it is a rapid rewind where the video and audio should be blanked and muted, the buffers flushed, and an implicit play executed when data is received from a new position in the video stream. An integer value between 1 and 10 indicates the rate at which the input stream is being rewound. The video is 'sampled' by skipping over blocks of data to achieve the specified average data rate. The rewind data stream is built by assembling small blocks of data that are 'sampled' from progressively earlier positions in the video stream. The
adapter card 212 smoothly handles the transitions and synchronization to play at the normal rate, skipping back to the next sampled portion to emulate rewind scanning. - Digital video servers provide data to many concurrent output devices, but digital video data decompression and conversion require a constant data stream. Data buffering techniques are used to take advantage of the SCSI data burst mode transmission, while still avoiding data underrun or buffer overrun, allowing
media streamer 10 to transmit data to many streams with minimal intervention. SCSI video adapter card 212 (Figs. 21, 22) includes a large buffer 214 for video data to allow full utilization of the SCSI burst mode data transfer process. An exemplary configuration would be one buffer 214 of 768K, handled by local logic as a wrap-around circular buffer. Circular buffers are preferred to dynamically handle varying data block sizes, rather than fixed length buffers that are inefficient in terms of both storage and management overhead when transferring digital video data.
video adapter card 212 microcode supports several buffer pointers, keeping the last top of data as well as the current length and top of data. This allows a retry to overwrite a failed transmission, or a pointer to be positioned to a byte position within the current buffer if necessary. The data block length is maintained exactly as transmitted (e.g., byte or word specific even if long word alignment is used by the intermediate logic) to ensure valid data delivery to the decode chip set (216, 218). This approach minimizes the steady state operation overhead, while still allowing flexible control of the data buffers.
- The buffer operation is managed by the video adapter's
controller 226, placing the N bytes of data in the next available buffer space starting at address zero of that buffer. Controller 226 keeps track of the length of data in each buffer and whether that data has been "played" or not. Whenever sufficient buffer space is free, the card accepts the next WRITE command and DMAs the data into that buffer. If not enough buffer space is free to accept the full data block (typically a Slow Play or Pause condition), the WRITE is not accepted and a buffer full return code is returned. - A LOCATE command is used to select a 'current' write buffer and position within that buffer (typically zero) for each buffer access command (Write, Erase, etc.). The buffer position is relative to the start of data for the last block of data that was successfully transmitted. This is done preferably for video stream transition management, with the automatic mode reactivated as soon as possible to minimize command overhead in the system.
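The WRITE acceptance logic can be sketched with a wrap-around buffer that tracks exact block lengths. The 768K size is the exemplary configuration given above; the return-code strings and class name are illustrative, not the adapter's actual codes:

```python
GOOD, BUFFER_FULL = "GOOD", "BUFFER FULL"   # illustrative return codes

class CircularVideoBuffer:
    """Wrap-around video buffer: a WRITE is accepted only if the whole
    data block fits in the space not occupied by unplayed data."""
    def __init__(self, size=768 * 1024):    # 768K exemplary configuration
        self.size = size
        self.used = 0
        self.block_lengths = []             # exact lengths, oldest first

    def write(self, block):
        if self.used + len(block) > self.size:
            return BUFFER_FULL              # not accepted; caller retries
        self.block_lengths.append(len(block))
        self.used += len(block)
        return GOOD

    def play_next(self):
        """Play out (free) the oldest block; None signals an underrun."""
        if not self.block_lengths:
            return None
        freed = self.block_lengths.pop(0)
        self.used -= freed
        return freed
```

Keeping per-block lengths exact, rather than rounding up to a fixed buffer size, is what lets one circular buffer absorb variable-length video blocks without wasted space.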
- Digital video data transmission has different error management requirements than the random data access for which SCSI is normally used in data processing applications. Minor data loss is less critical than transmission interruption, so the conventional retry and data validation schemes are modified or disabled. The normal SCSI error handling procedures are followed, with the status byte being returned during the status phase at the completion of each command. The status byte indicates either a GOOD (00h) condition, a BUSY (08h) if the
target SCSI chip 227 is unable to accept a command, or a CHECK CONDITION (02h) if an error has occurred. - The
controller 226 of the SCSI video adapter 212 automatically generates a Request Sense command on a Check Condition response to load the error and status information, and determines whether a recovery procedure is possible. The normal recovery procedure is to clear the error state, discard any corrupted data, and resume normal play as quickly as possible. In the worst case, the adapter 212 may have to be reset and the data reloaded before play can resume. Error conditions are logged and reported back to the host system with the next INQUIRY or REQUEST SENSE SCSI operation.
- For buffer full or device busy conditions, retries are automated up to X retries, where X depends on the stream data rate. Retries are allowed only up to the point in time that the next data buffer arrives. At that point, an error is logged if the condition is unexpected (i.e., buffer full but not in PAUSE or SLOW PLAY mode), and a device reset or clear may be necessary to recover and continue video play.
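The rate-dependent retry budget can be sketched as follows. The specific functions and the assumption that X equals one block's playout time divided by the retry interval are illustrative; the patent only states that X depends on the stream data rate and that retries stop when the next buffer arrives.

```python
def max_retries(stream_rate_bps, block_bytes, retry_interval_s):
    """Retries are only useful until the next data buffer is due, i.e.
    within roughly one block's playout time at the stream data rate."""
    block_playout_s = (block_bytes * 8) / stream_rate_bps
    return int(block_playout_s // retry_interval_s)

def send_with_retries(send_fn, retries):
    """Attempt a transfer, retrying a busy/full response up to `retries`
    times; False means the condition was unexpected and should be logged
    (and may require a device reset or clear to resume play)."""
    for _ in range(retries + 1):
        if send_fn():
            return True
    return False
```

A faster stream shortens the block playout time and therefore shrinks the retry budget, which matches the intent that recovery never delays the next scheduled buffer.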
- Although described primarily in the context of delivering a video presentation to a user, it should be realized that bidirectional video adapters can be employed to receive a video presentation, to digitize the video presentation as a data representation thereof, and to transmit the data representation over the
bus 210 to a communication node 14 for storage, via low latency switch 18, within a storage node or nodes, under direction of the control node.
Note that the effective data rate from disk is influenced by the segment size.
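Why segment size matters can be shown with a simple model; the function and its parameters are assumptions for the sketch, not figures from the patent.

```python
def effective_disk_rate(segment_bytes, seek_s, transfer_bytes_per_s):
    """Each segment read pays a fixed seek/rotational cost before data
    flows, so larger segments amortize that cost and push the effective
    rate toward the raw transfer rate."""
    transfer_s = segment_bytes / transfer_bytes_per_s
    return segment_bytes / (seek_s + transfer_s)
```

With a 15 ms seek and a 5 MB/s raw rate, a 256 KB segment yields well under the raw rate, while a 2 MB segment recovers most of it.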
Claims (10)
- A media streamer, comprising:
at least one storage node for storing a digital representation of at least one video presentation, said at least one video presentation requiring a time T to present in its entirety, and stored as a plurality of N data blocks, each data block comprising a T/N portion of said at least one video presentation, said at least one storage node comprising a first data buffer for buffering at least one of said N data blocks;
a plurality of communication nodes each having an input port that is coupled via a circuit switch to an output of said first data buffer for sequentially receiving a plurality of said N data blocks therefrom, said sequentially received N data blocks being associated with a same video presentation or with different video presentations, each of said plurality of communication nodes further having a plurality of output ports, individual ones of said plurality of output ports outputting a digital representation of one video presentation, individual ones of said plurality of communication nodes further comprising a second data buffer for buffering at least one of said N data blocks prior to outputting said at least one of said N data blocks; and
at least one control node responsive to a first operating condition for causing transfer of one of said N data blocks from said first data buffer to an output port of a first communication node and also to an output port of a second communication node, said at least one control node being further responsive to a second operating condition for causing transfer of one of said N data blocks from said first data buffer to said second data buffer of one of said communication nodes, and for causing transfer of said one of said N data blocks from said second data buffer to a plurality of said output ports of said one of said communication nodes.
- A media streamer as set forth in claim 1 and further including means for selectively retaining one of said N data blocks within said first data buffer if it is predicted that said one of said N data blocks will be output from at least one of said communications nodes within a predetermined period of time.
- A media streamer as set forth in claim 1 and further including means for selectively retaining one of said N data blocks within said second data buffer if it is predicted that said one of said N data blocks will be output from at least one of said output ports of a communications node within a predetermined period of time.
- A media streamer as set forth in claim 2 or claim 3, wherein, for one of said N data blocks that is not to be retained, said media streamer includes means for replacing said one of said N data blocks within said data buffer, said replacing means being responsive to a predicted demand for the associated video presentation and also to a location, within a corresponding data representation of said one of said N data blocks, for determining a priority of retaining said one of said N data blocks with respect to others of said N data blocks stored within said data buffer.
- A media streamer as set forth in claim 4 wherein a higher priority is assigned to a data block that is located at or near a beginning of a data representation than is assigned to a data block that is located at or near an end of said data representation.
- A media streamer as set forth in claim 2 or claim 3 wherein, for one of said N data blocks that is to be retained, said media streamer includes means for replacing said one of said N data blocks within said data buffer, said replacing means being responsive to a next predicted time that the said one of said N data blocks is required to be output from at least one of said communication nodes, and also to a number of output ports that are outputting a digital representation with which the said one of said N data blocks is associated.
- A media streamer as set forth in claim 1 wherein said at least one control node further includes means for synchronizing a first outputted data representation to a second outputted data representation such that said first data representation and said second data representation simultaneously output data from a same one of said N data blocks.
- A media streamer as set forth in claim 1 wherein said first data buffer and said second data buffer are of approximately equal size.
- A media streamer as set forth in claim 1 wherein said first data buffer and said second data buffer are components of a single data buffer that is distributed between said at least one storage node and said plurality of communication nodes.
- A data storage system comprising:
a mass storage unit storing a data entity that is partitioned into a plurality N of temporally-ordered segments;
a data buffer that is bidirectionally coupled to said mass storage unit for storing up to M of said temporally-ordered segments, wherein M is less than N, said data buffer having an output for outputting stored ones of said temporally-ordered segments; and
a data buffer manager for scheduling transfers of individual ones of said temporally-ordered segments between said mass storage unit and said data buffer, said data buffer manager scheduling said transfers in accordance with at least a predicted time that an individual one of said temporally-ordered segments will be required to be output from said data buffer.
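The data buffer manager of the last claim can be sketched as below: up to M of the N temporally-ordered segments are staged into the buffer in order of the predicted time each will be required at the output. The function name and the heap-based policy are illustrative assumptions; the claim only requires scheduling by predicted output time.

```python
import heapq

def schedule_transfers(predicted_need_times, m_slots):
    """Stage up to M segments from mass storage into the data buffer,
    earliest predicted need first (predicted_need_times[i] is the
    predicted output time of temporally-ordered segment i)."""
    heap = [(t, seg) for seg, t in enumerate(predicted_need_times)]
    heapq.heapify(heap)
    order = []
    while heap and len(order) < m_slots:
        _, seg = heapq.heappop(heap)
        order.append(seg)
    return order
```

Because M is less than N, prioritizing by predicted need time keeps the buffer holding exactly the segments closest to playout, which is the cache management behavior the claims describe.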
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US302619 | 1994-09-08 | ||
US08/302,619 US5586264A (en) | 1994-09-08 | 1994-09-08 | Video optimized media streamer with cache management |
Publications (1)
Publication Number | Publication Date |
---|---|
EP0702491A1 true EP0702491A1 (en) | 1996-03-20 |
Family
ID=23168532
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP95305966A Withdrawn EP0702491A1 (en) | 1994-09-08 | 1995-08-25 | Video optimized media streamer with cache management |
Country Status (4)
Country | Link |
---|---|
US (1) | US5586264A (en) |
EP (1) | EP0702491A1 (en) |
JP (1) | JP3234752B2 (en) |
CA (1) | CA2153444A1 (en) |
Cited By (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO1999001808A2 (en) * | 1997-07-02 | 1999-01-14 | Koninklijke Philips Electronics N.V. | System for supplying data streams |
WO1999014954A1 (en) * | 1997-09-18 | 1999-03-25 | Microsoft Corporation | Continuous media file server system and method for scheduling disk reads while playing multiple files having different transmission rates |
EP0915622A2 (en) * | 1997-11-04 | 1999-05-12 | Matsushita Electric Industrial Co., Ltd. | System for coding and displaying a plurality of pictures |
EP0929044A2 (en) * | 1997-12-10 | 1999-07-14 | Matsushita Electric Industrial Co., Ltd. | Rich text medium displaying method and picture information providing system |
EP0974909A2 (en) * | 1998-07-20 | 2000-01-26 | Hewlett-Packard Company | Data transfer management system for managing burst data transfer |
EP1025696A1 (en) * | 1997-09-04 | 2000-08-09 | Sedna Patent Services, LLC | Apparatus for video access and control over computer network, including image correction |
EP1065584A1 (en) * | 1999-06-29 | 2001-01-03 | Telefonaktiebolaget Lm Ericsson | Command handling in a data processing system |
EP1160672A2 (en) * | 2000-04-04 | 2001-12-05 | International Business Machines Corporation | System and method for caching sets of objects |
US6675386B1 (en) | 1996-09-04 | 2004-01-06 | Discovery Communications, Inc. | Apparatus for video access and control over computer network, including image correction |
US6947947B2 (en) * | 2001-08-17 | 2005-09-20 | Universal Business Matrix Llc | Method for adding metadata to data |
US7716349B1 (en) | 1992-12-09 | 2010-05-11 | Discovery Communications, Inc. | Electronic book library/bookstore system |
CN101854309A (en) * | 2010-06-18 | 2010-10-06 | 中兴通讯股份有限公司 | Method and apparatus for managing message output |
US7835989B1 (en) | 1992-12-09 | 2010-11-16 | Discovery Communications, Inc. | Electronic book alternative delivery systems |
US7849393B1 (en) | 1992-12-09 | 2010-12-07 | Discovery Communications, Inc. | Electronic book connection to world watch live |
US7861166B1 (en) | 1993-12-02 | 2010-12-28 | Discovery Patent Holding, Llc | Resizing document pages to fit available hardware screens |
US7865405B2 (en) | 1992-12-09 | 2011-01-04 | Discovery Patent Holdings, Llc | Electronic book having electronic commerce features |
US7865567B1 (en) | 1993-12-02 | 2011-01-04 | Discovery Patent Holdings, Llc | Virtual on-demand electronic book |
US8073695B1 (en) | 1992-12-09 | 2011-12-06 | Adrea, LLC | Electronic book with voice emulation features |
US8095949B1 (en) | 1993-12-02 | 2012-01-10 | Adrea, LLC | Electronic book with restricted access features |
EP2487609A1 (en) | 2011-02-07 | 2012-08-15 | Alcatel Lucent | A cache manager for segmented multimedia and corresponding method for cache management |
US8578410B2 (en) | 2001-08-03 | 2013-11-05 | Comcast Ip Holdings, I, Llc | Video and digital multimedia aggregator content coding and formatting |
US8621521B2 (en) | 2001-08-03 | 2013-12-31 | Comcast Ip Holdings I, Llc | Video and digital multimedia aggregator |
US9053640B1 (en) | 1993-12-02 | 2015-06-09 | Adrea, LLC | Interactive electronic book |
US9078014B2 (en) | 2000-06-19 | 2015-07-07 | Comcast Ip Holdings I, Llc | Method and apparatus for targeting of interactive virtual objects |
US9286294B2 (en) | 1992-12-09 | 2016-03-15 | Comcast Ip Holdings I, Llc | Video and digital multimedia aggregator content suggestion engine |
Families Citing this family (353)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7379900B1 (en) * | 1992-03-20 | 2008-05-27 | Variant Holdings Llc | System for marketing goods and services utilizing computerized central and remote facilities |
JP3456018B2 (en) * | 1993-07-26 | 2003-10-14 | ソニー株式会社 | Information transmission system |
JP3104953B2 (en) * | 1993-12-17 | 2000-10-30 | 日本電信電話株式会社 | Multiple read special playback method |
US5712976A (en) * | 1994-09-08 | 1998-01-27 | International Business Machines Corporation | Video data streamer for simultaneously conveying same one or different ones of data blocks stored in storage node to each of plurality of communication nodes |
CA2159416A1 (en) * | 1994-09-30 | 1996-03-31 | Masayoshi Hirashima | Signal distribution apparatus |
US5758151A (en) * | 1994-12-09 | 1998-05-26 | Storage Technology Corporation | Serial data storage for multiple access demand |
JP2833507B2 (en) * | 1995-01-31 | 1998-12-09 | 日本電気株式会社 | Server device data access control method |
JP3184763B2 (en) * | 1995-06-07 | 2001-07-09 | インターナショナル・ビジネス・マシーンズ・コーポレ−ション | Multimedia direct access storage device and format method |
JP3154921B2 (en) * | 1995-06-09 | 2001-04-09 | 富士通株式会社 | Video playback position identification method for video-on-demand system |
US5724646A (en) * | 1995-06-15 | 1998-03-03 | International Business Machines Corporation | Fixed video-on-demand |
US5724543A (en) * | 1995-06-19 | 1998-03-03 | Lucent Technologies Inc. | Video data retrieval method for use in video server environments that use striped disks |
US5758076A (en) * | 1995-07-19 | 1998-05-26 | International Business Machines Corporation | Multimedia server system having rate adjustable data retrieval based on buffer capacity |
US5787472A (en) * | 1995-07-31 | 1998-07-28 | Ibm Corporation | Disk caching system for selectively providing interval caching or segment caching of video data |
US5815662A (en) * | 1995-08-15 | 1998-09-29 | Ong; Lance | Predictive memory caching for media-on-demand systems |
US5768681A (en) * | 1995-08-22 | 1998-06-16 | International Business Machines Corporation | Channel conservation for anticipated load surge in video servers |
JPH0983979A (en) * | 1995-09-08 | 1997-03-28 | Fujitsu Ltd | Multiplex video server |
JPH0981497A (en) * | 1995-09-12 | 1997-03-28 | Toshiba Corp | Real-time stream server, storing method for real-time stream data and transfer method therefor |
US5721823A (en) * | 1995-09-29 | 1998-02-24 | Hewlett-Packard Co. | Digital layout method suitable for near video on demand system |
US6160547A (en) * | 1995-10-12 | 2000-12-12 | Asc Audio Video Corporation | Shared video data storage system with separate video data and information buses |
US5933603A (en) * | 1995-10-27 | 1999-08-03 | Emc Corporation | Video file server maintaining sliding windows of a video data set in random access memories of stream server computers for immediate video-on-demand service beginning at any specified location |
US5948062A (en) * | 1995-10-27 | 1999-09-07 | Emc Corporation | Network file server using a cached disk array storing a network file directory including file locking information and data mover computers each having file system software for shared read-write file access |
US5829046A (en) * | 1995-10-27 | 1998-10-27 | Emc Corporation | On-line tape backup using an integrated cached disk array |
US5737747A (en) * | 1995-10-27 | 1998-04-07 | Emc Corporation | Prefetching to service multiple video streams from an integrated cached disk array |
US6061504A (en) * | 1995-10-27 | 2000-05-09 | Emc Corporation | Video file server using an integrated cached disk array and stream server computers |
US5781226A (en) * | 1995-11-13 | 1998-07-14 | General Instrument Corporation Of Delaware | Network virtual memory for a cable television settop terminal |
US6321293B1 (en) * | 1995-11-14 | 2001-11-20 | Networks Associates, Inc. | Method for caching virtual memory paging and disk input/output requests |
US6003061A (en) * | 1995-12-07 | 1999-12-14 | Microsoft Corporation | Method and system for scheduling the use of a computer system resource using a resource planner and a resource provider |
US5784571A (en) * | 1995-12-14 | 1998-07-21 | Minerva Systems, Inc. | Method for reducing the bandwidth requirement in a system including a video decoder and a video encoder |
US5719983A (en) * | 1995-12-18 | 1998-02-17 | Symbios Logic Inc. | Method and apparatus for placement of video data based on disk zones |
JP2914261B2 (en) * | 1995-12-20 | 1999-06-28 | 富士ゼロックス株式会社 | External storage control device and external storage device control method |
US5768520A (en) * | 1996-01-29 | 1998-06-16 | International Business Machines Corporation | Method for determining load capacity by grouping physical components into logical components whose loads represent fixed proportional loads of physical components |
US7577782B2 (en) | 1996-02-02 | 2009-08-18 | Sony Corporation | Application programming interface for data transfer and bus management over a bus structure |
US6631435B1 (en) | 1996-02-02 | 2003-10-07 | Sony Corporation | Application programming interface for data transfer and bus management over a bus structure |
US5991520A (en) * | 1996-02-02 | 1999-11-23 | Sony Corporation | Application programming interface for managing and automating data transfer operations between applications over a bus structure |
JP3456085B2 (en) * | 1996-03-05 | 2003-10-14 | ソニー株式会社 | Data processing system and data processing method |
US6233637B1 (en) | 1996-03-07 | 2001-05-15 | Sony Corporation | Isochronous data pipe for managing and manipulating a high-speed stream of isochronous data flowing between an application and a bus structure |
US6519268B1 (en) | 1996-03-07 | 2003-02-11 | Sony Corporation | Asynchronous data pipe for automatically managing asynchronous data transfers between an application and a bus structure |
US5797043A (en) * | 1996-03-13 | 1998-08-18 | Diamond Multimedia Systems, Inc. | System for managing the transfer of data between FIFOs within pool memory and peripherals being programmable with identifications of the FIFOs |
US5784649A (en) * | 1996-03-13 | 1998-07-21 | Diamond Multimedia Systems, Inc. | Multi-threaded FIFO pool buffer and bus transfer control system |
US5918012A (en) * | 1996-03-29 | 1999-06-29 | British Telecommunications Public Limited Company | Hyperlinking time-based data files |
US5870551A (en) * | 1996-04-08 | 1999-02-09 | Lucent Technologies Inc. | Lookahead buffer replacement method using ratio of clients access order offsets and buffer data block offsets |
US5745756A (en) * | 1996-06-24 | 1998-04-28 | International Business Machines Corporation | Method and system for managing movement of large multi-media data files from an archival storage to an active storage within a multi-media server computer system |
US5860091A (en) * | 1996-06-28 | 1999-01-12 | Symbios, Inc. | Method and apparatus for efficient management of non-aligned I/O write request in high bandwidth raid applications |
US5909693A (en) * | 1996-08-12 | 1999-06-01 | Digital Video Systems, Inc. | System and method for striping data across multiple disks for continuous data streaming and increased bus utilization |
US5737577A (en) * | 1996-08-12 | 1998-04-07 | Digital Video Systems, Inc. | Complementary block storage for greater minimum data transfer rate |
US5893140A (en) * | 1996-08-14 | 1999-04-06 | Emc Corporation | File server having a file system cache and protocol for truly safe asynchronous writes |
US6298386B1 (en) | 1996-08-14 | 2001-10-02 | Emc Corporation | Network file server having a message collector queue for connection and connectionless oriented protocols |
EP0921155B1 (en) | 1996-08-23 | 2003-05-02 | Daikin Industries, Limited | Fluororubber coating composition |
US5894584A (en) * | 1996-08-28 | 1999-04-13 | Eastman Kodak Company | System for writing digitized X-ray images to a compact disc |
US5920700A (en) * | 1996-09-06 | 1999-07-06 | Time Warner Cable | System for managing the addition/deletion of media assets within a network based on usage and media asset metadata |
US5881245A (en) * | 1996-09-10 | 1999-03-09 | Digital Video Systems, Inc. | Method and apparatus for transmitting MPEG data at an adaptive data rate |
JPH1091360A (en) * | 1996-09-12 | 1998-04-10 | Fujitsu Ltd | Disk control system |
US5870553A (en) * | 1996-09-19 | 1999-02-09 | International Business Machines Corporation | System and method for on-demand video serving from magnetic tape using disk leader files |
KR100270354B1 (en) * | 1996-11-20 | 2000-11-01 | 정선종 | Relay server for heterogeneous networks and real-time relay method |
US5873100A (en) * | 1996-12-20 | 1999-02-16 | Intel Corporation | Internet browser that includes an enhanced cache for user-controlled document retention |
US7069575B1 (en) | 1997-01-13 | 2006-06-27 | Sedna Patent Services, Llc | System for interactively distributing information services |
US6166730A (en) * | 1997-12-03 | 2000-12-26 | Diva Systems Corporation | System for interactively distributing information services |
US6253375B1 (en) * | 1997-01-13 | 2001-06-26 | Diva Systems Corporation | System for interactively distributing information services |
US6305019B1 (en) | 1997-01-13 | 2001-10-16 | Diva Systems Corporation | System for interactively distributing information services having a remote video session manager |
JPH10207639A (en) * | 1997-01-28 | 1998-08-07 | Sony Corp | High speed data recording/reproducing device and method therefor |
US6803964B1 (en) | 1997-03-21 | 2004-10-12 | International Business Machines Corporation | Method and apparatus for processing digital data |
US6654933B1 (en) | 1999-09-21 | 2003-11-25 | Kasenna, Inc. | System and method for media stream indexing |
GB2323963B (en) * | 1997-04-04 | 1999-05-12 | Sony Corp | Data transmission apparatus and data transmission method |
FR2762416B1 (en) * | 1997-04-16 | 1999-05-21 | Thomson Multimedia Sa | METHOD AND DEVICE FOR ACCESSING DATA SETS CONTAINED IN A MASS MEMORY |
JPH10303840A (en) * | 1997-04-25 | 1998-11-13 | Sony Corp | Multi-channel broadcast system |
US5892915A (en) * | 1997-04-25 | 1999-04-06 | Emc Corporation | System having client sending edit commands to server during transmission of continuous media from one clip in play list for editing the play list |
US5987621A (en) * | 1997-04-25 | 1999-11-16 | Emc Corporation | Hardware and software failover services for a file server |
US5974503A (en) * | 1997-04-25 | 1999-10-26 | Emc Corporation | Storage and access of continuous media files indexed as lists of raid stripe sets associated with file names |
US5991894A (en) * | 1997-06-06 | 1999-11-23 | The Chinese University Of Hong Kong | Progressive redundancy transmission |
JPH114446A (en) * | 1997-06-12 | 1999-01-06 | Sony Corp | Method and system for decoding information signal |
US6032219A (en) * | 1997-08-01 | 2000-02-29 | Garmin Corporation | System and method for buffering data |
US6070228A (en) * | 1997-09-30 | 2000-05-30 | International Business Machines Corp. | Multimedia data storage system and method for operating a media server as a cache device and controlling a volume of data in the media server based on user-defined parameters |
US6502137B1 (en) | 1997-10-09 | 2002-12-31 | International Business Machines Corporation | System and method for transferring information over a computer network |
FR2769727B1 (en) * | 1997-10-09 | 2000-01-28 | St Microelectronics Sa | METHOD AND SYSTEM FOR CONTROLLING SHARED ACCESS TO A RAM |
FR2769728B1 (en) * | 1997-10-09 | 2000-01-28 | St Microelectronics Sa | IMPROVED METHOD AND SYSTEM FOR CONTROLLING SHARED ACCESS TO A RAM |
US5933834A (en) * | 1997-10-16 | 1999-08-03 | International Business Machines Incorporated | System and method for re-striping a set of objects onto an exploded array of storage units in a computer system |
US6070254A (en) * | 1997-10-17 | 2000-05-30 | International Business Machines Corporation | Advanced method for checking the integrity of node-based file systems |
CA2251456C (en) * | 1997-10-31 | 2007-02-13 | Sony Corporation | An apparatus for storing and transmitting data |
US6374336B1 (en) | 1997-12-24 | 2002-04-16 | Avid Technology, Inc. | Computer system and process for transferring multiple high bandwidth streams of data between multiple storage units and multiple applications in a scalable and reliable manner |
US6415373B1 (en) | 1997-12-24 | 2002-07-02 | Avid Technology, Inc. | Computer system and process for transferring multiple high bandwidth streams of data between multiple storage units and multiple applications in a scalable and reliable manner |
US6292844B1 (en) | 1998-02-12 | 2001-09-18 | Sony Corporation | Media storage device with embedded data filter for dynamically processing data during read and write operations |
US6167496A (en) * | 1998-02-18 | 2000-12-26 | Storage Technology Corporation | Data stream optimization system for video on demand |
JP3886243B2 (en) * | 1998-03-18 | 2007-02-28 | 富士通株式会社 | Information distribution device |
US6961801B1 (en) * | 1998-04-03 | 2005-11-01 | Avid Technology, Inc. | Method and apparatus for accessing video data in memory across flow-controlled interconnects |
US6128627A (en) * | 1998-04-15 | 2000-10-03 | Inktomi Corporation | Consistent data storage in an object cache |
US6289358B1 (en) | 1998-04-15 | 2001-09-11 | Inktomi Corporation | Delivering alternate versions of objects from an object cache |
US6128623A (en) | 1998-04-15 | 2000-10-03 | Inktomi Corporation | High performance object cache |
US6202124B1 (en) * | 1998-05-05 | 2001-03-13 | International Business Machines Corporation | Data storage system with outboard physical data transfer operation utilizing data path distinct from host |
US7272298B1 (en) | 1998-05-06 | 2007-09-18 | Burst.Com, Inc. | System and method for time-shifted program viewing |
US6230162B1 (en) * | 1998-06-20 | 2001-05-08 | International Business Machines Corporation | Progressive interleaved delivery of interactive descriptions and renderers for electronic publishing of merchandise |
US6704846B1 (en) * | 1998-06-26 | 2004-03-09 | Lsi Logic Corporation | Dynamic memory arbitration in an MPEG-2 decoding System |
CN1867068A (en) | 1998-07-14 | 2006-11-22 | 联合视频制品公司 | Client-server based interactive television program guide system with remote server recording |
US6215486B1 (en) * | 1998-07-20 | 2001-04-10 | Hewlett-Packard Company | Event handling in a single logical screen display using multiple remote computer systems |
US8380041B2 (en) | 1998-07-30 | 2013-02-19 | Tivo Inc. | Transportable digital video recorder system |
US8577205B2 (en) | 1998-07-30 | 2013-11-05 | Tivo Inc. | Digital video recording system |
US7558472B2 (en) | 2000-08-22 | 2009-07-07 | Tivo Inc. | Multimedia signal processing system |
US6233389B1 (en) | 1998-07-30 | 2001-05-15 | Tivo, Inc. | Multimedia time warping system |
US6269431B1 (en) | 1998-08-13 | 2001-07-31 | Emc Corporation | Virtual storage and block level direct access of secondary storage for recovery of backup data |
US6366987B1 (en) | 1998-08-13 | 2002-04-02 | Emc Corporation | Computer data storage physical backup and logical restore |
US6353878B1 (en) * | 1998-08-13 | 2002-03-05 | Emc Corporation | Remote control of backup media in a secondary storage subsystem through access to a primary storage subsystem |
US6167471A (en) | 1998-10-14 | 2000-12-26 | Sony Corporation | Method of and apparatus for dispatching a processing element to a program location based on channel number of received data |
US6859799B1 (en) | 1998-11-30 | 2005-02-22 | Gemstar Development Corporation | Search engine for video and graphics |
US6624761B2 (en) | 1998-12-11 | 2003-09-23 | Realtime Data, Llc | Content independent data compression method and system |
US6330366B1 (en) * | 1998-12-21 | 2001-12-11 | Intel Corporation | Method and apparatus for buffer management in video processing |
US6389494B1 (en) * | 1998-12-30 | 2002-05-14 | Emc Corporation | System for interfacing a data storage system to a host utilizing a plurality of busses for carrying end-user data and a separate bus for carrying interface state data |
US7073020B1 (en) | 1999-01-04 | 2006-07-04 | Emc Corporation | Method for message transfer in computer storage system |
US7117275B1 (en) | 1999-01-04 | 2006-10-03 | Emc Corporation | Data storage system having separate data transfer section and message network |
JP3419334B2 (en) * | 1999-01-14 | 2003-06-23 | 日本電気株式会社 | Data processing apparatus and method |
US6601104B1 (en) | 1999-03-11 | 2003-07-29 | Realtime Data Llc | System and methods for accelerated data storage and retrieval |
US6604158B1 (en) | 1999-03-11 | 2003-08-05 | Realtime Data, Llc | System and methods for accelerated data storage and retrieval |
KR100746842B1 (en) * | 1999-03-23 | 2007-08-09 | 코닌클리케 필립스 일렉트로닉스 엔.브이. | Multimedia server |
US6247069B1 (en) | 1999-05-12 | 2001-06-12 | Sony Corporation | Automatically configuring storage array including a plurality of media storage devices for storing and providing data within a network of devices |
US6748440B1 (en) * | 1999-05-12 | 2004-06-08 | Microsoft Corporation | Flow of streaming data through multiple processing modules |
US6859846B2 (en) | 1999-05-12 | 2005-02-22 | Sony Corporation | Method of distributed recording whereby the need to transition to a second recording device from a first recording device is broadcast by the first recording device |
US7222155B1 (en) | 1999-06-15 | 2007-05-22 | Wink Communications, Inc. | Synchronous updating of dynamic interactive applications |
US7069571B1 (en) | 1999-06-15 | 2006-06-27 | Wink Communications, Inc. | Automated retirement of interactive applications using retirement instructions for events and program states |
US7634787B1 (en) | 1999-06-15 | 2009-12-15 | Wink Communications, Inc. | Automatic control of broadcast and execution of interactive applications to maintain synchronous operation with broadcast programs |
US6742019B1 (en) * | 1999-07-23 | 2004-05-25 | International Business Machines Corporation | Sieved caching for increasing data rate capacity of a heterogeneous striping group |
US6728776B1 (en) * | 1999-08-27 | 2004-04-27 | Gateway, Inc. | System and method for communication of streaming data |
US6467028B1 (en) * | 1999-09-07 | 2002-10-15 | International Business Machines Corporation | Modulated cache for audio on the web |
US6704772B1 (en) * | 1999-09-20 | 2004-03-09 | Microsoft Corporation | Thread based email |
US7318090B1 (en) * | 1999-10-20 | 2008-01-08 | Sony Corporation | Method for utilizing concurrent context switching to support isochronous processes |
US6721859B1 (en) | 1999-10-21 | 2004-04-13 | Sony Corporation | Multi-protocol media storage device implementing protocols optimized for storing and retrieving both asynchronous and isochronous data |
EP1103973A3 (en) * | 1999-11-18 | 2002-02-06 | Pioneer Corporation | Apparatus for and method of recording and reproducing information |
US6523108B1 (en) | 1999-11-23 | 2003-02-18 | Sony Corporation | Method of and apparatus for extracting a string of bits from a binary bit string and depositing a string of bits onto a binary bit string |
US6651113B1 (en) * | 1999-12-22 | 2003-11-18 | Intel Corporation | System for writing data on an optical storage medium without interruption using a local write buffer |
KR100364401B1 (en) * | 1999-12-31 | 2002-12-11 | 엘지전자 주식회사 | Multi Media Service System Using Virtual Server |
US6701528B1 (en) | 2000-01-26 | 2004-03-02 | Hughes Electronics Corporation | Virtual video on demand using multiple encrypted video segments |
US7631338B2 (en) * | 2000-02-02 | 2009-12-08 | Wink Communications, Inc. | Interactive content delivery methods and apparatus |
US7028327B1 (en) | 2000-02-02 | 2006-04-11 | Wink Communication | Using the electronic program guide to synchronize interactivity with broadcast programs |
US20010047473A1 (en) | 2000-02-03 | 2001-11-29 | Realtime Data, Llc | Systems and methods for computer initialization |
EP1287677A2 (en) * | 2000-03-13 | 2003-03-05 | Comnet Media Corporation | Video data management, transmission, and control system and method employing distributed video segments microcasting |
US7398312B1 (en) * | 2000-03-29 | 2008-07-08 | Lucent Technologies Inc. | Method and system for caching streaming multimedia on the internet |
US6993621B1 (en) | 2000-03-31 | 2006-01-31 | Emc Corporation | Data storage system having separate data transfer section and message network with plural directors on a common printed circuit board and redundant switching networks |
US7003601B1 (en) | 2000-03-31 | 2006-02-21 | Emc Corporation | Data storage system having separate data transfer section and message network with plural directions on a common printed circuit board |
US7010575B1 (en) | 2000-03-31 | 2006-03-07 | Emc Corporation | Data storage system having separate data transfer section and message network having bus arbitration |
US7007194B1 (en) | 2000-06-29 | 2006-02-28 | Emc Corporation | Data storage system having point-to-point configuration |
US7278153B1 (en) * | 2000-04-12 | 2007-10-02 | Seachange International | Content propagation in interactive television |
US6779071B1 (en) | 2000-04-28 | 2004-08-17 | Emc Corporation | Data storage system having separate data transfer section and message network with status register |
US6757796B1 (en) * | 2000-05-15 | 2004-06-29 | Lucent Technologies Inc. | Method and system for caching streaming live broadcasts transmitted over a network |
IL136176A (en) * | 2000-05-16 | 2004-02-19 | Lightscape Networks Ltd | Rearrangement of data streams |
US8082572B1 (en) | 2000-06-08 | 2011-12-20 | The Directv Group, Inc. | Method and apparatus for transmitting, receiving, and utilizing audio/visual signals and other information |
US6970937B1 (en) | 2000-06-15 | 2005-11-29 | Abacast, Inc. | User-relayed data broadcasting |
US7720821B1 (en) | 2000-06-30 | 2010-05-18 | Sony Corporation | Method of and apparatus for writing and reading time sensitive data within a storage device |
US8140859B1 (en) | 2000-07-21 | 2012-03-20 | The Directv Group, Inc. | Secure storage and replay of media programs using a hard-paired receiver and storage device |
US7203314B1 (en) | 2000-07-21 | 2007-04-10 | The Directv Group, Inc. | Super encrypted storage and retrieval of media programs with modified conditional access functionality |
US7203311B1 (en) | 2000-07-21 | 2007-04-10 | The Directv Group, Inc. | Super encrypted storage and retrieval of media programs in a hard-paired receiver and storage device |
US7457414B1 (en) | 2000-07-21 | 2008-11-25 | The Directv Group, Inc. | Super encrypted storage and retrieval of media programs with smartcard generated keys |
US7277956B2 (en) | 2000-07-28 | 2007-10-02 | Kasenna, Inc. | System and method for improved utilization of bandwidth in a computer system serving multiple users |
US20020059499A1 (en) * | 2000-09-06 | 2002-05-16 | Hudson Michael D. | System and methods for performing last-element streaming |
US7103906B1 (en) | 2000-09-29 | 2006-09-05 | International Business Machines Corporation | User controlled multi-device media-on-demand system |
US20030069985A1 (en) * | 2000-10-02 | 2003-04-10 | Eduardo Perez | Computer readable media for storing video data |
US9143546B2 (en) | 2000-10-03 | 2015-09-22 | Realtime Data Llc | System and method for data feed acceleration and encryption |
US7417568B2 (en) | 2000-10-03 | 2008-08-26 | Realtime Data Llc | System and method for data feed acceleration and encryption |
US8692695B2 (en) | 2000-10-03 | 2014-04-08 | Realtime Data, Llc | Methods for encoding and decoding data |
KR20190096450A (en) | 2000-10-11 | 2019-08-19 | 로비 가이드스, 인크. | Systems and methods for delivering media content |
US6904475B1 (en) | 2000-11-06 | 2005-06-07 | Sony Corporation | Programmable first-in first-out (FIFO) memory buffer for concurrent data stream handling |
US6993604B2 (en) * | 2000-11-15 | 2006-01-31 | Seagate Technology Llc | Dynamic buffer size allocation for multiplexed streaming |
US7206854B2 (en) * | 2000-12-11 | 2007-04-17 | General Instrument Corporation | Seamless arbitrary data insertion for streaming media |
US7246369B1 (en) * | 2000-12-27 | 2007-07-17 | Info Valve Computing, Inc. | Broadband video distribution system using segments |
US7263714B2 (en) * | 2001-01-18 | 2007-08-28 | Blackarrow, Inc. | Providing content interruptions |
US7519273B2 (en) * | 2001-01-19 | 2009-04-14 | Blackarrow, Inc. | Content with advertisement information segment |
US7085842B2 (en) | 2001-02-12 | 2006-08-01 | Open Text Corporation | Line navigation conferencing system |
US7386046B2 (en) | 2001-02-13 | 2008-06-10 | Realtime Data Llc | Bandwidth sensitive data compression and decompression |
US6973666B1 (en) * | 2001-02-28 | 2005-12-06 | Unisys Corporation | Method of moving video data thru a video-on-demand system which avoids paging by an operating system |
EP1374080A2 (en) | 2001-03-02 | 2004-01-02 | Kasenna, Inc. | Metadata enabled push-pull model for efficient low-latency video-content distribution over a network |
JP2002268999A (en) * | 2001-03-09 | 2002-09-20 | Toshiba Corp | Method and device for reproducing contents |
US6941252B2 (en) * | 2001-03-14 | 2005-09-06 | Mcdata Corporation | Striping data frames across parallel fibre channel links |
US7065587B2 (en) * | 2001-04-02 | 2006-06-20 | Microsoft Corporation | Peer-to-peer name resolution protocol (PNRP) and multilevel cache for use therewith |
US20020147827A1 (en) * | 2001-04-06 | 2002-10-10 | International Business Machines Corporation | Method, system and computer program product for streaming of data |
US6779057B2 (en) | 2001-04-18 | 2004-08-17 | International Business Machines Corporation | Method, system, and program for indicating data transmitted to an input/output device as committed |
US20020157113A1 (en) * | 2001-04-20 | 2002-10-24 | Fred Allegrezza | System and method for retrieving and storing multimedia data |
WO2002093389A1 (en) * | 2001-05-17 | 2002-11-21 | Decru, Inc. | A stream-oriented interconnect for networked computer storage |
US7124292B2 (en) | 2001-05-21 | 2006-10-17 | Sony Corporation | Automatically configuring storage array including a plurality of media storage devices for storing and providing data within a network of devices |
KR20040003052A (en) * | 2001-06-05 | 2004-01-07 | 노오텔 네트웍스 리미티드 | Multiple threshold scheduler for scheduling transmission of data packets to mobile terminals based on a relative throughput spread |
US7076560B1 (en) | 2001-06-12 | 2006-07-11 | Network Appliance, Inc. | Methods and apparatus for storing and serving streaming media data |
US7155531B1 (en) | 2001-06-12 | 2006-12-26 | Network Appliance Inc. | Storage methods and apparatus for streaming media data |
US7478164B1 (en) | 2001-06-12 | 2009-01-13 | Netapp, Inc. | Methods and apparatus for pacing delivery of streaming media data |
US6742082B1 (en) * | 2001-06-12 | 2004-05-25 | Network Appliance | Pre-computing streaming media payload method and apparatus |
US7054911B1 (en) | 2001-06-12 | 2006-05-30 | Network Appliance, Inc. | Streaming media bitrate switching methods and apparatus |
US6813690B1 (en) * | 2001-06-12 | 2004-11-02 | Network Appliance, Inc. | Caching media data using content-sensitive identifiers |
US7444662B2 (en) * | 2001-06-28 | 2008-10-28 | Emc Corporation | Video file server cache management using movie ratings for reservation of memory and bandwidth resources |
US7027247B2 (en) * | 2001-06-28 | 2006-04-11 | Stmicroelectronics, Inc. | Servo circuit having a synchronous servo channel and method for synchronously recovering servo data |
WO2003003743A2 (en) * | 2001-06-29 | 2003-01-09 | Lightmotive Technologies | Method and apparatus for synchronization of parallel media networks |
US20030005455A1 (en) * | 2001-06-29 | 2003-01-02 | Bowers J. Rob | Aggregation of streaming media to improve network performance |
US7296080B2 (en) * | 2001-07-17 | 2007-11-13 | Mcafee, Inc. | Method of simulating network communications |
US7162698B2 (en) * | 2001-07-17 | 2007-01-09 | Mcafee, Inc. | Sliding window packet management systems |
US7149189B2 (en) | 2001-07-17 | 2006-12-12 | Mcafee, Inc. | Network data retrieval and filter systems and methods |
US7277957B2 (en) * | 2001-07-17 | 2007-10-02 | Mcafee, Inc. | Method of reconstructing network communications |
US7047308B2 (en) * | 2001-08-31 | 2006-05-16 | Sharp Laboratories Of America, Inc. | System and method for simultaneous media playout |
US7039955B2 (en) | 2001-09-14 | 2006-05-02 | The Directv Group, Inc. | Embedded blacklisting for digital broadcast system security |
US7409562B2 (en) | 2001-09-21 | 2008-08-05 | The Directv Group, Inc. | Method and apparatus for encrypting media programs for later purchase and viewing |
JP4659357B2 (en) * | 2001-09-21 | 2011-03-30 | ザ・ディレクティービー・グループ・インコーポレイテッド | Method and apparatus for controlling paired operation of conditional access module and integrated receiver and decoder |
US6757694B2 (en) * | 2001-10-03 | 2004-06-29 | International Business Machines Corporation | System and method for logically assigning unique names to devices in a storage system |
US20030135632A1 (en) * | 2001-12-13 | 2003-07-17 | Sophie Vrzic | Priority scheduler |
US6839819B2 (en) * | 2001-12-28 | 2005-01-04 | Storage Technology Corporation | Data management appliance |
US7036043B2 (en) | 2001-12-28 | 2006-04-25 | Storage Technology Corporation | Data management with virtual recovery mapping and backward moves |
US20030131253A1 (en) * | 2001-12-28 | 2003-07-10 | Martin Marcia Reid | Data management appliance |
US7386627B1 (en) | 2002-01-29 | 2008-06-10 | Network Appliance, Inc. | Methods and apparatus for precomputing checksums for streaming media |
US7412531B1 (en) | 2002-01-29 | 2008-08-12 | Blue Coat Systems, Inc. | Live stream archiving method and apparatus |
CN1647480A (en) * | 2002-04-09 | 2005-07-27 | 皇家飞利浦电子股份有限公司 | Transmission method combining downloading and streaming |
US7657644B1 (en) | 2002-05-10 | 2010-02-02 | Netapp, Inc. | Methods and apparatus for streaming media multicast |
US6948104B2 (en) * | 2002-06-26 | 2005-09-20 | Microsoft Corporation | System and method for transparent electronic data transfer using error correction to facilitate bandwidth-efficient data recovery |
US8090761B2 (en) * | 2002-07-12 | 2012-01-03 | Hewlett-Packard Development Company, L.P. | Storage and distribution of segmented media data |
US8200747B2 (en) * | 2002-07-12 | 2012-06-12 | Hewlett-Packard Development Company, L.P. | Session handoff of segmented media data |
US7403993B2 (en) * | 2002-07-24 | 2008-07-22 | Kasenna, Inc. | System and method for highly-scalable real-time and time-based data delivery using server clusters |
US7120751B1 (en) | 2002-08-09 | 2006-10-10 | Networks Appliance, Inc. | Dynamic streaming buffer cache algorithm selection |
GB2391963B (en) * | 2002-08-14 | 2004-12-01 | Flyingspark Ltd | Method and apparatus for preloading caches |
US8272020B2 (en) * | 2002-08-17 | 2012-09-18 | Disney Enterprises, Inc. | System for the delivery and dynamic presentation of large media assets over bandwidth constrained networks |
EP1535469A4 (en) * | 2002-08-30 | 2010-02-03 | Wink Communications Inc | Carousel proxy |
US7000241B2 (en) | 2002-11-21 | 2006-02-14 | The Directv Group, Inc. | Method and apparatus for minimizing conditional access information overhead while ensuring conditional access information reception in multi-tuner receivers |
US7225458B2 (en) | 2002-11-21 | 2007-05-29 | The Directv Group, Inc. | Method and apparatus for ensuring reception of conditional access information in multi-tuner receivers |
CN1320433C (en) * | 2002-12-11 | 2007-06-06 | 皇家飞利浦电子股份有限公司 | Methods and apparatus for improving the breathing of disk scheduling alogorithms |
US7093256B2 (en) * | 2002-12-13 | 2006-08-15 | Equator Technologies, Inc. | Method and apparatus for scheduling real-time and non-real-time access to a shared resource |
US7493646B2 (en) | 2003-01-30 | 2009-02-17 | United Video Properties, Inc. | Interactive television systems with digital video recording and adjustable reminders |
US7322042B2 (en) * | 2003-02-07 | 2008-01-22 | Broadon Communications Corp. | Secure and backward-compatible processor and secure software execution thereon |
US7779482B1 (en) | 2003-02-07 | 2010-08-17 | iGware Inc | Delivery of license information using a short messaging system protocol in a closed content distribution system |
US8131649B2 (en) | 2003-02-07 | 2012-03-06 | Igware, Inc. | Static-or-dynamic and limited-or-unlimited content rights |
US20100017627A1 (en) | 2003-02-07 | 2010-01-21 | Broadon Communications Corp. | Ensuring authenticity in a closed content distribution system |
US7991905B1 (en) | 2003-02-12 | 2011-08-02 | Netapp, Inc. | Adaptively selecting timeouts for streaming media |
KR100556844B1 (en) * | 2003-04-19 | 2006-03-10 | 엘지전자 주식회사 | Method for error detection of moving picture transmission system |
US7260539B2 (en) * | 2003-04-25 | 2007-08-21 | At&T Corp. | System for low-latency animation of talking heads |
US7533184B2 (en) * | 2003-06-13 | 2009-05-12 | Microsoft Corporation | Peer-to-peer name resolution wire protocol and message format data structure for use therein |
US7739715B2 (en) * | 2003-06-24 | 2010-06-15 | Microsoft Corporation | Variable play speed control for media streams |
JP4357239B2 (en) * | 2003-08-27 | 2009-11-04 | 三洋電機株式会社 | Video signal processing device and video display device |
US7593336B2 (en) | 2003-10-31 | 2009-09-22 | Brocade Communications Systems, Inc. | Logical ports in trunking |
US7619974B2 (en) * | 2003-10-31 | 2009-11-17 | Brocade Communications Systems, Inc. | Frame traffic balancing across trunk groups |
US7920623B2 (en) * | 2003-11-14 | 2011-04-05 | General Instrument Corporation | Method and apparatus for simultaneous display of multiple audio/video programs transmitted over a digital link |
US8185475B2 (en) | 2003-11-21 | 2012-05-22 | Hug Joshua D | System and method for obtaining and sharing media content |
US20060265329A1 (en) * | 2003-11-21 | 2006-11-23 | Realnetworks | System and method for automatically transferring dynamically changing content |
US7882034B2 (en) * | 2003-11-21 | 2011-02-01 | Realnetworks, Inc. | Digital rights management for content rendering on playback devices |
US8996420B2 (en) | 2003-11-21 | 2015-03-31 | Intel Corporation | System and method for caching data |
US8738537B2 (en) | 2003-11-21 | 2014-05-27 | Intel Corporation | System and method for relicensing content |
US20060259436A1 (en) * | 2003-11-21 | 2006-11-16 | Hug Joshua D | System and method for relicensing content |
JP4100340B2 (en) * | 2003-12-22 | 2008-06-11 | ソニー株式会社 | Magnetic recording / reproducing device |
US8521830B2 (en) * | 2003-12-22 | 2013-08-27 | International Business Machines Corporation | Pull-configured distribution of imagery |
US7293278B2 (en) * | 2004-01-13 | 2007-11-06 | Comcast Cable Holdings, Llc | On-demand digital asset management and distribution method and system |
US7580523B2 (en) | 2004-01-16 | 2009-08-25 | The Directv Group, Inc. | Distribution of video content using client to host pairing of integrated receivers/decoders |
US7599494B2 (en) | 2004-01-16 | 2009-10-06 | The Directv Group, Inc. | Distribution of video content using a trusted network key for sharing content |
US7548624B2 (en) | 2004-01-16 | 2009-06-16 | The Directv Group, Inc. | Distribution of broadcast content for remote decryption and viewing |
US7801303B2 (en) * | 2004-03-01 | 2010-09-21 | The Directv Group, Inc. | Video on demand in a broadcast network |
US8688803B2 (en) * | 2004-03-26 | 2014-04-01 | Microsoft Corporation | Method for efficient content distribution using a peer-to-peer networking infrastructure |
US7590243B2 (en) | 2004-05-04 | 2009-09-15 | The Directv Group, Inc. | Digital media conditional access system for handling digital media content |
US20050259751A1 (en) * | 2004-05-21 | 2005-11-24 | Howard Brad T | System and a method for controlling audio/video presentation on a sink device |
US20050262530A1 (en) * | 2004-05-24 | 2005-11-24 | Siemens Information And Communication Networks, Inc. | Systems and methods for multimedia communication |
KR101046586B1 (en) * | 2004-05-28 | 2011-07-06 | 삼성전자주식회사 | Display device and display system using same |
US7228364B2 (en) * | 2004-06-24 | 2007-06-05 | Dell Products L.P. | System and method of SCSI and SAS hardware validation |
US8249114B2 (en) * | 2004-08-10 | 2012-08-21 | Arris Solutions, Inc. | Method and device for receiving and providing programs |
US7543317B2 (en) | 2004-08-17 | 2009-06-02 | The Directv Group, Inc. | Service activation of set-top box functionality using broadcast conditional access system |
US8086575B2 (en) | 2004-09-23 | 2011-12-27 | Rovi Solutions Corporation | Methods and apparatus for integrating disparate media formats in a networked media system |
US7752325B1 (en) | 2004-10-26 | 2010-07-06 | Netapp, Inc. | Method and apparatus to efficiently transmit streaming media |
CA2588630C (en) | 2004-11-19 | 2013-08-20 | Tivo Inc. | Method and apparatus for secure transfer of previously broadcasted content |
US7630318B2 (en) * | 2004-12-15 | 2009-12-08 | Agilent Technologies, Inc. | Filtering wireless network packets |
US20060143612A1 (en) * | 2004-12-28 | 2006-06-29 | International Business Machines Corporation | Deskside device-based suspend/resume process |
US7908080B2 (en) | 2004-12-31 | 2011-03-15 | Google Inc. | Transportation routing |
US20060168631A1 (en) * | 2005-01-21 | 2006-07-27 | Sony Corporation | Method and apparatus for displaying content information |
GB2423841A (en) * | 2005-03-04 | 2006-09-06 | Mackenzie Ward Res Ltd | Method and apparatus for conveying audio and/or visual material |
US8516093B2 (en) | 2005-04-22 | 2013-08-20 | Intel Corporation | Playlist compilation system and method |
US7496678B2 (en) * | 2005-05-11 | 2009-02-24 | Netapp, Inc. | Method and system for unified caching of media content |
US20060271948A1 (en) * | 2005-05-11 | 2006-11-30 | Ran Oz | Method and Device for Receiving and Providing Programs |
US7752059B2 (en) | 2005-07-05 | 2010-07-06 | Cardiac Pacemakers, Inc. | Optimization of timing for data collection and analysis in advanced patient management system |
US20070073726A1 (en) | 2005-08-05 | 2007-03-29 | Klein Eric N Jr | System and method for queuing purchase transactions |
US9325944B2 (en) | 2005-08-11 | 2016-04-26 | The Directv Group, Inc. | Secure delivery of program content via a removable storage medium |
US8255546B2 (en) * | 2005-09-30 | 2012-08-28 | Microsoft Corporation | Peer name resolution protocol simple application program interface |
TW200728997A (en) * | 2005-11-08 | 2007-08-01 | Nokia Corp | System and method for providing feedback and forward transmission for remote interaction in rich media applications |
US9319720B2 (en) | 2005-12-13 | 2016-04-19 | Audio Pod Inc. | System and method for rendering digital content using time offsets |
US11128489B2 (en) * | 2017-07-18 | 2021-09-21 | Nicira, Inc. | Maintaining data-plane connectivity between hosts |
US8285809B2 (en) | 2005-12-13 | 2012-10-09 | Audio Pod Inc. | Segmentation and transmission of audio streams |
US9681105B2 (en) | 2005-12-29 | 2017-06-13 | Rovi Guides, Inc. | Interactive media guidance system having multiple devices |
US8607287B2 (en) | 2005-12-29 | 2013-12-10 | United Video Properties, Inc. | Interactive media guidance system having multiple devices |
US7793329B2 (en) | 2006-02-06 | 2010-09-07 | Kasenna, Inc. | Method and system for reducing switching delays between digital video feeds using multicast slotted transmission technique |
US20070245882A1 (en) * | 2006-04-04 | 2007-10-25 | Odenwald Michael J | Interactive computerized digital media management system and method |
US20070233816A1 (en) * | 2006-04-04 | 2007-10-04 | Odenwald Michael J | Digital media management system and method |
EP2033350A2 (en) | 2006-05-02 | 2009-03-11 | Broadon Communications Corp. | Content management system and method |
AU2007249777A1 (en) * | 2006-05-11 | 2007-11-22 | Cfph, Llc | Methods and apparatus for electronic file use and management |
US7992175B2 (en) | 2006-05-15 | 2011-08-02 | The Directv Group, Inc. | Methods and apparatus to provide content on demand in content broadcast systems |
US8996421B2 (en) | 2006-05-15 | 2015-03-31 | The Directv Group, Inc. | Methods and apparatus to conditionally authorize content delivery at broadcast headends in pay delivery systems |
US8001565B2 (en) | 2006-05-15 | 2011-08-16 | The Directv Group, Inc. | Methods and apparatus to conditionally authorize content delivery at receivers in pay delivery systems |
US8095466B2 (en) | 2006-05-15 | 2012-01-10 | The Directv Group, Inc. | Methods and apparatus to conditionally authorize content delivery at content servers in pay delivery systems |
US8775319B2 (en) | 2006-05-15 | 2014-07-08 | The Directv Group, Inc. | Secure content transfer systems and methods to operate the same |
US9225761B2 (en) | 2006-08-04 | 2015-12-29 | The Directv Group, Inc. | Distributed media-aggregation systems and methods to operate the same |
US9178693B2 (en) | 2006-08-04 | 2015-11-03 | The Directv Group, Inc. | Distributed media-protection systems and methods to operate the same |
US7624276B2 (en) | 2006-10-16 | 2009-11-24 | Broadon Communications Corp. | Secure device authentication system and method |
US9218213B2 (en) * | 2006-10-31 | 2015-12-22 | International Business Machines Corporation | Dynamic placement of heterogeneous workloads |
US7613915B2 (en) | 2006-11-09 | 2009-11-03 | BroadOn Communications Corp | Method for programming on-chip non-volatile memory in a secure processor, and a device so programmed |
US8200961B2 (en) | 2006-11-19 | 2012-06-12 | Igware, Inc. | Securing a flash memory block in a secure device system and method |
CA2672089A1 (en) * | 2006-12-08 | 2008-06-19 | Xm Satellite Radio Inc. | System for insertion of locally cached information into received broadcast stream to implement tiered subscription services |
US20090019492A1 (en) | 2007-07-11 | 2009-01-15 | United Video Properties, Inc. | Systems and methods for mirroring and transcoding media content |
US8078729B2 (en) * | 2007-08-21 | 2011-12-13 | Ntt Docomo, Inc. | Media streaming with online caching and peer-to-peer forwarding |
US8165450B2 (en) | 2007-11-19 | 2012-04-24 | Echostar Technologies L.L.C. | Methods and apparatus for filtering content in a video stream using text data |
US8136140B2 (en) | 2007-11-20 | 2012-03-13 | Dish Network L.L.C. | Methods and apparatus for generating metadata utilized to filter content from a video stream using text data |
US8165451B2 (en) | 2007-11-20 | 2012-04-24 | Echostar Technologies L.L.C. | Methods and apparatus for displaying information regarding interstitials of a video stream |
US8606085B2 (en) | 2008-03-20 | 2013-12-10 | Dish Network L.L.C. | Method and apparatus for replacement of audio data in recorded audio/video stream |
CN101287107B (en) * | 2008-05-29 | 2010-10-13 | 腾讯科技(深圳)有限公司 | Demand method, system and device of media file |
US8156520B2 (en) | 2008-05-30 | 2012-04-10 | EchoStar Technologies, L.L.C. | Methods and apparatus for presenting substitute content in an audio/video stream using text data |
US7944946B2 (en) | 2008-06-09 | 2011-05-17 | Fortinet, Inc. | Virtual memory protocol segmentation offloading |
US8601526B2 (en) | 2008-06-13 | 2013-12-03 | United Video Properties, Inc. | Systems and methods for displaying media content and media guidance information |
US8375137B2 (en) * | 2008-07-22 | 2013-02-12 | Control4 Corporation | System and method for streaming audio using a send queue |
US10063934B2 (en) | 2008-11-25 | 2018-08-28 | Rovi Technologies Corporation | Reducing unicast session duration with restart TV |
US8510771B2 (en) | 2008-12-24 | 2013-08-13 | Echostar Technologies L.L.C. | Methods and apparatus for filtering content from a presentation stream using signature data |
US8588579B2 (en) | 2008-12-24 | 2013-11-19 | Echostar Technologies L.L.C. | Methods and apparatus for filtering and inserting content into a presentation stream using signature data |
US8407735B2 (en) | 2008-12-24 | 2013-03-26 | Echostar Technologies L.L.C. | Methods and apparatus for identifying segments of content in a presentation stream using signature data |
US8332365B2 (en) | 2009-03-31 | 2012-12-11 | Amazon Technologies, Inc. | Cloning and recovery of data volumes |
EP2264604A1 (en) * | 2009-06-15 | 2010-12-22 | Thomson Licensing | Device for real-time streaming of two or more streams in parallel to a solid state memory device array |
US8437617B2 (en) | 2009-06-17 | 2013-05-07 | Echostar Technologies L.L.C. | Method and apparatus for modifying the presentation of content |
US20110004750A1 (en) * | 2009-07-03 | 2011-01-06 | Barracuda Networks, Inc | Hierarchical skipping method for optimizing data transfer through retrieval and identification of non-redundant components |
US8280895B2 (en) * | 2009-07-03 | 2012-10-02 | Barracuda Networks Inc | Multi-streamed method for optimizing data transfer through parallelized interlacing of data based upon sorted characteristics to minimize latencies inherent in the system |
US8495423B2 (en) * | 2009-08-11 | 2013-07-23 | International Business Machines Corporation | Flash-based memory system with robust backup and restart features and removable modules |
US9014546B2 (en) | 2009-09-23 | 2015-04-21 | Rovi Guides, Inc. | Systems and methods for automatically detecting users within detection regions of media devices |
US8776158B1 (en) | 2009-09-30 | 2014-07-08 | Emc Corporation | Asynchronous shifting windows caching for forward and backward video streaming |
US8661487B2 (en) * | 2009-10-12 | 2014-02-25 | At&T Intellectual Property I, L.P. | Accessing remote video devices |
US8510785B2 (en) * | 2009-10-19 | 2013-08-13 | Motorola Mobility Llc | Adaptive media caching for video on demand |
US8934758B2 (en) | 2010-02-09 | 2015-01-13 | Echostar Global B.V. | Methods and apparatus for presenting supplemental content in association with recorded content |
US8635390B2 (en) * | 2010-09-07 | 2014-01-21 | International Business Machines Corporation | System and method for a hierarchical buffer system for a shared data bus |
US8837278B2 (en) * | 2010-11-19 | 2014-09-16 | Microsoft Corporation | Concurrently applying an image file while it is being downloaded using a multicast protocol |
US9021537B2 (en) * | 2010-12-09 | 2015-04-28 | Netflix, Inc. | Pre-buffering audio streams |
JP5857273B2 (en) * | 2011-05-17 | 2016-02-10 | パナソニックIpマネジメント株式会社 | Stream processing device |
US8661479B2 (en) | 2011-09-19 | 2014-02-25 | International Business Machines Corporation | Caching large objects with multiple, unknown, and varying anchor points at an intermediary proxy device |
US8805418B2 (en) | 2011-12-23 | 2014-08-12 | United Video Properties, Inc. | Methods and systems for performing actions based on location-based rules |
GB2511668A (en) * | 2012-04-12 | 2014-09-10 | Supercell Oy | System and method for controlling technical processes |
US20140152600A1 (en) * | 2012-12-05 | 2014-06-05 | Asustek Computer Inc. | Touch display device for vehicle and display method applied for the same |
US9674563B2 (en) | 2013-11-04 | 2017-06-06 | Rovi Guides, Inc. | Systems and methods for recommending content |
US9794135B2 (en) | 2013-11-11 | 2017-10-17 | Amazon Technologies, Inc. | Managed service for acquisition, storage and consumption of large-scale data streams |
US20150156264A1 (en) * | 2013-12-04 | 2015-06-04 | International Business Machines Corporation | File access optimization using strategically partitioned and positioned data in conjunction with a collaborative peer transfer system |
US9471585B1 (en) * | 2013-12-20 | 2016-10-18 | Amazon Technologies, Inc. | Decentralized de-duplication techniques for largescale data streams |
US9547553B1 (en) | 2014-03-10 | 2017-01-17 | Parallel Machines Ltd. | Data resiliency in a shared memory pool |
US9781027B1 (en) | 2014-04-06 | 2017-10-03 | Parallel Machines Ltd. | Systems and methods to communicate with external destinations via a memory network |
US9690713B1 (en) | 2014-04-22 | 2017-06-27 | Parallel Machines Ltd. | Systems and methods for effectively interacting with a flash memory |
US9477412B1 (en) | 2014-12-09 | 2016-10-25 | Parallel Machines Ltd. | Systems and methods for automatically aggregating write requests |
US9785510B1 (en) | 2014-05-09 | 2017-10-10 | Amazon Technologies, Inc. | Variable data replication for storage implementing data backup |
US9734021B1 (en) | 2014-08-18 | 2017-08-15 | Amazon Technologies, Inc. | Visualizing restoration operation granularity for a database |
US9753873B1 (en) | 2014-12-09 | 2017-09-05 | Parallel Machines Ltd. | Systems and methods for key-value transactions |
US9639407B1 (en) | 2014-12-09 | 2017-05-02 | Parallel Machines Ltd. | Systems and methods for efficiently implementing functional commands in a data processing system |
US9781225B1 (en) | 2014-12-09 | 2017-10-03 | Parallel Machines Ltd. | Systems and methods for cache streams |
US9639473B1 (en) | 2014-12-09 | 2017-05-02 | Parallel Machines Ltd. | Utilizing a cache mechanism by copying a data set from a cache-disabled memory location to a cache-enabled memory location |
US9632936B1 (en) | 2014-12-09 | 2017-04-25 | Parallel Machines Ltd. | Two-tier distributed memory |
US10423493B1 (en) | 2015-12-21 | 2019-09-24 | Amazon Technologies, Inc. | Scalable log-based continuous data protection for distributed databases |
US10567500B1 (en) | 2015-12-21 | 2020-02-18 | Amazon Technologies, Inc. | Continuous backup of data in a distributed data store |
US10853182B1 (en) | 2015-12-21 | 2020-12-01 | Amazon Technologies, Inc. | Scalable log-based secondary indexes for non-relational databases |
US10649655B2 (en) | 2016-09-30 | 2020-05-12 | Western Digital Technologies, Inc. | Data storage system with multimedia assets |
US10848802B2 (en) | 2017-09-13 | 2020-11-24 | Cisco Technology, Inc. | IP traffic software high precision pacer |
US10990581B1 (en) | 2017-09-27 | 2021-04-27 | Amazon Technologies, Inc. | Tracking a size of a database change log |
US10754844B1 (en) | 2017-09-27 | 2020-08-25 | Amazon Technologies, Inc. | Efficient database snapshot generation |
US11182372B1 (en) | 2017-11-08 | 2021-11-23 | Amazon Technologies, Inc. | Tracking database partition change log dependencies |
US11042503B1 (en) | 2017-11-22 | 2021-06-22 | Amazon Technologies, Inc. | Continuous data protection and restoration |
US11269731B1 (en) | 2017-11-22 | 2022-03-08 | Amazon Technologies, Inc. | Continuous data protection |
US10621049B1 (en) | 2018-03-12 | 2020-04-14 | Amazon Technologies, Inc. | Consistent backups based on local node clock |
US11126505B1 (en) | 2018-08-10 | 2021-09-21 | Amazon Technologies, Inc. | Past-state backup generator and interface for database systems |
US10848539B2 (en) | 2018-09-20 | 2020-11-24 | Cisco Technology, Inc. | Genlock mechanism for software pacing of media constant bit rate streams |
US11042454B1 (en) | 2018-11-20 | 2021-06-22 | Amazon Technologies, Inc. | Restoration of a data source |
US10534736B1 (en) * | 2018-12-31 | 2020-01-14 | Texas Instruments Incorporated | Shared buffer for multi-output display systems |
US11172269B2 (en) | 2020-03-04 | 2021-11-09 | Dish Network L.L.C. | Automated commercial content shifting in a video streaming system |
US20220070525A1 (en) * | 2020-08-26 | 2022-03-03 | Mediatek Singapore Pte. Ltd. | Multimedia device and related method for a video mute mode |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0368683A2 (en) * | 1988-11-11 | 1990-05-16 | Victor Company Of Japan, Limited | Data handling apparatus |
CA2117422A1 (en) * | 1992-02-11 | 1993-08-19 | Mark C. Koz | Adaptive video file server and methods for its use |
CA2071416A1 (en) * | 1992-06-17 | 1993-12-18 | Michel Fortier | Video store and forward apparatus and method |
GB2270791A (en) * | 1992-09-21 | 1994-03-23 | Grass Valley Group | Video disk storage array |
WO1994012937A2 (en) * | 1992-11-17 | 1994-06-09 | Starlight Networks, Inc. | Method of operating a disk storage system |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4679191A (en) * | 1983-05-04 | 1987-07-07 | Cxc Corporation | Variable bandwidth switching system |
US4616263A (en) * | 1985-02-11 | 1986-10-07 | Gte Corporation | Video subsystem for a hybrid videotex facility |
US5089885A (en) * | 1986-11-14 | 1992-02-18 | Video Jukebox Network, Inc. | Telephone access display system with remote monitoring |
IT1219727B (en) * | 1988-06-16 | 1990-05-24 | Italtel Spa | BROADBAND COMMUNICATION SYSTEM |
US4949187A (en) * | 1988-12-16 | 1990-08-14 | Cohen Jason M | Video communications system having a remotely controlled central source of video and audio data |
US5099319A (en) * | 1989-10-23 | 1992-03-24 | Esch Arthur G | Video information delivery method and apparatus |
CA2022302C (en) * | 1990-07-30 | 1995-02-28 | Douglas J. Ballantyne | Method and apparatus for distribution of movies |
US5166930A (en) * | 1990-12-17 | 1992-11-24 | At&T Bell Laboratories | Data channel scheduling discipline arrangement and method |
EP0529864B1 (en) * | 1991-08-22 | 2001-10-31 | Sun Microsystems, Inc. | Network video server apparatus and method |
DE69223996T2 (en) * | 1992-02-11 | 1998-08-06 | Intelligent Instr Corp | ADAPTIVE VIDEO FILE PROCESSOR AND METHOD FOR ITS APPLICATION |
US5274642A (en) * | 1992-06-05 | 1993-12-28 | Indra Widjaja | Output buffered packet switch with a flexible buffer management scheme |
US5289461A (en) * | 1992-12-14 | 1994-02-22 | International Business Machines Corporation | Interconnection method for digital multimedia communications |
US5414455A (en) * | 1993-07-07 | 1995-05-09 | Digital Equipment Corporation | Segmented video on demand system |
US5442390A (en) * | 1993-07-07 | 1995-08-15 | Digital Equipment Corporation | Video on demand with memory accessing and or like functions |
- 1994-09-08 US US08/302,619 patent/US5586264A/en not_active Expired - Fee Related
- 1995-07-07 CA CA002153444A patent/CA2153444A1/en not_active Abandoned
- 1995-08-25 EP EP95305966A patent/EP0702491A1/en not_active Withdrawn
- 1995-09-07 JP JP22994495A patent/JP3234752B2/en not_active Expired - Fee Related
Non-Patent Citations (1)
Title |
---|
D. DELODDERE ET AL.: "Interactive Video on Demand", IEEE COMMUNICATIONS MAGAZINE, vol. 32, no. 5, May 1994 (1994-05-01), PISCATAWAY, NJ US, pages 82 - 88, XP000451098 * |
Cited By (42)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7849393B1 (en) | 1992-12-09 | 2010-12-07 | Discovery Communications, Inc. | Electronic book connection to world watch live |
US7835989B1 (en) | 1992-12-09 | 2010-11-16 | Discovery Communications, Inc. | Electronic book alternative delivery systems |
US7865405B2 (en) | 1992-12-09 | 2011-01-04 | Discovery Patent Holdings, Llc | Electronic book having electronic commerce features |
US7770196B1 (en) | 1992-12-09 | 2010-08-03 | Comcast Ip Holdings I, Llc | Set top terminal for organizing program options available in television delivery system |
US9286294B2 (en) | 1992-12-09 | 2016-03-15 | Comcast Ip Holdings I, Llc | Video and digital multimedia aggregator content suggestion engine |
US7716349B1 (en) | 1992-12-09 | 2010-05-11 | Discovery Communications, Inc. | Electronic book library/bookstore system |
US8073695B1 (en) | 1992-12-09 | 2011-12-06 | Adrea, LLC | Electronic book with voice emulation features |
US8095949B1 (en) | 1993-12-02 | 2012-01-10 | Adrea, LLC | Electronic book with restricted access features |
US7865567B1 (en) | 1993-12-02 | 2011-01-04 | Discovery Patent Holdings, Llc | Virtual on-demand electronic book |
US9053640B1 (en) | 1993-12-02 | 2015-06-09 | Adrea, LLC | Interactive electronic book |
US7861166B1 (en) | 1993-12-02 | 2010-12-28 | Discovery Patent Holding, Llc | Resizing document pages to fit available hardware screens |
US6675386B1 (en) | 1996-09-04 | 2004-01-06 | Discovery Communications, Inc. | Apparatus for video access and control over computer network, including image correction |
US6205525B1 (en) | 1997-07-02 | 2001-03-20 | U.S. Philips Corporation | System for supplying data streams |
WO1999001808A2 (en) * | 1997-07-02 | 1999-01-14 | Koninklijke Philips Electronics N.V. | System for supplying data streams |
WO1999001808A3 (en) * | 1997-07-02 | 1999-03-25 | Koninkl Philips Electronics Nv | System for supplying data streams |
EP1025696A4 (en) * | 1997-09-04 | 2002-01-02 | Discovery Communicat Inc | Apparatus for video access and control over computer network, including image correction |
EP1309194A1 (en) * | 1997-09-04 | 2003-05-07 | Discovery Communications, Inc. | Apparatus for video access and control over computer network, including image correction |
EP1025696A1 (en) * | 1997-09-04 | 2000-08-09 | Sedna Patent Services, LLC | Apparatus for video access and control over computer network, including image correction |
WO1999014954A1 (en) * | 1997-09-18 | 1999-03-25 | Microsoft Corporation | Continuous media file server system and method for scheduling disk reads while playing multiple files having different transmission rates |
US6457057B1 (en) | 1997-11-04 | 2002-09-24 | Matsushita Electric Industrial Co., Ltd. | System for displaying a plurality of pictures and apparatuses incorporating the same |
EP0915622A3 (en) * | 1997-11-04 | 2001-11-14 | Matsushita Electric Industrial Co., Ltd. | System for coding and displaying a plurality of pictures |
EP0915622A2 (en) * | 1997-11-04 | 1999-05-12 | Matsushita Electric Industrial Co., Ltd. | System for coding and displaying a plurality of pictures |
US6381620B1 (en) | 1997-12-10 | 2002-04-30 | Matsushita Electric Industrial Co., Ltd. | Rich text medium displaying method and picture information providing system using calculated average reformatting time for multimedia objects |
EP0929044A3 (en) * | 1997-12-10 | 2000-11-08 | Matsushita Electric Industrial Co., Ltd. | Rich text medium displaying method and picture information providing system |
EP0929044A2 (en) * | 1997-12-10 | 1999-07-14 | Matsushita Electric Industrial Co., Ltd. | Rich text medium displaying method and picture information providing system |
EP0974909A3 (en) * | 1998-07-20 | 2001-09-05 | Hewlett-Packard Company, A Delaware Corporation | Data transfer management system for managing burst data transfer |
EP0974909A2 (en) * | 1998-07-20 | 2000-01-26 | Hewlett-Packard Company | Data transfer management system for managing burst data transfer |
US9099097B2 (en) | 1999-06-25 | 2015-08-04 | Adrea, LLC | Electronic book with voice emulation features |
US8548813B2 (en) | 1999-06-25 | 2013-10-01 | Adrea, LLC | Electronic book with voice emulation features |
EP1065584A1 (en) * | 1999-06-29 | 2001-01-03 | Telefonaktiebolaget Lm Ericsson | Command handling in a data processing system |
EP1160672A3 (en) * | 2000-04-04 | 2002-05-02 | International Business Machines Corporation | System and method for caching sets of objects |
EP1160672A2 (en) * | 2000-04-04 | 2001-12-05 | International Business Machines Corporation | System and method for caching sets of objects |
US9813641B2 (en) | 2000-06-19 | 2017-11-07 | Comcast Ip Holdings I, Llc | Method and apparatus for targeting of interactive virtual objects |
US9078014B2 (en) | 2000-06-19 | 2015-07-07 | Comcast Ip Holdings I, Llc | Method and apparatus for targeting of interactive virtual objects |
US8621521B2 (en) | 2001-08-03 | 2013-12-31 | Comcast Ip Holdings I, Llc | Video and digital multimedia aggregator |
US8578410B2 (en) | 2001-08-03 | 2013-11-05 | Comcast Ip Holdings, I, Llc | Video and digital multimedia aggregator content coding and formatting |
US10140433B2 (en) | 2001-08-03 | 2018-11-27 | Comcast Ip Holdings I, Llc | Video and digital multimedia aggregator |
US10349096B2 (en) | 2001-08-03 | 2019-07-09 | Comcast Ip Holdings I, Llc | Video and digital multimedia aggregator content coding and formatting |
US6947947B2 (en) * | 2001-08-17 | 2005-09-20 | Universal Business Matrix Llc | Method for adding metadata to data |
CN101854309A (en) * | 2010-06-18 | 2010-10-06 | 中兴通讯股份有限公司 | Method and apparatus for managing message output |
WO2012107341A2 (en) | 2011-02-07 | 2012-08-16 | Alcatel Lucent | A cache manager for segmented multimedia and corresponding method for cache management |
EP2487609A1 (en) | 2011-02-07 | 2012-08-15 | Alcatel Lucent | A cache manager for segmented multimedia and corresponding method for cache management |
Also Published As
Publication number | Publication date |
---|---|
JPH0887385A (en) | 1996-04-02 |
US5586264A (en) | 1996-12-17 |
JP3234752B2 (en) | 2001-12-04 |
CA2153444A1 (en) | 1996-03-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP0701371B1 (en) | Video media streamer | |
US5805821A (en) | Video optimized media streamer user interface employing non-blocking switching to achieve isochronous data transfers | |
CA2154038C (en) | Video optimized media streamer data flow architecture | |
US5603058A (en) | Video optimized media streamer having communication nodes received digital data from storage node and transmitted said data to adapters for generating isochronous digital data streams | |
US5586264A (en) | Video optimized media streamer with cache management | |
US5712976A (en) | Video data streamer for simultaneously conveying same one or different ones of data blocks stored in storage node to each of plurality of communication nodes | |
US5606359A (en) | Video on demand system with multiple data sources configured to provide vcr-like services | |
US5987621A (en) | Hardware and software failover services for a file server | |
US5790794A (en) | Video storage unit architecture | |
US5892915A (en) | System having client sending edit commands to server during transmission of continuous media from one clip in play list for editing the play list | |
US6005599A (en) | Video storage and delivery apparatus and system | |
US5974503A (en) | Storage and access of continuous media files indexed as lists of raid stripe sets associated with file names | |
EP0701373B1 (en) | Video server system | |
EP1175776B1 (en) | Video on demand system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase | Free format text: ORIGINAL CODE: 0009012 |
| AK | Designated contracting states | Kind code of ref document: A1; Designated state(s): DE FR GB |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
| 18D | Application deemed to be withdrawn | Effective date: 19960921 |