US5678061A - Method for employing doubly striped mirroring of data and reassigning data streams scheduled to be supplied by failed disk to respective ones of remaining disks - Google Patents

Info

Publication number
US5678061A
US5678061A, US08/504,096, US50409695A
Authority
US
United States
Prior art keywords: data, disks, disk, program, block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
US08/504,096
Inventor
Antoine N. Mourad
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nokia of America Corp
Original Assignee
Lucent Technologies Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lucent Technologies Inc
Assigned to AT&T IPM CORP.: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: MOURAD, ANTOINE N.
Priority to US08/504,096
Priority to EP96305095A (published as EP0755009A2)
Priority to JP8185827A (published as JPH09114605A)
Assigned to LUCENT TECHNOLOGIES INC.: ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: AT&T CORP.
Publication of US5678061A
Application granted
Assigned to THE CHASE MANHATTAN BANK, AS COLLATERAL AGENT: CONDITIONAL ASSIGNMENT OF AND SECURITY INTEREST IN PATENT RIGHTS. Assignors: LUCENT TECHNOLOGIES INC. (DE CORPORATION)
Assigned to LUCENT TECHNOLOGIES INC.: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENT RIGHTS. Assignors: JPMORGAN CHASE BANK, N.A. (FORMERLY KNOWN AS THE CHASE MANHATTAN BANK), AS ADMINISTRATIVE AGENT
Assigned to ALCATEL-LUCENT USA INC.: MERGER (SEE DOCUMENT FOR DETAILS). Assignors: LUCENT TECHNOLOGIES INC.

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/16 Error detection or correction of the data by redundancy in hardware
    • G06F 11/20 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F 11/2053 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements, where persistent mass storage functionality or persistent mass storage control functionality is redundant
    • G06F 11/2094 Redundant storage or storage space
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B 20/00 Signal processing not specific to the method of recording or reproducing; Circuits therefor
    • G11B 20/10 Digital recording or reproducing
    • G11B 20/18 Error detection or correction; Testing, e.g. of drop-outs
    • G11B 20/1803 Error detection or correction; Testing, e.g. of drop-outs, by redundancy in data representation
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B 20/00 Signal processing not specific to the method of recording or reproducing; Circuits therefor
    • G11B 20/10 Digital recording or reproducing
    • G11B 20/18 Error detection or correction; Testing, e.g. of drop-outs
    • G11B 20/1806 Pulse code modulation systems for audio signals
    • G11B 20/1809 Pulse code modulation systems for audio signals by interleaving
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11C STATIC STORES
    • G11C 29/00 Checking stores for correct operation; Subsequent repair; Testing stores during standby or offline operation
    • G11C 29/70 Masking faults in memories by using spares or by reconfiguring
    • G11C 29/74 Masking faults in memories by using spares or by reconfiguring using duplex memories, i.e. using dual copies
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/16 Analogue secrecy systems; Analogue subscription systems
    • H04N 7/173 Analogue secrecy systems; Analogue subscription systems with two-way working, e.g. subscriber sending a programme selection signal
    • H04N 7/17309 Transmission or handling of upstream communications
    • H04N 7/17336 Handling of requests in head-ends

Abstract

The reliability of supplying data stored in a plurality of different memories to different users is enhanced by (a) dividing each of the memories into primary and secondary sections, (b) partitioning the data into successive blocks and (c) storing the blocks of data in sequence in respective ones of the primary sections. The blocks of data that have been stored in the primary section of one of the memories are then stored in sequence in respective ones of the secondary sections of the other memories.

Description

FIELD OF THE INVENTION
The invention relates to striping (interleaving) data across multiple disks to improve concurrent access to the data by a plurality of different users.
BACKGROUND OF THE INVENTION
So-called disk striping (or interleaving) is used to provide increased bandwidth in the transmission of data by transferring data in parallel to or from multiple disks. Disk striping, or interleaving, more particularly, consists of combining the memory capacity of multiple disks and spreading the storage of data across the disks such that the first striping unit is on the first disk, the second striping unit is on the second disk, and the Nth striping unit is on disk ((N-1) mod M)+1, where M is the number of disks involved in the storage of the data.
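As an illustration of that mapping, the following minimal Python sketch (an editorial illustration; the function name and 1-based unit numbering are assumptions, not part of the patent) computes which disk holds a given striping unit.

```python
def disk_for_striping_unit(n, M):
    """Return the 1-based disk index that holds the n-th striping unit
    (n = 1, 2, 3, ...) when data is spread round-robin over M disks,
    i.e. disk ((n - 1) mod M) + 1."""
    return ((n - 1) % M) + 1

# With M = 4 disks, striping units 1..8 land on disks 1, 2, 3, 4, 1, 2, 3, 4.
print([disk_for_striping_unit(n, 4) for n in range(1, 9)])
```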
Disk striping is particularly useful in storing video programs that will be used to implement video-on-demand (VOD) servers. Disk-based VOD servers use disk striping to ensure an acceptable level of concurrent access to each video program (object) stored in the server and to ensure an acceptable level of access time to a stored video segment that is to be transmitted to a user/subscriber. Disk striping, however, makes a server (VOD system) more vulnerable to component (especially disk) failures, since a single component failure could cause an entire disk array (group) to become inoperable, thereby preventing users from having access to video programs stored on the disk array. The various hardware approaches taken thus far to deal with this problem have proved to be costly, and handle only disk failures. Moreover, such approaches require additional data buffering in the server.
SUMMARY OF THE INVENTION
The foregoing problem is addressed and the relevant art is advanced by interleaving the content of data (e.g., a video program) across multiple storage disks using what I call a doubly-striped mirror arrangement, which allows a data system to use a large fraction of the available disk bandwidth in the server, as compared to prior mirrored approaches. Specifically, in accord with an aspect of the invention, a fraction, e.g., one-half, of each disk is used as primary storage while the remaining fraction is used as backup storage, such that the data (e.g., the contents of a video program) is partitioned into data units which are striped across the primary storage areas of a plurality of disks, e.g., N disks. When a "primary" copy of the content of the data has been so stored, then a backup copy is made such that the contents of each such primary storage area are striped across the backup storage areas of the other disks.
The following detailed description and drawings identify other aspects of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
In the drawing:
FIG. 1 is a broad block diagram of a data server in which the principles of the invention may be practiced;
FIG. 2 is an illustrative example of the way in which data may be striped across a plurality of disks in accord with an aspect of the invention;
FIG. 3 illustrates a schedule that a storage node of FIG. 1 may create for the unloading of data blocks stored in an associated disk;
FIG. 4 is a flow chart of the program that the host processor of FIG. 1 invokes to create a schedule to supply the data blocks on the disks of FIG. 1 to a user;
FIG. 5 is an example of a schedule for a system having four disks and three data streams per cycle;
FIG. 6 is a flow chart of the program that the host processor of FIG. 1 uses to reschedule the provisioning of data blocks that were to be supplied by a disk that failed;
FIG. 7 is an illustrative example of the result generated by the program of FIG. 6; and
FIG. 8 illustrates an alternative to the program of FIG. 6.
DETAILED DESCRIPTION
The invention will be described in the context of a Video-On-Demand (VOD) system. It is to be understood, of course, that such a description should not be construed as a limitation, since it may be appreciated from the ensuing description that the claimed invention may be readily used in other data applications, i.e., an information delivery system of a generalized form.
With that in mind, in an illustrative embodiment of the invention, multiplexer 75, FIG. 1, which may be, for example, an Asynchronous Transfer Mode (ATM) switch, receives data in the form of packets from storage nodes 50-1 through 50-N, respectively. Multiplexer 75 then routes the received packets over respective virtual circuit connections via communications paths 75-1 through 75-N to their intended destinations, in which a packet may carry a segment of a video program and in which the content of the video program may be doubly striped and mirrored in accord with the invention across storage disks 40-11 through 40-j4, as will be explained below in detail. Because of such striping, a storage node 50i may supply a packet carrying a respective portion of a segment of a particular video program to multiplexer 75 via a virtual channel assigned to the segments that are stored in storage disks 40k associated with storage node 50i and that are to be delivered to the same destination (subscriber), where N, j, i and k>1. Thus, segments of the content of a video program striped across the storage disks may be supplied to multiplexer 75 via respective ones of a number of different virtual data channels assigned for that purpose. Once such contents have been so delivered, then the assigned virtual data channels may be reassigned for some other but similar purpose (or left idle).
It is seen from FIG. 1, that server 100 includes host processor 25 which interfaces a user with server 100 for the purpose of processing an order (request) received from a user, e.g., one of the users 10-1 through 10-i, in which the request may pertain to a particular video program (such as a movie). The requested program may be one that has been segmented and striped across disks 40-11 through 40-j4, which segments are then unloaded in sequence and transmitted (routed) to the user via multiplexer 75 and the appropriate one of the communications paths 76-1 through 76-N. The request, on the other hand, may be directed to the stopping, restarting, etc., of the video program.
Processor 25, more particularly, first determines if it has the resources available to service the request. That is, if a disk is able to support n data streams and the content of a program is striped over N disks, then nN streams (users) can be supported from the array of N disks, where n and N>1. Thus, server 100 may service the user's request if the current number of data streams that are being supplied by the array of disks 40-11 through 40-j4 to multiplexer 75 is less than nN. Assuming that is the case, then processor 25 communicates with multiplexer 75 to obtain channel assignments that the storage nodes may use to sequentially transmit their respective video segments that form the requested video to multiplexer 75. Included in such communication is a request to establish a virtual connection between each of the assigned channels and a channel of one of the communications paths 76-1 through 76-N that will be used to route the program to the user. In addition, processor 25 establishes a schedule that the storage nodes 50i are to follow in the delivery of the segments to the user via multiplexer 75. Processor 25 then supplies the assigned channels and schedule to each of the storage nodes 50i via Local Area Network (LAN) 30, as discussed below. The schedule includes the identity of the storage node, e.g., 50-1, and associated disk, e.g., disk 40-11, storing the initial (first) segment of the requested program. The schedule also includes the time of day that the first segment of the program is to be unloaded from disk and supplied to multiplexer 75. Processor 25 then forms a message containing, inter alia, the (a) schedule established for running the program, (b) identity of the program, (c) identity of the storage node containing the first segment of the program, (d) channel that has been assigned to the node whose address is contained in the header of the message, and (e) time of day that the first segment is to be transmitted to the user. Processor 25 then sends the message to the storage node identified in the message via LAN 30. Processor 25 then updates the message so that it is applicable to a next one of the storage nodes and sends the message to that node via LAN 30. In an illustrative embodiment of the invention, processor 25 sends the message first to the storage node 50i containing the first segment of the requested program. Processor 25 sequentially sends the message to the remaining storage nodes based on the order of their respective addresses and updates the message following each such transmission.
Since the storage nodes 50i are similar to one another, a discussion of one such node equally pertains to the other storage nodes. It is thus seen that storage node 50-1, FIG. 1, includes microprocessor 52-3 for communicating with host 25, in which the communications are typically directed to setting up a schedule for the delivery of respective video segments stored in buffer 52-2 to multiplexer 75 via the assigned data channel and for controlling the storing and unloading of the segments from buffer 52-2. Buffer 52-2 represents a dual buffer arrangement in which microprocessor 52-3 unloads segments of video from respective ones of the disks 40-11 through 40-14 and stores the unloaded segments in a first one of the dual buffers 52-2 during a first cycle. During that same cycle, e.g., a time period of one second, microprocessor 52-3 in turn unloads portions of respective segments of respective videos stored in the second one of the dual buffers 52-2 during a previous cycle. Adapter 52-1 reads a packet from the buffer and transmits the packet to multiplexer 75 via communication path 51-1 (which may be, e.g., optical fiber) and the channel assigned for that particular purpose. OC3 adapter 52-1 implements the well-known OC3 protocol for interfacing a data terminal, e.g., storage node 50-1, with an optical fiber communications path, e.g., path 51-1. SCSI adapter 52-4, on the other hand, implements a Small Computer System Interface between microprocessor 52-3 and its associated disks 40-11 through 40-14 via bus 45-1.
Microprocessor 52-3 repeats the foregoing process during a next cycle, at which time the first buffer is being unloaded and the second buffer is being loaded with video segments obtained from disks 40-11 through 40-14, and so on.
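The double-buffered cycle just described can be summarized in the following short Python sketch. It is only an editorial illustration; the helper functions, data shapes and buffer representation are assumptions, not anything specified in the patent.

```python
def load_segments(disks, cycle):
    # Placeholder for the SCSI side: read this cycle's scheduled segments.
    return [f"segment(disk={d}, cycle={cycle})" for d in disks]

def transmit_packets(segments, multiplexer):
    # Placeholder for the OC3 side: packetize and hand segments to the mux.
    multiplexer.extend(segments)

def run_cycles(disks, multiplexer, num_cycles):
    """While one buffer is being filled from the disks, the buffer filled
    during the previous cycle is drained toward the multiplexer."""
    buffers = [[], []]                 # the two halves of the dual buffer
    filling, draining = 0, 1
    for cycle in range(num_cycles):
        buffers[filling] = load_segments(disks, cycle)
        transmit_packets(buffers[draining], multiplexer)
        filling, draining = draining, filling   # swap roles each cycle

mux = []
run_cycles(disks=[1, 2, 3, 4], multiplexer=mux, num_cycles=3)
print(len(mux))   # 8: segments loaded in cycles 0 and 1 have been transmitted
```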
As mentioned above, a video program is doubly striped and mirrored across disks 40-11 through 40-j4. Specifically, in accord with an aspect of the invention, the striping of a video is confined to a predefined portion of each such disk, i.e., a primary section of a disk, and a backup copy of a disk is then made by striping the contents of the disk across another predefined portion, i.e., a secondary section, of each of the other disks. To say it another way, the contents of a video program are striped across N disks by dividing the program into consecutive blocks, D0 through Da, of fixed size U, e.g., three megabits, and striping the blocks across the primary sections of the disks in round-robin order. The contents of each disk are then striped across the secondary sections of the other disks to create a backup copy of such contents. An example of the inventive striping is shown in FIG. 2.
For the sake of simplicity and clarity, only disks 65-1 through 65-4 of a server 60, FIG. 2, are shown. (The other elements of the server 60, e.g., storage node, host processor, etc., are not shown.) Assume that the host processor has divided a video program into a plurality of sequential data blocks (segments) D0 through Di for storage in the disks, in accord with the invention. To do so, the host processor stores the data blocks D0 through Di in sequential order (i.e., round-robin order) in the primary sections P of disks 65-1 through 65-4, respectively, as shown in FIG. 2. That is, the host processor stripes the data blocks across the primary sections of disks 65-1 through 65-4. The host processor then makes a backup copy of the contents of the primary sections of the disks 65i, for example, 65-1. It does this, in accord with an aspect of the invention, by striping the data blocks forming such contents across the secondary sections of the other disks, i.e., disks 65-2 through 65-4, in round-robin order. For example, it is seen from FIG. 2 that the first three data blocks D0, D4 and D8 stored in disk 65-1 are also stored in the secondary (backup) sections S of disks 65-2 through 65-4, respectively. This is also the case for data blocks D12, D16 and D20 as well as the remaining contents of the primary section of disk 65-1. Similarly, the contents of the primary section of disk 65-2 are striped across disks 65-3, 65-4 and 65-1 in that order. For example, it is also seen from FIG. 2 that data blocks D1, D5 and D9 of the primary section of disk 65-2 are striped across disks 65-3, 65-4 and 65-1, respectively, and so on. Such backup striping is shown for the contents of the primary sections of disks 65-3 and 65-4.
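The FIG. 2 layout can be reproduced with a short sketch. The code below is a hypothetical illustration (the function name and 0-based disk indices are mine, not the patent's): blocks are striped round-robin over the primary sections, and each disk's primary contents are then striped, starting with the next disk in order, over the secondary sections of the remaining disks.

```python
def doubly_striped_layout(num_blocks, N):
    """primary[d] lists the blocks striped onto disk d's primary section;
    backup[d] lists the blocks mirrored into disk d's secondary section
    (disks are numbered 0..N-1 here, so disk 0 plays the role of 65-1)."""
    primary = {d: [] for d in range(N)}
    backup = {d: [] for d in range(N)}
    # 1. Stripe blocks D0, D1, ... round-robin across the primary sections.
    for b in range(num_blocks):
        primary[b % N].append(b)
    # 2. Stripe each disk's primary contents round-robin across the secondary
    #    sections of the *other* disks, starting with the next disk in order.
    for d in range(N):
        others = [(d + k) % N for k in range(1, N)]
        for slot, b in enumerate(primary[d]):
            backup[others[slot % (N - 1)]].append(b)
    return primary, backup

primary, backup = doubly_striped_layout(num_blocks=24, N=4)
# Disk 65-1 (index 0) holds D0, D4, D8, ... and its contents reappear in the
# secondary sections of disks 65-2, 65-3 and 65-4 in turn, as in FIG. 2.
print(primary[0][:3])                              # [0, 4, 8]
print(backup[1][0], backup[2][0], backup[3][0])    # 0 4 8
```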
The unloading of the data that forms a video program is done, for example, sequentially, e.g., D0 to D3, then D4 to D7, and so on. It is understood, of course, that the start of a video program does not have to begin from the first disk, e.g., disk 65-1, but may start with any one of the disks, since the starting point may be mapped to any one of the disks. However, for the sake of simplicity, it is assumed herein that the program starts at disk 65-1 with block D0. As mentioned above, the unloading of data blocks (segments) from the disks and storage thereof is done cyclically, such that an unloaded block of data is stored in a first buffer during a first cycle and the next block is unloaded during the next cycle and stored in a second buffer while the first block from the first buffer is being transmitted to the user.
Assume at this point that the host processor has received a request for the video program from a user and the host has communicated that request to each of the storage nodes 60-1 through 60-4 with the indication that the start of the program block D0 is to be delivered at time t0. Further assume that a cycle is one second and a block of data is three (3) megabits. Upon receipt of the request, node 60-1 generates a schedule for the unloading of the blocks of the program that are stored in associated disk 65-1 and stores the schedule in internal memory (not shown). The schedule so generated starts with the unloading of block D0 at time t0 minus 1 second for delivery to the associated server 300 multiplexer at time t0. The next entry, D4 is scheduled to be unloaded at time t0 plus 3 seconds for delivery at t0 plus 4 seconds. The next entry D8 is scheduled to be unloaded at time t0 plus 7 seconds for delivery at t0 plus 8 seconds, and so on. An illustrative example of such a schedule is shown in FIG. 3.
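The per-node schedule of FIG. 3 can be generated along the following lines. This is a rough sketch under the assumptions stated in the comments (one-second cycles, node indices starting at zero); it is not the patent's own procedure.

```python
def node_schedule(node_index, N, num_blocks, t0, cycle=1.0):
    """Schedule entries (block, unload time, delivery time) for one storage
    node. The node holding blocks node_index, node_index+N, ... unloads each
    block one cycle before it is due at the multiplexer; times are in seconds."""
    entries = []
    for b in range(node_index, num_blocks, N):
        deliver_at = t0 + b * cycle
        entries.append((f"D{b}", deliver_at - cycle, deliver_at))
    return entries

# Node 60-1 (index 0) of four: D0 unloads at t0-1 for delivery at t0,
# D4 unloads at t0+3 for delivery at t0+4, D8 at t0+7 for t0+8, and so on.
for entry in node_schedule(node_index=0, N=4, num_blocks=12, t0=0):
    print(entry)
```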
Storage node 60-2 generates a similar schedule with respect to data blocks D1, D5, D9, etc. That is, a storage node is programmed so that it determines its order in delivering the sequence of data blocks forming the requested program to the associated multiplexer as a function of the identity of the node 60i having the first block of such data. Accordingly, the schedule that node 60-2 generates will indicate that data blocks D1, D5, D9, etc., are to be respectively delivered during cycles t0+1, t0+5, t0+9, and so on. Similarly, storage nodes 60-3 and 60-4 generate their own delivery schedules with respect to the data blocks of the requested program that are stored in their associated disks.
Further, a number of users may request the same program. For example, a first data stream composed sequentially of data blocks D0, D1, D2, D3, . . . D8, etc., may be unloaded in that order during respective cycles to respond to a request entered by one user while, concurrently therewith, data blocks D13, D14, D15, etc., may be unloaded in that order during the same respective cycles to respond to a request entered earlier by another user. Still further, a third user may be accessing another data stream involving blocks D29, etc. This can occur, for example, if the second user started accessing the performance at an earlier time and has already passed through the sequence of blocks D0 to D12 and the third user has already passed through the sequence of D0 to D28, and so on.
A new video performance may start if the number of video streams currently being served is less than nN. When a new video performance is initiated, the disk containing the starting address for the requested stream is identified. However, the first I/O request for that session cannot be initiated until the beginning of a cycle in which the disk services less than n sessions. Hence there might be a time delay of up to NC before an I/O request for that session is queued. Playback starts at the end of the cycle in which the first I/O has been requested. Hence, the upper bound on the playback delay is (N+1)C. FIG. 4 illustrates in flow chart form the program that is implemented in processor 25 to process a user request to view a video program or to restart an interrupted stream.
In FIG. 4, the operation starts at step 400 in response to receipt of a request for a new stream or to restart a previously stopped stream. Specifically, responsive to such a request, the program is entered at block 400 where it proceeds to block 401 to check to see if the number of data streams (i.e., the number of users) that are actively being served equals nN, where n and N are defined above. If that is the case, then the program proceeds to block 402 where it delays the processing of the user's request until system capacity becomes available. Otherwise, the program proceeds to block 403 where it increments the number of data streams (users) that are being processed and then determines the starting point (address) of the first data block of the requested video program (or the address of the restart data block), in which the address includes the address of the corresponding disk j. The program (block 404) then sets a variable i to correspond to the next batch of data streams that will be processed using disk j. The program (block 405) then checks to see if the number of data streams in batch i equals the maximum n. If so, then the program (block 406) initiates a wait of a duration equal to C(U/R), sets i to the address of the next batch and returns to block 405 to determine if the request can be added to that batch.
If the program exits block 405 via the `no` path, then it increments (block 407) the number of data streams in batch i and sets a variable m to the address of node j. The program then assigns the user's request to batch i for transmission via bus m and the appropriate storage node 50m. The program (block 409) then sets i=(i+1) mod N and sets m=(m+1) mod N. The program (block 410) then determines if m=j and exits if that is the case. Otherwise, the program returns to block 408.
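A simplified reading of the FIG. 4 flow is sketched below in Python. It compresses blocks 400 through 407 into one function and models only the batch bookkeeping; the data structures, and the omission of blocks 408 through 410 (walking the request across the remaining nodes), are assumptions made purely for illustration.

```python
def admit_request(start_batch, batch_counts, n, N):
    """Find a batch for a new (or restarted) stream.  batch_counts[i] is the
    number of streams already running in batch i; a batch may hold at most n
    streams, and the whole array at most n*N.  Returns the batch joined, or
    None when the server is at capacity (block 402: the request must wait)."""
    if sum(batch_counts) >= n * N:           # blocks 400/401/402
        return None
    i = start_batch                          # block 404: next batch using disk j
    while batch_counts[i] >= n:              # blocks 405/406: batch full, wait a
        i = (i + 1) % N                      # cycle and try the following batch
    batch_counts[i] += 1                     # block 407: join batch i
    return i

batches = [3, 4, 2, 1]                       # current streams per batch (n=4, N=4)
print(admit_request(start_batch=1, batch_counts=batches, n=4, N=4))   # -> 2
```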
It can be appreciated that a disk may fail during the playing of a video program. As mentioned above, a disk failure is readily handled, in accordance with an aspect of the invention, by striping the contents of a disk across the secondary storage areas of the other disks. One aspect of the claimed invention then is to ensure that, following a disk failure, the processing load being handled by a failed disk is assumed equally by the operable (surviving) disks. Accordingly, a scheduling algorithm described below ensures that the "video streams" that were being provided by the failed disk(s) may be serviced seamlessly from the surviving disks. This may be achieved by reserving about 1/Nth of the I/O bandwidth of each disk system so that video streams may be reassigned thereto to handle the load of the failed disk.
Hence, under normal operation, about (1-1/N)th of the available disk bandwidth may be fully used to serve video streams. Accordingly, an N-disk subsystem may support N(n - ⌈n/N⌉) streams. The quantity ⌈n/N⌉ designates the smallest integer larger than or equal to n/N. For example, if N=10 and n=8, then 88% of the disk I/O bandwidth may be exploited.
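A quick check of that figure, using the capacity formula above (a worked example added for illustration, not code from the patent):

```python
import math

def streams_supported(n, N):
    """N*(n - ceil(n/N)): each disk keeps ceil(n/N) stream slots free per
    cycle so that a failed disk's load can later be absorbed by the survivors."""
    return N * (n - math.ceil(n / N))

total = streams_supported(n=8, N=10)
print(total, total / (8 * 10))   # 70 streams, i.e. 87.5% ("about 88%") of 80
```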
It is clear from the foregoing that when a disk (or disk system) fails, the processing load (schedule) being handled thereby needs to be re-assigned to the disks that have not failed. Specifically, the schedule for a given disk is a periodic schedule that repeats every N cycles, in which a disk may provide up to n - ⌈n/N⌉ streams each cycle. A different set of video (or data) streams is launched in each of the N cycles in a periodic schedule. FIG. 5 illustrates a sample schedule where N=4 and n=4. The first cycle in each column 66i shows the schedule for disk 65i under normal operation. Assume that disk 65-1 fails at the end of cycle 67. Upon detection of the failure, the data streams that disk 65-1 provides are reassigned to the operable disks 65-2 through 65-4 on a cycle by cycle basis. The reassignment is completed once for all N cycles in a system period. That is, for each cycle, the video streams provided by the failed disk are inserted in the schedules of the surviving disks in the same cycle.
Such video streams may be reassigned in round-robin order so that the additional load is shared equally among the disks. In such an instance, each surviving disk will be assigned (n - ⌈n/N⌉)/(N-1) additional video streams and will provide up to n - ⌈n/N⌉ + (n - ⌈n/N⌉)/(N-1) (≦ n) streams in one cycle.
Data streams from the other N-1 cycles in the N-cycle period are assigned in a similar fashion. For the next N-cycle period, the reassigned schedule of the failed disk is shifted to the right by one such that a given video stream on the failed disk is reassigned to disk i in one period, and is reassigned to disk (i+1) mod (N-1) in the next period. This result occurs because the backup data for the particular video stream (data blocks) on the failed disk are striped across the backup sections of the other disks. Hence, the new schedule that results from reassigning the video streams is also periodic, with a period equal to N(N-1)C, where C is the cycle time.
The processor 25 program that implements such scheduling is illustrated in FIG. 6. Specifically, when the program (block 500) is entered it sets (block 501) variable j representing the address of the first batch of data streams that the program will process to a value of 1. Assume that the number of batches that will be processed is equal to N. Using that number the program (block 502) sets the number of data streams in batch j that are provided by disk i to nj and sets k to equal 1, in which k represents a particular data stream in the batch--initially the first data stream. The program (block 503) then sets a variable p representing the address of a respective system period (frame) to one. Initially, the program at block 504 assigns the first data stream to the first batch of data streams to be processed, in the first period following the disk failure, by the disk identified by ((i+k+p-2) mod N)+1. If that disk contains the backup data block that will be used to restart the first data stream, then that data block will be restarted in period p, initially the first period. Following the foregoing, the program increments p and checks to see (block 505) if the value of p exceeds the total number of disks (less the failed disk). If not, then the program returns to the beginning of block 504. Otherwise, the program (block 506) increments k and checks (block 507) to see if the new value of k exceeds the number of data streams that were served by the failed disk in batch j. If not, then the program returns to block 503 to continue the rescheduling process for the new value of k. Otherwise, the program (block 508) increments the value of j to identify the next batch of data streams that the program will process and then checks (block 509) to see if it is done, i.e., that the value of j is greater than the number of batches N. If that is the case, then the program exits. Otherwise, it returns to block 502 to continue the rescheduling process.
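The loop structure of FIG. 6 can be expressed compactly as below. This is a transcription of the text into Python for illustration, assuming 1-based disk, batch, stream and period indices; it is not the patent's own listing.

```python
def reschedule_failed_disk(i, N, streams_per_batch):
    """For failed disk i (1-based), stream k of batch j is handed, in period p
    after the failure, to disk ((i + k + p - 2) mod N) + 1, for p = 1..N-1.
    streams_per_batch[j-1] is the number of streams disk i served in batch j."""
    plan = {}                                            # (batch, stream, period) -> disk
    for j, n_j in enumerate(streams_per_batch, start=1): # blocks 501, 502, 508, 509
        for k in range(1, n_j + 1):                      # blocks 506, 507
            for p in range(1, N):                        # blocks 503, 504, 505
                plan[(j, k, p)] = ((i + k + p - 2) % N) + 1
    return plan

# Disk 1 of a 4-disk array fails while serving 3 streams in each of its 4 batches.
plan = reschedule_failed_disk(i=1, N=4, streams_per_batch=[3, 3, 3, 3])
print(plan[(1, 1, 1)], plan[(1, 2, 1)], plan[(1, 3, 1)])   # -> 2 3 4 (round-robin)
```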
FIG. 7 illustrates an example of such rescheduling and assumes that disk 65-1 has failed. It is seen from the FIG. that the rescheduling procedure causes the provisioning of data streams 1 through 12 provided by failed disk 65-1 to be shared among the remaining disks 65-2 through 65-4, in the manner discussed above.
As mentioned above, the aforementioned rescheduling process may cause a slight disruption in service. This occurs as a result of not being able to restart a data stream immediately because the disk it is being reassigned to may not contain in the associated backup the data block to restart (continue) the program at the point of failure. Hence, the particular request being serviced may be delayed for a number of N-cycle periods until the program is rescheduled on the disk containing the desired data block. A worst case delay would be equal to N-1 periods or N(N-1)C, where N and C respectively equal the number of disks in the system (or in a striping group) and the length of the cycle time to schedule concurrent requests from the disk. To reduce the delay, the location where the data stream is to be restarted may be used in the reassignment algorithm. Specifically, when assigning streams from a given cycle of the failed disk schedule, assign as many streams as possible to disks containing their restart location with the constraint that no more than ⌈n/N⌉ additional streams are assigned to any given disk, where n is the maximum number of streams in a batch or the maximum number of streams served concurrently by a disk in one cycle. An attempt is then made to assign those data streams that could not be assigned during the first pass to disks disposed one position to the left of the disk containing the restart location. This procedure guarantees that the data stream may be restarted in the next N-cycle period. Up to N-1 such passes may be necessary to reassign all of the data streams. The foregoing procedure thus attempts to assign the data stream to the disk nearest and to the left of the one holding its restart data block.
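One possible rendering of that restart-aware heuristic is sketched below. The data structures and the reading of "left" as the next lower-numbered disk (wrapping around) are assumptions made for illustration; only the per-disk cap of ⌈n/N⌉ extra streams and the multi-pass structure are taken from the text.

```python
import math

def assign_with_restart_locations(restart_disk, n, N, failed):
    """restart_disk[s] is the surviving disk whose backup section holds stream
    s's restart block.  Each surviving disk accepts at most ceil(n/N) extra
    streams; pass 1 tries the restart disk itself, pass 2 the disk one position
    to the left (wrapping around), and so on, for up to N-1 passes."""
    cap = math.ceil(n / N)
    load = {d: 0 for d in range(1, N + 1) if d != failed}
    assignment = {}
    for offset in range(N - 1):                   # passes 1 .. N-1
        for s, home in restart_disk.items():
            if s in assignment:
                continue
            d = ((home - 1 - offset) % N) + 1     # shift left by `offset` disks
            if d != failed and load[d] < cap:
                assignment[s] = d
                load[d] += 1
    return assignment

# Disk 1 fails; streams A, B, C restart from blocks mirrored on disks 2, 2 and 3.
print(assign_with_restart_locations({"A": 2, "B": 2, "C": 3}, n=4, N=4, failed=1))
```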
FIG. 8 illustrates the program which implements the improved rescheduling scheme in processor 25. Specifically, when entered as a result of a disk failure, the program proceeds to block 701 where it sets variable j, representing the batch of data streams that need to be rescheduled, to a value of one (i.e., the first batch, where the first batch would be, e.g., data streams 1, 2 and 3 illustrated in column 66-1 of FIG. 8). The program (block 702) then sets variable m_t to equal the number of data streams forming batch j and supplied by the failed disk i, in which the restart data blocks for the latter data streams are on the backup of disk t, and in which disk t is other than disk i. The program then sets a variable q_t to equal the capacity available on disk t to accommodate additional data streams. In an illustrative embodiment of the invention, a capacity sufficient to serve n/N additional data streams (i.e., 1/Nth of a disk's stream capacity) is reserved on each disk to handle the data streams that were to be served by a failed disk, where n is the maximum number of users that may be supplied from a disk and N is the number of disks contained in the server system. The program then sets a variable k that tracks the "stage" (or pass) that the program is using to assign data streams. In stage one, the program tries to assign the streams directly to the disks that contain the restart data blocks in their backup sections. In subsequent stages, the program tries to assign the data streams that were not assigned during the preceding stage.
The program (block 703) then starts the rescheduling process by setting t to equal a value of one and then checks (block 704) to see if the current value of t equals i (i.e., the number of the failed disk). If yes, then the program (block 705) increments t. If t is found not to be greater than the value of N (block 706) then the program returns to block 704. This time, the program will take the no leg from block 704 to block 711 where it sets the variable p to a value of one, thereby defining a first period of cycles following the failure of disk i. Note that before the disk failure, there were N batches in a repeating period. Following the failure, there will be N-1 periods each containing N batches or cycles. The N-1 periods are repeated and form a new periodic schedule.
The program (block 712) then assigns the minimum of m_t and q_t of the m_t data streams whose respective restart data blocks are stored in the backup section of disk t to batch j of the disk containing the restart data blocks in its backup section for the first period. (It is noted that the first expression in block 712 equates to disk t for p=1 and k=1.) The program then sets the restart schedule for the assigned block(s). That is, if the disk contains the restart data block, then the program schedules the data stream for restart in the current period p, which is true of the data streams that are assigned during the first stage (pass) of the rescheduling process. The program then goes on to complete the scheduling. That is, assigning a data stream to one disk for one period may not be sufficient to successfully restart the data stream. Therefore, the program increments the value of p and checks (block 713) to see if it has completed the rescheduling for all of the periods. As mentioned above, the number of periods (i.e., distinct periods in the new schedule) will equal N-1 after the failure, but the periods will be repeated.
If the program exits the no leg of block 713, then it returns to repeat the assignment for the next period and continues to do so for N-1 periods. When the program has looped through block 712 N-1 times, it then updates with respect to the next adjacent disk (i.e., the disk next in higher order with respect to the disk involved in the rescheduling during the previous pass through block 712). Thus, the program loops through block 712 N-1 times during a given stage to complete the rescheduling of the data streams provided by the failed disk for successive periods p.
After completing such loops, the program (block 714) updates the values of m_t and q_t to continue the rescheduling with the next disk, vis-a-vis those streams of the batch being processed that were not rescheduled during the current stage. The program (block 705) then increments t, and returns to block 704 if the current value of t is not greater than N. Otherwise, the program (block 707) increments k to the next stage and then checks (block 708) to determine whether one or more data streams were not rescheduled during the previous stage. If so, then the program returns to block 703. Otherwise, the program (block 709) sets j to point to the next batch of data streams and then checks (block 710) to see if all batches have been rescheduled. If so, then the program exits. Otherwise, it returns to block 702.
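Tying the FIG. 8 walkthrough together, the following loose Python sketch shows the staged reassignment; the data structures and the left-shift expression are assumptions consistent with the description (the expression in block 712 is characterized above only as reducing to disk t when p=1 and k=1), and the per-period restart bookkeeping of blocks 711 through 713 is omitted for brevity.

def staged_reassignment(i, N, restart_disk, streams_by_batch, spare_capacity):
    """Sketch of the staged rescheduling of FIG. 8 (blocks 701-714).

    i                 -- failed disk (1-based)
    restart_disk[s]   -- disk whose backup section holds stream s's restart block
    streams_by_batch  -- hypothetical mapping: batch j -> streams formerly on disk i
    spare_capacity[d] -- extra streams disk d can absorb (the reserved ~n/N slots)
    Stage k offers each still-pending stream to the disk k-1 positions to the
    "left" of its restart-location disk, skipping the failed disk.
    """
    assignment = {}
    for j, streams in streams_by_batch.items():      # blocks 701, 709, 710: each batch
        q = dict(spare_capacity)                     # block 702: remaining capacity q_t
        pending = list(streams)
        for k in range(1, N):                        # stages 1..N-1 (blocks 707, 708)
            still_pending = []
            for s in pending:
                d = ((restart_disk[s] - k) % N) + 1  # stage 1 targets the restart disk itself
                if d != i and q.get(d, 0) > 0:       # blocks 703-706, 712: assign if room remains
                    assignment[(j, s)] = d
                    q[d] -= 1                        # block 714: update m_t and q_t
                else:
                    still_pending.append(s)
            pending = still_pending
    return assignment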
The foregoing is merely illustrative of the principles of the invention. Those skilled in the art will be able to devise numerous arrangements which, although not explicitly shown or described herein, nevertheless embody those principles and are within the spirit and scope of the invention. For example, the invention may be readily implemented in a local central office as well as in a private branch exchange. It may also be implemented at the head end of a cable TV system. As a further example, although the invention has been discussed in terms of a video program and the data blocks forming the video program, it is clear that it may also be used in conjunction with other types of data.

Claims (2)

The invention claimed is:
1. A method of storing data in a plurality of N independent disks comprising the steps of
dividing the storage area of each of said disks into primary and secondary sections,
partitioning said data into a plurality of successive blocks of data and storing said successive blocks of data in sequence in respective ones of the primary sections of said disks in a predetermined order,
storing in sequence the blocks of data that have been stored in one of said primary sections of one of said disks in respective secondary sections of the other ones of said disks,
unloading in sequence a block of data from the primary section of each of the N disks during respective Nth cycles and supplying the unloaded block of data to a particular one of a plurality of data streams being served by the N disks, where N>1, and
responsive to one of the N disks failing, re-assigning the data streams scheduled to be supplied by the failed one of said disks for a respective one of the cycles to respective ones of the remaining disks containing a restart data block for a respective one of such data streams, in which said re-assigning is constrained so that no more than n/N additional streams are assigned to any one of the remaining disks, where n is the maximum number of data streams that can be served concurrently by any one of the disks in one cycle.
2. The method of claim 1 further comprising the step of, responsive to not being able to reassign a particular one of the data streams to the disk containing the restart data block for a current cycle, then reassigning that data stream to the preceding disk in the sequence.
US08/504,096 1995-07-19 1995-07-19 Method for employing doubly striped mirroring of data and reassigning data streams scheduled to be supplied by failed disk to respective ones of remaining disks Expired - Lifetime US5678061A (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US08/504,096 US5678061A (en) 1995-07-19 1995-07-19 Method for employing doubly striped mirroring of data and reassigning data streams scheduled to be supplied by failed disk to respective ones of remaining disks
EP96305095A EP0755009A2 (en) 1995-07-19 1996-07-10 Data server employing doubly striped mirroring of data across multiple storage disks
JP8185827A JPH09114605A (en) 1995-07-19 1996-07-16 Data storage method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US08/504,096 US5678061A (en) 1995-07-19 1995-07-19 Method for employing doubly striped mirroring of data and reassigning data streams scheduled to be supplied by failed disk to respective ones of remaining disks

Publications (1)

Publication Number Publication Date
US5678061A true US5678061A (en) 1997-10-14

Family

ID=24004830

Family Applications (1)

Application Number Title Priority Date Filing Date
US08/504,096 Expired - Lifetime US5678061A (en) 1995-07-19 1995-07-19 Method for employing doubly striped mirroring of data and reassigning data streams scheduled to be supplied by failed disk to respective ones of remaining disks

Country Status (3)

Country Link
US (1) US5678061A (en)
EP (1) EP0755009A2 (en)
JP (1) JPH09114605A (en)

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5870761A (en) * 1996-12-19 1999-02-09 Oracle Corporation Parallel queue propagation
US5926649A (en) * 1996-10-23 1999-07-20 Industrial Technology Research Institute Media server for storage and retrieval of voluminous multimedia data
US6317803B1 (en) * 1996-03-29 2001-11-13 Intel Corporation High-throughput interconnect having pipelined and non-pipelined bus transaction modes
US6332177B1 (en) 1998-10-19 2001-12-18 Lsi Logic Corporation N-way raid 1 on M drives block mapping
US20020035667A1 (en) * 1999-04-05 2002-03-21 Theodore E. Bruning Apparatus and method for providing very large virtual storage volumes using redundant arrays of disks
US20020066050A1 (en) * 2000-11-28 2002-05-30 Lerman Jesse S. Method for regenerating and streaming content from a video server using raid 5 data striping
US6425052B1 (en) * 1999-10-28 2002-07-23 Sun Microsystems, Inc. Load balancing configuration for storage arrays employing mirroring and striping
US6505281B1 (en) 1998-06-02 2003-01-07 Raymond C. Sherry Hard disk drives employing high speed distribution bus
US20030115282A1 (en) * 2001-11-28 2003-06-19 Rose Steven W. Interactive broadband server system
US20040073862A1 (en) * 1999-03-31 2004-04-15 Armstrong James B. Method of performing content integrity analysis of a data stream
US20040122888A1 (en) * 2002-12-18 2004-06-24 Carmichael Ronnie Jerome Massively parallel computer network-utilizing multipoint parallel server (MPAS)
US6772302B1 (en) 1999-11-12 2004-08-03 International Business Machines Corporation Virtual copy method for data spanning storage boundaries
US20040236986A1 (en) * 2003-05-19 2004-11-25 Hitachi Global Storage Technologies System and method for sparing in RAID-1 system
US20050114350A1 (en) * 2001-11-28 2005-05-26 Interactive Content Engines, Llc. Virtual file system
US20050114538A1 (en) * 2001-11-28 2005-05-26 Interactive Content Engines, Llc Synchronized data transfer system
US20060107101A1 (en) * 2004-11-02 2006-05-18 Nec Corporation Disk array subsystem, method for distributed arrangement, and signal-bearing medium embodying a program of a disk array subsystem
US7778960B1 (en) 2005-10-20 2010-08-17 American Megatrends, Inc. Background movement of data between nodes in a storage cluster
US7809892B1 (en) 2006-04-03 2010-10-05 American Megatrends Inc. Asynchronous data replication
US7908448B1 (en) 2007-01-30 2011-03-15 American Megatrends, Inc. Maintaining data consistency in mirrored cluster storage systems with write-back cache
US8010485B1 (en) 2005-10-20 2011-08-30 American Megatrends, Inc. Background movement of data between nodes in a storage cluster
US8010829B1 (en) * 2005-10-20 2011-08-30 American Megatrends, Inc. Distributed hot-spare storage in a storage cluster
USRE42860E1 (en) 1995-09-18 2011-10-18 Velez-Mccaskey Ricardo E Universal storage management system
US8046548B1 (en) 2007-01-30 2011-10-25 American Megatrends, Inc. Maintaining data consistency in mirrored cluster storage systems using bitmap write-intent logging
US20120011541A1 (en) * 2010-07-12 2012-01-12 Cox Communications, Inc. Systems and Methods for Delivering Additional Content Utilizing a Virtual Channel
US8108580B1 (en) 2007-04-17 2012-01-31 American Megatrends, Inc. Low latency synchronous replication using an N-way router
US8498967B1 (en) 2007-01-30 2013-07-30 American Megatrends, Inc. Two-node high availability cluster storage solution using an intelligent initiator to avoid split brain syndrome
US8639878B1 (en) 2005-10-20 2014-01-28 American Megatrends, Inc. Providing redundancy in a storage system
US20160179642A1 (en) * 2014-12-19 2016-06-23 Futurewei Technologies, Inc. Replicated database distribution for workload balancing after cluster reconfiguration

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002149352A (en) * 2000-11-16 2002-05-24 Sony Corp Data storage control method, data storage controller and recording medium storing data storage control program
US20020156971A1 (en) * 2001-04-19 2002-10-24 International Business Machines Corporation Method, apparatus, and program for providing hybrid disk mirroring and striping
US7130229B2 (en) * 2002-11-08 2006-10-31 Intel Corporation Interleaved mirrored memory systems
US7017017B2 (en) * 2002-11-08 2006-03-21 Intel Corporation Memory controllers with interleaved mirrored memory modes

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4688168A (en) * 1984-08-23 1987-08-18 Picker International Inc. High speed data transfer method and apparatus
US4796098A (en) * 1981-12-04 1989-01-03 Discovision Associates Banded and interleaved video disc format with duplicate information stored at different disc locations
US4849978A (en) * 1987-07-02 1989-07-18 International Business Machines Corporation Memory unit backup using checksum
US5088081A (en) * 1990-03-28 1992-02-11 Prime Computer, Inc. Method and apparatus for improved disk access
US5235601A (en) * 1990-12-21 1993-08-10 Array Technology Corporation On-line restoration of redundancy information in a redundant array system
US5303244A (en) * 1991-03-01 1994-04-12 Teradata Fault tolerant disk drive matrix
US5404454A (en) * 1991-02-28 1995-04-04 Dell Usa, L.P. Method for interleaving computer disk data input-out transfers with permuted buffer addressing
US5463758A (en) * 1993-08-02 1995-10-31 International Business Machines Corporation System and method for reducing seek time for read operations in mirrored DASD files
US5471640A (en) * 1992-07-06 1995-11-28 Hewlett-Packard Programmable disk array controller having n counters for n disk drives for stripping data where each counter addresses specific memory location by a count n
US5487160A (en) * 1992-12-04 1996-01-23 At&T Global Information Solutions Company Concurrent image backup for disk storage system
US5497478A (en) * 1991-03-20 1996-03-05 Hewlett-Packard Company Memory access system and method modifying a memory interleaving scheme so that data can be read in any sequence without inserting wait cycles
US5499337A (en) * 1991-09-27 1996-03-12 Emc Corporation Storage device array architecture with solid-state redundancy unit
US5519844A (en) * 1990-11-09 1996-05-21 Emc Corporation Logical partitioning of a redundant array storage system
US5519435A (en) * 1994-09-01 1996-05-21 Micropolis Corporation Multi-user, on-demand video storage and retrieval system including video signature computation for preventing excessive instantaneous server data rate

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4796098A (en) * 1981-12-04 1989-01-03 Discovision Associates Banded and interleaved video disc format with duplicate information stored at different disc locations
US4688168A (en) * 1984-08-23 1987-08-18 Picker International Inc. High speed data transfer method and apparatus
US4849978A (en) * 1987-07-02 1989-07-18 International Business Machines Corporation Memory unit backup using checksum
US5088081A (en) * 1990-03-28 1992-02-11 Prime Computer, Inc. Method and apparatus for improved disk access
US5519844A (en) * 1990-11-09 1996-05-21 Emc Corporation Logical partitioning of a redundant array storage system
US5235601A (en) * 1990-12-21 1993-08-10 Array Technology Corporation On-line restoration of redundancy information in a redundant array system
US5404454A (en) * 1991-02-28 1995-04-04 Dell Usa, L.P. Method for interleaving computer disk data input-out transfers with permuted buffer addressing
US5303244A (en) * 1991-03-01 1994-04-12 Teradata Fault tolerant disk drive matrix
US5497478A (en) * 1991-03-20 1996-03-05 Hewlett-Packard Company Memory access system and method modifying a memory interleaving scheme so that data can be read in any sequence without inserting wait cycles
US5499337A (en) * 1991-09-27 1996-03-12 Emc Corporation Storage device array architecture with solid-state redundancy unit
US5471640A (en) * 1992-07-06 1995-11-28 Hewlett-Packard Programmable disk array controller having n counters for n disk drives for stripping data where each counter addresses specific memory location by a count n
US5487160A (en) * 1992-12-04 1996-01-23 At&T Global Information Solutions Company Concurrent image backup for disk storage system
US5463758A (en) * 1993-08-02 1995-10-31 International Business Machines Corporation System and method for reducing seek time for read operations in mirrored DASD files
US5519435A (en) * 1994-09-01 1996-05-21 Micropolis Corporation Multi-user, on-demand video storage and retrieval system including video signature computation for preventing excessive instantaneous server data rate

Non-Patent Citations (34)

* Cited by examiner, † Cited by third party
Title
"A Performance Study of Three High Availabilty Data Replication Strategies" by Hsiao et al, IEEE 1991, pp. 18-28.
"A Synchronous Disk Interleaving", by Kim et al, 1991 IEEE, pp. 801-810.
"An Approach to Cost--Effective Terabyte Memory Systems", by Katz et al, IEEE 1992, pp. 395-400.
"Analytic Modeling and Comparisons of striping Strategies for Replicated Disk Arrays", by Merchant et al, IEEE 1995, pp. 419-433.
"Chained Declustering", by Golubchik et al, IEEE 1992, pp. 88-95.
"Chained Declustering", by Hsiao et al, IEEE 1990, pp. 456-465.
"Communications--Intensive Workstations", by Katseff et al, IEEE 1992, pp. 24-28.
"Diamonds are Forever", by Radding, Alan, Midrance Systems, Feb. 9, 1993, v6, n3, p. 23(2).
"Disk Subsystem Load Balancing", by Ganger et al, IEEE 1993, pp. 40-49.
"Introduction to Redundant Arrays of Inexpensive Disks (RAID)", by Patterson et al, IEEE 1989, pp. 112-117.
"LAN Lirchin", by Harbison, Robert, Feb. 1994, LAN Magazine, v9, n2, p. 93(14).
"Performance Analysis of a Dual Striping Strategy for Replicated Disk Arrays", by Merchant et al, 1993, pp. 148-157.
"Replicated Data Management in the Gamma Database Machine", by Hsiao et al, 1990 IEEE, pp. 79-84.
"Systems Reliability and Availability Prediction", by Daya Perera, IEEE 1993, pp. 33-40.
"Tuning of Striping Units in Disk--Array--Based File System", by Weikum et al, IEEE 1992, pp. 80-87.
Proceedings of 26th Hawaii Intl Conf. on System Sciences, v. 1, 1993 IEEE "Disk Subsystem Load Balancing: Disk Striping vs. Conventional Data Placement", G. R. Ganger et al, pp. 40-49.
Third Intl Workshop, Nov. 1992, "The Design and Implementation of a Continuous Media Storage Server", P. Lougher et al, pp. 69-80.

Cited By (41)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
USRE42860E1 (en) 1995-09-18 2011-10-18 Velez-Mccaskey Ricardo E Universal storage management system
US6317803B1 (en) * 1996-03-29 2001-11-13 Intel Corporation High-throughput interconnect having pipelined and non-pipelined bus transaction modes
US5926649A (en) * 1996-10-23 1999-07-20 Industrial Technology Research Institute Media server for storage and retrieval of voluminous multimedia data
US5870761A (en) * 1996-12-19 1999-02-09 Oracle Corporation Parallel queue propagation
US6505281B1 (en) 1998-06-02 2003-01-07 Raymond C. Sherry Hard disk drives employing high speed distribution bus
US6332177B1 (en) 1998-10-19 2001-12-18 Lsi Logic Corporation N-way raid 1 on M drives block mapping
US20040073862A1 (en) * 1999-03-31 2004-04-15 Armstrong James B. Method of performing content integrity analysis of a data stream
US20020035667A1 (en) * 1999-04-05 2002-03-21 Theodore E. Bruning Apparatus and method for providing very large virtual storage volumes using redundant arrays of disks
US7000069B2 (en) * 1999-04-05 2006-02-14 Hewlett-Packard Development Company, L.P. Apparatus and method for providing very large virtual storage volumes using redundant arrays of disks
US6425052B1 (en) * 1999-10-28 2002-07-23 Sun Microsystems, Inc. Load balancing configuration for storage arrays employing mirroring and striping
US6772302B1 (en) 1999-11-12 2004-08-03 International Business Machines Corporation Virtual copy method for data spanning storage boundaries
US20020066050A1 (en) * 2000-11-28 2002-05-30 Lerman Jesse S. Method for regenerating and streaming content from a video server using raid 5 data striping
US6996742B2 (en) * 2000-11-28 2006-02-07 Sedna Patent Services, Llc Method for regenerating and streaming content from a video server using RAID 5 data striping
US20030115282A1 (en) * 2001-11-28 2003-06-19 Rose Steven W. Interactive broadband server system
US7437472B2 (en) 2001-11-28 2008-10-14 Interactive Content Engines, Llc. Interactive broadband server system
US20050114350A1 (en) * 2001-11-28 2005-05-26 Interactive Content Engines, Llc. Virtual file system
US20050114538A1 (en) * 2001-11-28 2005-05-26 Interactive Content Engines, Llc Synchronized data transfer system
US7788396B2 (en) 2001-11-28 2010-08-31 Interactive Content Engines, Llc Synchronized data transfer system
US7644136B2 (en) 2001-11-28 2010-01-05 Interactive Content Engines, Llc. Virtual file system
US20040122888A1 (en) * 2002-12-18 2004-06-24 Carmichael Ronnie Jerome Massively parallel computer network-utilizing multipoint parallel server (MPAS)
US20090094650A1 (en) * 2002-12-18 2009-04-09 Carmichael Ronnie G Massively parallel computer network utilizing multipoint parallel server (mpas) with enhanced personal storage device (e-psd)
US7552192B2 (en) * 2002-12-18 2009-06-23 Ronnie Gerome Carmichael Massively parallel computer network-utilizing MPACT and multipoint parallel server (MPAS) technologies
US20040236986A1 (en) * 2003-05-19 2004-11-25 Hitachi Global Storage Technologies System and method for sparing in RAID-1 system
US7062673B2 (en) * 2003-05-19 2006-06-13 Hitachi Global Technologies System and method for sparing in RAID-1 system
US20060107101A1 (en) * 2004-11-02 2006-05-18 Nec Corporation Disk array subsystem, method for distributed arrangement, and signal-bearing medium embodying a program of a disk array subsystem
US7519853B2 (en) * 2004-11-02 2009-04-14 Nec Corporation Disk array subsystem, method for distributed arrangement, and signal-bearing medium embodying a program of a disk array subsystem
US8479037B1 (en) 2005-10-20 2013-07-02 American Megatrends, Inc. Distributed hot-spare storage in a storage cluster
US8639878B1 (en) 2005-10-20 2014-01-28 American Megatrends, Inc. Providing redundancy in a storage system
US8010485B1 (en) 2005-10-20 2011-08-30 American Megatrends, Inc. Background movement of data between nodes in a storage cluster
US8010829B1 (en) * 2005-10-20 2011-08-30 American Megatrends, Inc. Distributed hot-spare storage in a storage cluster
US7778960B1 (en) 2005-10-20 2010-08-17 American Megatrends, Inc. Background movement of data between nodes in a storage cluster
US7809892B1 (en) 2006-04-03 2010-10-05 American Megatrends Inc. Asynchronous data replication
US8595455B2 (en) 2007-01-30 2013-11-26 American Megatrends, Inc. Maintaining data consistency in mirrored cluster storage systems using bitmap write-intent logging
US8244999B1 (en) 2007-01-30 2012-08-14 American Megatrends, Inc. Maintaining data consistency in mirrored cluster storage systems with write-back cache
US8498967B1 (en) 2007-01-30 2013-07-30 American Megatrends, Inc. Two-node high availability cluster storage solution using an intelligent initiator to avoid split brain syndrome
US8046548B1 (en) 2007-01-30 2011-10-25 American Megatrends, Inc. Maintaining data consistency in mirrored cluster storage systems using bitmap write-intent logging
US7908448B1 (en) 2007-01-30 2011-03-15 American Megatrends, Inc. Maintaining data consistency in mirrored cluster storage systems with write-back cache
US8108580B1 (en) 2007-04-17 2012-01-31 American Megatrends, Inc. Low latency synchronous replication using an N-way router
US20120011541A1 (en) * 2010-07-12 2012-01-12 Cox Communications, Inc. Systems and Methods for Delivering Additional Content Utilizing a Virtual Channel
US20160179642A1 (en) * 2014-12-19 2016-06-23 Futurewei Technologies, Inc. Replicated database distribution for workload balancing after cluster reconfiguration
US10102086B2 (en) * 2014-12-19 2018-10-16 Futurewei Technologies, Inc. Replicated database distribution for workload balancing after cluster reconfiguration

Also Published As

Publication number Publication date
JPH09114605A (en) 1997-05-02
EP0755009A2 (en) 1997-01-22

Similar Documents

Publication Publication Date Title
US5678061A (en) Method for employing doubly striped mirroring of data and reassigning data streams scheduled to be supplied by failed disk to respective ones of remaining disks
CA2142795C (en) Look-ahead scheduling to support video-on-demand applications
EP0660605B1 (en) Video storage and delivery apparatus and method
US5808607A (en) Multi-node media server that provides video to a plurality of terminals from a single buffer when video requests are close in time
US6012080A (en) Method and apparatus for providing enhanced pay per view in a video server
KR101126859B1 (en) Load balancing and admission scheduling in pull-based parallel video servers
US5926649A (en) Media server for storage and retrieval of voluminous multimedia data
US5608448A (en) Hybrid architecture for video on demand server
EP0673159B1 (en) Scheduling policies with grouping for providing VCR control functions in a video server
US5381413A (en) Data throttling system for a communications network
US5774668A (en) System for on-line service in which gateway computer uses service map which includes loading condition of servers broadcasted by application servers for load balancing
EP0740247A2 (en) Data stream server system
EP0716370A2 (en) A disk access method for delivering multimedia and video information on demand over wide area networks
EP0753966A2 (en) Disk striping method for use in video server environments
JP2006180537A (en) Method and system for scheduling transfer of data sequence
WO1999014687A2 (en) Continuous media file server and method for scheduling network resources
JP2003228533A (en) Method, video server and video server management controller for transmitting content to a plurality of clients
JP4151991B2 (en) System for retrieving data in a video server
Reddy Scheduling and data distribution in a multiprocessor video server
WO1995026095A2 (en) Video system
WO1995026095A9 (en) Video system
US5699503A (en) Method and system for providing fault tolerance to a continuous media server system
WO2002037784A2 (en) Parallel network data transmission of segments of data stream
Sheu et al. Virtual batching: A new scheduling technique for video-on-demand servers
Baqai et al. Network resource management for enterprise wide multimedia services

Legal Events

Date Code Title Description
AS Assignment

Owner name: AT&T IPM CORP., FLORIDA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MOURAD, ANTOINE N.;REEL/FRAME:007605/0426

Effective date: 19950719

AS Assignment

Owner name: LUCENT TECHNOLOGIES INC., NEW JERSEY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:AT&T CORP.;REEL/FRAME:008684/0163

Effective date: 19960329

STCF Information on status: patent grant

Free format text: PATENTED CASE

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 4

AS Assignment

Owner name: THE CHASE MANHATTAN BANK, AS COLLATERAL AGENT, TEX

Free format text: CONDITIONAL ASSIGNMENT OF AND SECURITY INTEREST IN PATENT RIGHTS;ASSIGNOR:LUCENT TECHNOLOGIES INC. (DE CORPORATION);REEL/FRAME:011722/0048

Effective date: 20010222

FPAY Fee payment

Year of fee payment: 8

AS Assignment

Owner name: LUCENT TECHNOLOGIES INC., NEW JERSEY

Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENT RIGHTS;ASSIGNOR:JPMORGAN CHASE BANK, N.A. (FORMERLY KNOWN AS THE CHASE MANHATTAN BANK), AS ADMINISTRATIVE AGENT;REEL/FRAME:018584/0446

Effective date: 20061130

FPAY Fee payment

Year of fee payment: 12

AS Assignment

Owner name: ALCATEL-LUCENT USA INC., NEW JERSEY

Free format text: MERGER;ASSIGNOR:LUCENT TECHNOLOGIES INC.;REEL/FRAME:032646/0773

Effective date: 20081101