CA2365694A1 - Slow responses in redundant arrays of inexpensive disks - Google Patents
- Publication number: CA2365694A1
- Authority: CA (Canada)
- Prior art keywords: disks, level, data, blocks, processor
- Status: Abandoned
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/08—Error detection or correction by redundancy in data representation, e.g. by using checking codes
- G06F11/10—Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's
- G06F11/1076—Parity data used in redundant arrays of independent storages, e.g. in RAID systems
- G06F11/1088—Reconstruction on already foreseen single or plurality of spare disks
Abstract
A redundant array includes a plurality of disks, a bus coupling the disks, a receiving device, and a device to reconstruct a block stored in one of the disks. The device reconstructs the block with associated data and parity blocks from other disks. The device transmits the reconstructed block to the receiving device (110) in response to that disk responding slowly (106). A method includes requesting a first disk to transmit a first block (104), reconstructing, when necessary, the first block from associated data stored in other disks of a RAID configuration, and transmitting the reconstructed block directly to a receiving device. The transmitting is in response to the first disk not transmitting the block in a predetermined time.
Description
SLOW RESPONSES IN REDUNDANT ARRAYS OF INEXPENSIVE DISKS
Background of the Invention This invention relates generally to the transmission and storage of data and, more particularly, to managing response times in redundant arrays of inexpensive disks.
Digital video and television systems need high bandwidth data transmission and low latencies. Redundant arrays of inexpensive disks (RAID) support high bandwidth data transfers and very low latencies. RAID configurations employ redundancy and/or parity blocks to mask the failure of a disk.
RAID configurations divide a received data stream into a sequence of blocks and write consecutive blocks of the sequence to different disks in the array. To retrieve data, the RAID configuration reads the blocks from the disks of the array and reconstitutes the original data stream from the read blocks. To increase reception and transmission speeds, the RAID configuration may write to and read from the various disks of the array in parallel.
Individual disks of a RAID configuration will occasionally stall or respond slowly to an access request due to disk surface defects and bad block revectoring.
During a slow response, the entire RAID configuration may wait while one disk transmits requested data. Thus, a single slowly responding disk can cause a long latency for a read operation from the RAID configuration.
For digital video and cable systems, one slowly responding disk can cause a disaster, because data needs to arrive at a video receiver at a substantially constant rate to keep the receiver's input buffer full. Continued long transmission latencies can deplete the input buffer. A
receiver's input buffer is typically only large enough to store about 1 to 2 seconds of video data, i.e. several megabytes of data. If a slow RAID configuration causes a transmission gap of longer than about 1 to 2 seconds, the receiver's input buffer may completely empty. If the receiver's input buffer empties, a viewer may perceive a noticeable pause in the video being viewed. Defect-free transmission of video requires that such pauses be absent.
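The buffer sizing above can be checked with simple arithmetic; a minimal sketch in Python, assuming an illustrative 8 Mbit/s stream rate (a typical standard-definition video rate, not a figure from the text):

```python
# Rough sizing of the receiver's input buffer described above.
# The 8 Mbit/s stream rate is an illustrative assumption.
bitrate_bits_per_s = 8_000_000
buffer_seconds = 2
buffer_bytes = bitrate_bits_per_s * buffer_seconds // 8
print(buffer_bytes)  # 2_000_000 bytes, i.e. about 2 MB of video data
```

At this assumed rate, a 2-second buffer holds about 2 MB, consistent with the "several megabytes" figure above.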
RAID configurations are economically attractive, because they provide low latencies and high bandwidth data storage using inexpensive disks. But, contemporary inexpensive disks often have bad regions, which occasionally lead to bad block revectoring and slow disk responses. A
bad region can cause a read, which normally lasts about 10 milliseconds (ms), to take 1,000 ms or more. Thus, slow responses can cause unpredictable read latencies. These latencies make RAID configurations less acceptable in video transmitters, because transmission latencies can lead to the above-discussed problems in video reception.
The present invention is directed to overcoming, or at least reducing the effects of, one or more of the problems set forth above.
Summary of the Invention One object of the invention is to reduce the number of transmission gaps caused by slowly responding disks of a RAID configuration.
Another object of the invention is to provide a RAID
configuration with predictable read latencies.
In a first aspect, the invention provides a RAID
configuration. The RAID configuration includes a plurality of disks, a bus coupled to the disks to transmit data blocks, and a device to reconstruct a block stored in any one of the disks. The device reconstructs the block with associated data and parity blocks received from other disks.
The device transmits the reconstructed block to a receiving device in response to one of the disks responding slowly.
In a second aspect, the invention provides a method of transmitting data from a RAID configuration. The method includes requesting that a first disk of the RAID
configuration transmit a first block, reconstructing the first block from associated blocks stored in other disks of the RAID configuration, and transmitting the reconstructed first block directly to a receiving device. The step of transmitting is performed if the first disk does not complete transmission of the first block within a predetermined time.
In a third aspect, the invention provides a RAID
configuration, which stores parity and data blocks in stripes across the disks. The RAID configuration includes a plurality of disks and a processor connected to the disks.
The processor is adapted to write a plurality of groups of associated data and parity blocks to the disks. The processor writes the data and parity blocks of each group to different ones of the disks and writes at least two blocks from different groups to one stripe.
In a fourth aspect, the invention provides a RAID
configuration to transmit data blocks to a receiving device.
The RAID configuration includes a plurality of disks, a processor to control reads from and writes to the disks, and a device to reconstruct blocks. The disks store blocks and transmit stored blocks to the receiving device. The processor determines if disks are slowly responding. The device reconstructs a block stored in a slowly responding one of the disks from associated blocks stored in the remaining disks if the processor determines that the one of the disks is slowly responding.
Brief Description of the Drawings Other objects, features, and advantages of the invention will be apparent from the following description taken together with the drawings, in which:
FIG. 1 shows one embodiment of a redundant array of inexpensive disks (RAID) configuration having a predictable read latency;
FIG. 2A shows a fragment of a data stream sent to the RAID configuration of FIG. 1 for storage therein;
FIG. 2B is a schematic illustration of how the RAID
configuration of FIG. 1 stores the data fragment of FIG. 2A;
FIG. 3 illustrates an embodiment of a reconstructor of data blocks for use in the RAID configuration of FIG. 1;
FIG. 4 is a flow chart illustrating a method of transmitting data from the RAID configuration of FIG. 1;
FIG. 5 illustrates a video transmission and reception system using the RAID configuration of FIG. 1;
FIG. 6 shows a two-level RAID configuration employing three of the RAID configurations shown in FIG. 1.
Description of the Preferred Embodiments U.S. Patent Application Serial No. 08/547,565, filed October 24, 1995, discloses several types of RAID
configurations and is incorporated by reference herein in its entirety.
FIG. 1 shows a RAID configuration 10 having three storage disks 12, 13, 14. The RAID configuration 10 has a bus 16 for data writes to and reads of the three disks 12-14. Generally, embodiments may have N disks. A processor 20 controls writes to and reads of the disks 12-14. The writes and reads are for data and/or parity blocks. The processor 20 includes a reconstructor 22 to reconstruct data blocks of slowly responding disks. The processor 20 transmits data blocks over an interface or line 17, for example, a bus or a cable, to a receiving device 19.
In some embodiments the bus 16 has separate data and control lines (not shown) for each of the disks 12-14.
Then, reads and writes may be parallel accesses to all or to a subset of the disks 12-14. In other embodiments a single set of data and control lines connects to each disk 12-14 of the RAID configuration 10. Then, the processor 20 performs serial writes to and reads from the separate disks 12-14 over the shared data line. In this case, the bus 16 may be a single SCSI bus or another type of shared or dedicated interconnect.
A disk is slowly responding if the disk does not complete a requested read within a predetermined time, but still sends signals, e.g., to the processor 20, indicating that the read is progressing. The predetermined time is longer than a normal time for completing the requested read.
A slowly responding disk may store the requested data in a readable form and may eventually complete the requested read, i.e. the disk is responding and not stalled.
FIG. 2A shows a fragment 40 of a data stream to store in the RAID configuration 10 of FIG. 1. In this illustrative embodiment, the processor 20 divides the fragment 40 into an ordered sequence of blocks D(0), D(1), ... D(11) and produces a parity block P(i, i+1) (i = 0, 2, 4, ...) to associate with consecutive pairs 42, 44 of the data blocks D(i), D(i+1). The parity block P(i, i+1) encodes at least one parity bit for each pair of equivalent bits of the associated pair 42, 44 of data blocks D(i), D(i+1). The processor 20 may write each associated pair 42, 44 of data blocks D(i), D(i+1) and the parity block P(i, i+1) to the three disks 12-14 in parallel or serially as explained with respect to FIG. 1.
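The parity construction described above amounts to a bitwise XOR of the paired data blocks; a minimal sketch, using toy 4-byte blocks as an illustrative assumption:

```python
# Sketch of parity-block generation as described above: one parity bit
# per pair of corresponding bits in D(i) and D(i+1). The 4-byte block
# size is a toy assumption for illustration.
def make_parity(d0: bytes, d1: bytes) -> bytes:
    """XOR corresponding bytes of two equal-length data blocks."""
    return bytes(a ^ b for a, b in zip(d0, d1))

d0 = bytes([0b10101010, 0x00, 0xFF, 0x0F])
d1 = bytes([0b01010101, 0xFF, 0xFF, 0xF0])
p = make_parity(d0, d1)
print(p.hex())  # ffff00ff
```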
Henceforth, a stripe refers to a correspondingly positioned set of storage locations in each disk 12-14 of the RAID configuration 10. Each stripe includes the same number of storage locations from each disk 12-14.
Nevertheless, an array of disks may allow several definitions of stripes. For example, an array with disks A
and B may assign storage locations 101 to 200 of both disks A and B to a first stripe and assign storage locations 201 to 300 of both disks A and B to a second stripe. In the same array, a second definition may assign locations 101 to 200 of disk A and locations 201 to 300 of disk B to the first stripe and assign locations 201 to 300 of disk A and locations 101 to 200 of disk B to a second stripe.
FIG. 2B schematically illustrates how the processor 20 writes data and parity blocks in the disks 12-14. The storage locations of the three disks 12-14 are arranged in stripes S1-S6. Each stripe S1-S6 stores a group of three associated blocks, which includes a consecutive pair of data blocks D(i), D(i+1) and the parity block P(i, i+1) constructed from the pair. The portion of each disk 12-14 in a particular stripe S1-S6 stores either one of the data blocks D(i), D(i+1) or the associated parity block P(i, i+1). The processor 20 writes the parity blocks P(i, i+1) associated with sequential pairs to different ones of the disks 12-14 by cyclically shifting the storage location of P(i, i+1) in each consecutive stripe. This is referred to as rotating the parity blocks P(i, i+1) across the disks 12-14. Rotating the storage location of the parity block more uniformly distributes the data blocks D(j) among the disks 12-14, thereby spreading the access burdens more uniformly across the different disks 12-14 during data reads and writes.
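The rotation of the parity location can be sketched as a cyclic shift of the parity disk index per stripe; the shift direction below is an assumption, since the text only requires that the location cycle across the disks:

```python
# Sketch of the rotated-parity (RAID-5 style) placement described above:
# the disk index holding P(i, i+1) shifts cyclically from stripe to
# stripe. The rotation direction is an assumption for illustration.
def parity_disk(stripe_index: int, n_disks: int = 3) -> int:
    """Disk index that stores the parity block of the given stripe."""
    return (n_disks - 1 - stripe_index) % n_disks

layout = [parity_disk(s) for s in range(6)]  # stripes S1..S6
print(layout)  # [2, 1, 0, 2, 1, 0]
```

Over any three consecutive stripes, every disk holds parity exactly once, which is what spreads the read and write burden evenly.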
The configuration shown in FIGS. 1 and 2B is often referred to as a RAID-5 configuration.
FIG. 3 illustrates an embodiment 60 of the reconstructor 22 of FIG. 1, which includes a memory device 62 and a hardware processor 64. Both the memory device 62 and the processor 64 couple to the bus 16. The memory device 62 receives data and/or parity blocks from the disks 12-14 via the bus 16. The memory device 62 stores the associated data and parity blocks for reconstructing the associated block of a slowly responding disk 12-14.
The processor 64 performs an exclusive OR (XOR) of the associated parity and data blocks to reconstruct the data block of the slowly responding disk 12-14. To perform the XOR, the processor 64 reads the associated blocks from the memory device 62. Then, the processor 64 XORs corresponding bits of the read associated parity and data blocks in a bit-by-bit manner. Finally, the processor 64 writes the result of the XOR back to the memory device 62. The reconstructor 60 can make a reconstructed block for any one of the disks 12-14.
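The reconstructor's operation can be sketched as follows; the key property is that XOR-ing the surviving associated blocks yields the missing block, because each block equals the XOR of the other blocks in its group:

```python
# Sketch of block reconstruction by XOR, per the reconstructor 60:
# the missing block equals the XOR of the surviving associated blocks.
# The 2-byte blocks are a toy assumption for illustration.
def xor_blocks(blocks):
    """Bitwise XOR of equal-length byte blocks."""
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

d0 = b"\x12\x34"
d1 = b"\xab\xcd"
parity = xor_blocks([d0, d1])
# Suppose the disk holding d1 responds slowly: rebuild d1 from d0 and parity.
rebuilt = xor_blocks([d0, parity])
print(rebuilt == d1)  # True
```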
FIG. 4 is a flow chart illustrating one method 100 of transmitting data from the RAID configuration 10 shown in FIGS. 1 and 2B. At step 102, the processor 20 selects the associated data blocks of the stripe S1 for transmission. At step 104, the processor 20 requests that the disks 13-14 transmit the data blocks of the selected stripe S1. At step 106, the processor 20 determines whether either of the disks 13-14 is slowly responding. At step 107, the processor 20 transmits the requested data blocks if neither disk 13-14 is slowly responding. At step 108, the reconstructor 22 reconstructs the data block of a slowly responding disk 13-14 from the associated data block and parity block (from disk 12). The reconstructor 22 receives the associated data and parity blocks from storage locations of the same stripe S1 of the other disks 12-14, which are not slowly responding.
At step 110, the reconstructor 22 transmits the reconstructed data block to the data receiver 19. At step 112, the processor 20 selects the next stripe S2 of associated data blocks to transmit in response to completing transmission of the data blocks of the stripe S1 at step 106 or 110.
Referring to FIGS. 1 and 2B, the RAID configuration 10 uses a timer 34 to determine whether any of the disks 12-14 are slowly responding. The processor 20 resets the timer 34 at the start of each cycle for transmitting the data blocks from one of the stripes S1-S6. The timer 34 counts a predetermined time and signals the processor 20 when the time has elapsed. In response to the signal from the timer 34, the processor 20 determines whether each disk 12-14 has completed transmission of the data block stored therein, i.e. whether any disk 12-14 is slowly responding.
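The timer-based check can be sketched as a read with a deadline; the thread-and-queue machinery below is an illustrative software stand-in for the timer 34, not the described hardware:

```python
# Minimal sketch of the slow-response check: request a block with a
# deadline; if the deadline expires, report the disk as slowly
# responding so the caller can reconstruct the block instead.
import queue
import threading
import time

def read_with_timeout(read_fn, timeout_s):
    """Return (block, was_slow); block is None if the read missed the deadline."""
    q = queue.Queue()
    threading.Thread(target=lambda: q.put(read_fn()), daemon=True).start()
    try:
        return q.get(timeout=timeout_s), False
    except queue.Empty:
        return None, True  # caller falls back to reconstruction

# A 1000 ms read against a 400 ms deadline, per the figures in the text.
block, slow = read_with_timeout(lambda: time.sleep(1.0) or b"data", 0.4)
print(slow)  # True
```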
The processor 20 may determine that one of the disks 12-14 is slowly responding even though the disk 12-14 continues to send "handshaking" signals to the processor 20 indicating normal operation.
Referring to FIGS. 1-3, the processor 20 controls the reconstruction and the transmission of reconstructed data blocks. First, the processor 20 orders the remaining disks 12-14 to transmit the associated blocks to the reconstructor 22, e.g., to the memory device 62, if a slowly responding disk 12-14 is detected. In FIG. 2B, the associated data and parity blocks are stored in the same stripe S1-S6 as the untransmitted data block from the slowly responding disk 12-14. Thus, the processor 20 orders reads of the associated stripe S1-S6 to obtain the associated blocks. Next, the processor 20 signals the reconstructor 22 to reconstruct the data block from a slowly responding disk, e.g., by a signal sent to the processor 64 of FIG. 3. Then, the processor 20 reads the reconstructed block from the reconstructor 22, e.g., the memory device 62, and transmits the reconstructed block to the interface or line 17.
Referring to FIGS. 1-3, the processor 20 does not interrupt the recovery of a slowly responding disk 12-14 by sending the disk a second request to transmit data.
Instead, the processor 20 orders the reconstructor 22 to reconstruct the missing data from the associated blocks in the normally responding disks 12-14.
FIG. 5 illustrates a video transmission system 114, which uses the RAID configuration 10 of FIG. 1. A receiver 115 receives data blocks transmitted from the interface or line 17 at an input terminal 116. Transmission between the RAID configuration 10 and the receiver 115 may be by radio wave, light, and/or cable transmission. The input terminal 116 couples to an input data buffer 117, e.g., a first-in-first-out buffer. The input data buffer 117 stores two to several times the quantity of data included in one data block shown in FIG. 2B. Data stored in the input data buffer 117 provides for continuous video data processing in the event of a short transmission interruption.
Referring to FIGS. 1 and 5, the video transmission system 114 can lower the occurrence of viewing pauses by transmitting a reconstructed data block in response to detecting a slow disk 12-14. In one embodiment of the system 114, the RAID configuration 10 needs about 100 ms to transmit or reconstruct a data block, and the receiver's input data buffer 117 stores about 2000 ms of video data.
The timer 34 counts down a predetermined period of about 400 ms to determine whether one of the disks 12-14 is slowly responding. For this choice of the predetermined period, even several sequential slow disk responses will not empty the receiver's input data buffer 117 to produce a noticeable pause in a video being viewed.
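Under a simplified cost model (an assumption: each slow read costs the full 400 ms timeout plus 100 ms for reconstruction and transmission, with no buffer refill in between), the budget above tolerates several back-to-back slow responses:

```python
# Arithmetic check of the timing budget described above. The per-block
# cost model (timeout + reconstruction, no refill) is a simplifying
# assumption, not a claim from the text.
buffer_ms = 2000       # receiver's input data buffer 117
timeout_ms = 400       # predetermined period counted by the timer 34
reconstruct_ms = 100   # time to reconstruct and transmit a block
per_slow_block_ms = timeout_ms + reconstruct_ms
print(buffer_ms // per_slow_block_ms)  # 4 consecutive slow reads tolerated
```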
Various embodiments may employ different numbers of disks than the RAID configuration 10 of FIG. 1. Some embodiments use more disks to increase the access bandwidth and/or to lower read latencies. On the other hand, a RAID-1 configuration employs only two disks to store duplicate data blocks. In a RAID-1 configuration, a processor controls the transmission of stored data blocks. The processor commands the second disk to transmit a duplicate of a data block in response to the first disk not completing transmission of the data block within a predetermined time.
In the various embodiments, a read lasting longer than a predetermined time provokes a reconstruction of data from associated data from other disks and a transmission of the reconstructed data. This increases the predictability of read latencies for the RAID configurations described herein.
Some embodiments of RAID configurations store associated data and parity blocks differently than the pattern shown in FIG. 2B. These RAID configurations still transmit reconstructed data in response to detecting a slowly responding disk. To enable reconstruction of data of a slowly responding disk, each disk stores, at most, one block from any group formed of associated data and parity blocks.
FIG. 6 shows a RAID configuration 140 with both first and second level RAID-5 structures. At the first level, a first level processor 141 receives consecutive groups of pairs of data blocks and generates a parity block to associate with each pair of data blocks. The first level processor 141 sends one block from each associated group of three blocks to each of the interfaces 142, 142', 142'' of the second level RAID configurations 10, 10', 10''. Each second level processor 20, 20', 20'' subsequently breaks each block into two mini-blocks and generates a parity mini-block to associate with the two mini-blocks. Each second level RAID configuration 10, 10', 10'' stores the mini-blocks as illustrated in FIGS. 2A and 2B. The first level processor 141 retrieves blocks from the second level RAID
Background of the Invention This invention relates generally to the transmission and storage of data and, more particularly, to managing response times in redundant arrays of inexpensive disks.
Digital video and television systems need high bandwidth data transmission and low latencies. Redundant arrays of inexpensive disks (RAID) support high bandwidth data transfers and very low latencies. RAID configurations employ redundancy and/or parity blocks to mask the failure of a disk.
RAID configurations divide a received data stream into a sequence of blocks and write consecutive blocks of the sequence to different disks in the array. To retrieve data, the RAID configuration reads the blocks from the disks of the array and reconstitutes the original data stream from the read blocks. To increase reception and transmission speeds, the RAID configuration may write to and read from the various disks of the array in parallel.
Individual disks of a RAID configuration will occasionally stall or respond slowly to an access request due to disk surface defects and bad block revectoring.
During a slow response, the entire RAID configuration may wait while one disk transmits requested data. Thus, a single slowly responding disk can cause a long latency for a read operation from the RAID configuration.
For digital video and cable systems, one slowly responding disk can cause a disaster, because data needs to arrive at a video receiver at a substantially constant rate to keep the receiver's input buffer full. Continued long transmission latencies can deplete the input buffer. A
receiver's input buffer is typically only large enough to store about 1 to 2 seconds of video data, i.e. several megabytes of data. If a slow RAID configuration causes a transmission gap of longer than about 1 to 2 seconds, the receiver's input buffer may completely empty. If the receiver's input buffer empties, a viewer may perceive a noticeable pause in the video being viewed. Defect-free transmission of video requires that such pauses be absent.
RAID configurations are economically attractive, because they provide low latencies and high bandwidth data storage using inexpensive disks. But, contemporary inexpensive disks often have bad regions, which occasionally lead to bad block revectoring and slow disk responses. A
bad region can cause a read, which normally lasts about 10 milliseconds (ms), to take 1,000 ms or more. Thus, slow responses can cause unpredictable read latencies. These latencies make RAID configurations less acceptable in video transmitters, because transmission latencies can lead to the above-discussed problems in video reception.
The present invention is directed to overcoming, or at least reducing the effects of, one or more of the problems set forth above.
Summary of the Invention One object of the invention is to reduce the number of transmission gaps caused by slowly responding disks of a RAID configuration.
Another object of the invention is to provide a RAID
configuration with predictable read latencies.
In a first aspect, the invention provides a RAID
configuration. The RAID configuration includes a plurality of disks, a bus coupled to the disks to transmit data blocks, and a device to reconstruct a block stored in any one of the disks. The device reconstructs the block with associated data and parity blocks received from other disks.
The device transmits the reconstructed block to a receiving device in response to one of the disks responding slowly.
In a second aspect, the invention provides a method of transmitting data from a RAID configuration. The method includes requesting that a first disk of the RAID
Configuration transmit a first block, reconstructing the first block from associated blocks stored in other disks of the RAID configuration, and transmitting the reconstructed first block directly to a receiving device. The step of transmitting is performed if the first disk does not complete transmission of the first data block within a predetermined time.
In a third aspect, the invention provides a RAID
configuration, which stores parity and data blocks in stripes across the disks. The RAID configuration includes a plurality of disks and a processor connected to the disks.
The processor is adapted to write a plurality of groups of associated data and parity blocks to the disks. The processor writes the data and parity blocks of each group to different ones of the disks and writes at least two blocks from different groups to one stripe.
In a fourth aspect, the invention provides a RAID
Configuration to transmit data blocks to a receiving device.
The RAID configuration includes a plurality of disks, a processor to control reads from and writes to the disks, and a device to reconstruct blocks. The disks store blocks and transmit stored blocks to the receiving device. The Processor determines if disks are slowly responding. The device reconstructs a block stored in a slowly responding one of the disks from associated blocks stored in the remaining disks if the processor determines that the one of the disks is slowly responding.
Brief Description of the Drawinas Other objects, features, and advantages of the invention will be apparent from the following description taken together with the drawings, in which:
FIG. 1 shows one embodiment of a redundant array of inexpensive disks (RAID) configuration having a predictable read latency;
FIG. 2A shows a fragment of a data stream sent to the RAID configuration of FIG. 1 for storage therein;
FIG. 2B is a schematic illustration of how the RAID
configuration of FIG. 1 stores the data fragment of FIG. 2A;
FIG. 3 illustrates an embodiment of a reconstructor of data blocks for use in the RAID configuration of FIG. 1;
FIG. 4 is a flow chart illustrating a method of transmitting data from the RAID configuration of FIG. 1;
FIG. 5 illustrates a video transmission and reception system using the RAID configuration of FIG. 1;
FIG. 6 shows a two-level RAID configuration employing three of the RAID configurations shown in FIG. 1.
Description of the Preferred Embodiments U.S. Patent Application Serial No. 08/547,565, filed October 24, 1995, discloses several types of RAID
configurations and is incorporated by reference herein in its entirety.
FIG. 1 shows a RAID configuration 10 having three storage disks 12, 13, 14. The RAID configuration 10 has a bus 16 for data writes to and reads of the three disks 12-14. Generally, embodiments may have N disks. A processor controls writes to and reads of the disks 12-14. The writes and reads are for data and/or parity blocks. The 20 Processor 20 includes a reconstructor 22 to reconstruct data blocks of slowly responding disks. The processor 20 transmits data blocks over an interface or line 17, for example, a bus or a cable, to a receiving device 19.
In some embodiments the bus 16 has separate data and control lines (not shown) for each of the disks 12-14.
Then, reads and writes may be parallel accesses to all or to a subset of the disks 12-14. In other embodiments a single set of data and control lines connects to each disk 12-14 of the RAID configuration 10. Then, the processor 20 performs serial writes to and reads from the separate disks 12-14 over the shared data line. In this case, the bus 16 may be a single SCSI bus or another type of shared or dedicated interconnect.
A disk is slowly responding if the disk does not Complete a requested read within a predetermined time, but still sends signals, e.g., to the processor 20, indicating that the read is progressing. The predetermined time is longer than a normal time for completing the requested read.
A slowly responding disk may store the requested data in a readable form and may eventually complete the requested read, i.e. the disk is responding and not stalled.
FIG. 2A shows a fragment 40 of a data stream to store in the RAID configuration device 10 of FIG. 1. In this illustrative embodiment, the processor 20 divides the fragment 40 into an ordered sequence of blocks D(0), D(1), ... D(11) and produces a parity block P(i, i+1) (i - 0, 2, 4~ ...) to associate with consecutive pairs 42, 44 of the data blocks D ( i ) , D ( i+1 ) . The parity block P ( i , i+1 ) encodes at least one parity bit for each pair of equivalent bits of the associated pair 42, 44 of data blocks D(i), D(i+1). The processor 20 may write each associated pair 42, 44 of data blocks D(i), D(i+1) and parity block P(i, i+1) to the three disks 12-14 in parallel or serially as explained with respect to FIG. 1.
Henceforth, a stripe refers to a correspondingly positioned set of storage locations in each disk 12-14 of the RAID configuration 10. Each stripe includes the same number of storage locations from each disk 12-14.
Nevertheless, an array of disks may allow several definitions of stripes. For example, an array with disks A
and B may assign storage locations 101 to 200 of both disks A and B to a first stripe and assign storage locations 201 to 300 of both disks A and B to a second stripe. In the same array, a second definition may assign locations 101 to 200 of disk A and locations 201 to 300 of disk B to the first stripe and assign locations 201 to 300 of disk A and locations 101 to 200 of disk B to a second stripe.
FIG. 2B schematically illustrates how the processor 20 writes data and parity blocks in the disks 12-14. The storage locations of the three disks 12-14 are arranged in stripes S1-S6. Each stripe S1-S6 stores a group of three associated blocks, which includes a consecutive pair of data blocks D ( i ) , D ( i+1 ) and the parity block P ( i , i+1 ) constructed from the pair. The portion of each disk 12-14 in a particular stripe S1-S6 stores either one of the data blocks D(i), D(i+1) or the associated parity block P(i, i+1). The processor 20 writes the parity blocks P(i, i+1) associated with sequential pairs to different ones of the disks 12-14 by cyclically shifting the storage location of P(i, i+1) in each consecutive stripe. This is referred to as rotating the parity blocks P(i, i+1) across the disks 12-14. Rotating the storage location of the parity block more uniformly distributes the data blocks D(j) among the disks 12-14 thereby spreading the access burdens more uniformly across the different disks 12-14 during data reads and writes.
The configuration shown in FIGS. 1 and 2B is often referred to as a RAID-5 configuration.
FIG. 3 illustrates an embodiment 60 of the reconstructor 22 of FIG. 1, which includes a memory device 62 and a hardware processor 64. Both the memory device 62 and the processor 64 couple to the bus 16. The memory device 62 receives data and/or parity blocks from the disks 12-14 via the bus 16. The memory device 62 stores the associated data and parity blocks for reconstructing the associated block of a slowly responding disk 12-14.
The processor 64 performs an exclusive OR (XOR) of the associated parity and data blocks to reconstruct the data block of the stalled disk 12-14. To perform the XOR, the processor 64 reads the associated blocks from the memory device 62. Then, the processor 64 XOR's corresponding bits of the read associated parity and data blocks in a bit-by-bit manner. Finally, the processor 64 writes the results of the XOR back to the memory device 62. The reconstructor 60 can make a reconstructed block for any one of the disks 12-14.
FIG. 4 is a flow chart illustrating one method 100 of transmitting data from the RAID configuration 10 shown in FIGS. 1 and 2B. At step 102, the processor 20 selects to transmit the associated data blocks of the stripe S1. At step 104, the processor 20 requests the disks 13-14 to transmit the data blocks of the selected stripe S1. At step 106, the processor 20 determines whether any of the disks 13-14 is slowly responding. At step 107, the processor 20 transmits the requested data blocks if neither disk 13-14 is slowly responding. At step 108, the reconstructor 22 reconstructs the data block of a slowly responding disk 13-14 from the associated data block and parity block (from disk 12). The reconstructor 22 receives the associated data and parity blocks from storage locations of the same stripe S1 of the other disks 12-14, which are not slowly responding.
At step 110, the reconstructor 22 transmits the reconstructed data block to the data receiver 19. At step 112, the processor 20 selects the next stripe S2 of associated data blocks to transmit in response to completing transmission of the data blocks of the stripe S1 at step 106 or 110.
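The request-then-reconstruct cycle of method 100 can be sketched as a read loop with a deadline. This is a hedged simulation under assumed interfaces (`FakeDisk`, the `reconstruct` callback): real disk I/O is replaced by futures, and a disk that misses the predetermined time has its block rebuilt instead of re-requested.

```python
import concurrent.futures
import time

PREDETERMINED_TIME = 0.1  # seconds; the patent's video example uses 400 ms

class FakeDisk:
    """Stand-in for a disk: returns a stripe's block after a fixed delay."""
    def __init__(self, name, data, delay=0.0):
        self.name, self.data, self.delay = name, data, delay

    def read(self, stripe):
        time.sleep(self.delay)
        return self.data[stripe]

def read_stripe(disks, stripe, reconstruct):
    """Request all disks; rebuild the block of any disk missing the deadline."""
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = {d: pool.submit(d.read, stripe) for d in disks}
        blocks = {}
        for d, f in futures.items():
            try:
                blocks[d.name] = f.result(timeout=PREDETERMINED_TIME)
            except concurrent.futures.TimeoutError:
                # Slowly responding disk: do not re-request; reconstruct
                # the block from the other disks' data and parity.
                blocks[d.name] = reconstruct(d, stripe)
        return blocks
```

Note the design choice the text emphasizes: on timeout the loop never issues a second request to the slow disk; it falls back to reconstruction so the slow disk can recover undisturbed.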
Referring to FIGS. 1 and 2B, the RAID configuration 10 uses a timer 34 to determine whether any of the disks 12-14 are slowly responding. The processor 20 resets the timer 34 at the start of each cycle for transmitting the data blocks from one of the stripes S1-S6. The timer 34 counts a predetermined time and signals the processor 20 when the time has elapsed. In response to the signal from the timer 34, the processor 20 determines whether each disk 12-14 has completed transmission of the data block stored therein, i.e., whether any disk 12-14 is slowly responding.
The processor 20 may determine that one of the disks 12-14 is slowly responding even though the disk 12-14 continues to send "handshaking" signals to the processor 20 indicating normal operation.
Referring to FIGS. 1-3, the processor 20 controls the reconstruction and the transmission of reconstructed data blocks. First, the processor 20 orders the remaining disks 12-14 to transmit the associated blocks to the reconstructor 22, e.g., to the memory device 62, if a slowly responding disk 12-14 is detected. In FIG. 2B, the associated data and parity blocks are stored in the same stripe S1-S6 as the untransmitted data block from the slowly responding disk 12-14. Thus, the processor 20 orders reads of the associated stripe S1-S6 to obtain the associated blocks. Next, the processor 20 signals the reconstructor 22 to reconstruct the data block from a slowly responding disk, e.g., by a signal sent to the processor 64 of FIG. 3. Then, the processor 20 reads the reconstructed block from the reconstructor 22, e.g., the memory device 62, and transmits the reconstructed block to the interface or line 17.
Referring to FIGS. 1-3, the processor 20 does not interrupt the recovery of a slowly responding disk 12-14 by sending the disk a second request to transmit data.
Instead the processor 20 orders the reconstructor 22 to reconstruct the missing data from the associated data blocks in the normally responding disks 12-14.
FIG. 5 illustrates a video transmission system 114, which uses the RAID configuration 10 of FIG. 1. A receiver 115 receives data blocks transmitted from the interface or line 17 at an input terminal 116. Transmission between the RAID configuration 10 and receiver 115 may be by radio wave, light, and/or cable transmission. The input terminal 116 couples to an input data buffer 117, e.g., a first-in-first-out buffer. The input data buffer 117 stores two to several times the quantity of data included in one data block shown in FIG. 2B. Data stored in the input data buffer 117 provides for continuous video data processing in the event of a short transmission interruption.
Referring to FIGS. 1 and 5, the video transmission system 114 can lower the occurrence of viewing pauses by transmitting a reconstructed data block in response to detecting a slow disk 12-14. In one embodiment of the system 114, the RAID configuration 10 needs about 100 ms to transmit or reconstruct a data block, and the receiver's input data buffer 117 stores about 2000 ms of video data.
The timer 34 counts down a predetermined period of about 400 ms to determine whether one of the disks 12-14 is slowly responding. For this choice of the predetermined period, even several sequential slow disk responses will not empty the receiver's input data buffer 117 to produce a noticeable pause in a video being viewed.
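A back-of-the-envelope check of the figures above shows why this works. Each slow response stalls delivery for at most the 400 ms timeout before reconstruction begins, so the 2000 ms input buffer can absorb several such stalls in a row. The constant names below are illustrative, not from the patent:

```python
BLOCK_MS = 100    # time to transmit or reconstruct one data block
TIMEOUT_MS = 400  # predetermined slow-response period counted by the timer
BUFFER_MS = 2000  # video held in the receiver's input data buffer

# Worst-case extra delay per slow response is the timeout itself, so the
# buffer absorbs roughly this many sequential slow responses:
slow_responses_tolerated = BUFFER_MS // TIMEOUT_MS
print(slow_responses_tolerated)  # 5
```

Since normal reads (about 100 ms per block) refill the buffer between stalls, even this estimate is conservative.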
Various embodiments may employ different numbers of disks than the RAID configuration 10 of FIG. 1. Some embodiments use more disks to increase the access bandwidth and/or to lower read latencies. On the other hand, a RAID-1 configuration employs only two disks to store duplicate data blocks. In a RAID-1 configuration, a processor controls the transmission of stored data blocks. The processor commands the second disk to transmit a duplicate of a data block in response to the first disk not completing transmission of the data block within a predetermined time.
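The RAID-1 fallback just described is even simpler than parity reconstruction: on timeout, the duplicate on the second disk is transmitted instead. A minimal sketch under a hypothetical interface, where each disk callable reports a simulated transmission time rather than performing real I/O:

```python
TIMEOUT_MS = 400  # predetermined time before the duplicate is requested

def read_raid1(primary, mirror, block_id):
    """Return the block from the first disk, or its duplicate on timeout."""
    elapsed_ms, data = primary(block_id)
    if elapsed_ms <= TIMEOUT_MS:
        return data
    # First disk did not complete in time: command the second disk instead.
    return mirror(block_id)[1]

healthy = lambda b: (90, f"block-{b}")    # completes well within the deadline
stalled = lambda b: (1500, f"block-{b}")  # exceeds the predetermined time
```

No XOR is needed here because the mirror already holds a byte-identical copy of every block.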
In the various embodiments, a read lasting longer than a predetermined time provokes a reconstruction of data from associated data from other disks and a transmission of the reconstructed data. This increases the predictability of read latencies for the RAID configurations described herein.
Some embodiments of RAID configurations store associated data and parity blocks differently than the pattern shown in FIG. 2B. These RAID configurations still transmit reconstructed data in response to detecting a slowly responding disk. To enable reconstruction of data of a slowly responding disk, each disk stores, at most, one block from any group formed of associated data and parity blocks.
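The placement constraint stated above (each disk stores at most one block from any associated group) is easy to check mechanically. A sketch under an assumed representation, where `layout` maps each group of associated data and parity blocks to the disks that store them:

```python
def reconstructable(layout: dict[str, list[int]]) -> bool:
    """True if no disk holds more than one block from any associated group,
    so the loss of a single disk's block is always recoverable."""
    return all(len(disks) == len(set(disks)) for disks in layout.values())

ok = {"G1": [0, 1, 2], "G2": [1, 2, 0]}  # one block per disk per group
bad = {"G1": [0, 0, 2]}                  # disk 0 holds two blocks of G1
assert reconstructable(ok) and not reconstructable(bad)
```

In the `bad` layout, a slow response from disk 0 would withhold two blocks of the same group, leaving too little information for the XOR to recover either one.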
FIG. 6 shows a RAID configuration 140 with both first and second level RAID-5 structures. At the first level, a first level processor 141 receives consecutive groups of pairs of data blocks and generates a parity block to associate with each pair of data blocks. The first level processor 141 sends one block from each associated group of three blocks to each of the interfaces 142, 142', 142'' of the second level RAID configurations 10, 10', 10''. Each second level processor 20, 20', 20'' subsequently breaks each block into two mini-blocks and generates a parity mini-block to associate with the two mini-blocks. Each second level RAID configuration 10, 10', 10'' stores the mini-blocks as illustrated in FIGS. 2A and 2B. The first level processor 141 retrieves blocks from the second level RAID configurations 10, 10', 10'' and transmits the retrieved blocks over an interface or line 147 to a receiving device 149.
Still referring to FIG. 6, the two-level RAID configuration handles slowly responding storage structures by reconstructing and transmitting reconstructed blocks at the first level. A first level reconstructor 144 reconstructs and transmits to the receiving device 149 the reconstructed block if any second level RAID configuration 10, 10', 10'' responds slowly. A slow response is signaled by the first level processor 141 if the timer 143 counts a predetermined time before all second level RAID configurations 10, 10', 10'' complete transmission of requested data blocks. The timer 143 starts counting the predetermined time in response to the processor 141 sending a new read request to the second level RAID configurations 10, 10', 10''. Thus, the two-level RAID configuration 140 handles slow responses in the second-level RAID configurations 10, 10', 10'' at the first level. Even if the second level RAID configurations 10, 10', 10'' do not have timers, like the timers 34 of FIG. 1, the first level processor 141, timer 143, and reconstructor 144 can handle latencies due to slow disk responses. These first level devices build predictability into the read latencies of the RAID configuration 140.
In some embodiments, the processor 141 is programmed to simulate the first level RAID-5 structure of FIG. 6, i.e., to simulate the timer 143 and the reconstructor 144. The processor 141 may also control the processors 20, 20', 20'' if they are programmable.
Additions, deletions, and other modifications of the described embodiments will be apparent to those practiced in this field and are within the scope of the following claims.
Claims (16)
1. A redundant array of inexpensive disks comprising:
a plurality of disks;
a bus coupled to said disks and adapted to transmit data blocks from said disks to a receiving device; and
a reconstructor adapted to reconstruct a data block of one of said disks with associated data and parity blocks from other of the disks and to transmit the reconstructed block to the receiving device in response to determining that the one of the disks is slowly responding.
2. The redundant array of claim 1 wherein a slowly responding disk does not complete transmission of a requested data block within a predetermined time.
3. The redundant array of claim 2, further comprising:
a processor adapted to send a signal to the reconstructor in response to the one of the disks slowly responding; and
wherein the reconstructor is adapted to transmit the reconstructed block to the receiving device in response to receiving the signal.
4. The redundant array of claim 3, wherein the processor is adapted to request the other of the disks to send associated data and parity blocks to the reconstructor in response to the one of the disks responding slowly.
5. The redundant data array of claim 3, wherein the processor couples to the plurality of disks and is adapted to control writes of associated data and parity blocks to said disks.
6. A method of transmitting data from a redundant array of inexpensive disks (RAID configuration), comprising:
requesting a first disk of the RAID configuration to transmit a first data block stored therein to a receiving device;
reconstructing the first data block from associated data stored in other disks of the RAID configuration; and
transmitting the reconstructed first data block directly to the receiving device; and
wherein the reconstructing and the transmitting are performed in response to the first disk providing data in the first data block, but not completing a transmission of the first data block within a predetermined time.
7. The method of claim 6, wherein the associated data comprises at least one data block and a parity block.
8. The method of claim 6, wherein the reconstructing and the transmitting are performed in response to determining that the first disk is slowly responding.
9. The method of claim 6, further comprising:
providing a time signal at a predetermined time after the requesting; and
wherein the transmitting is in response to an occurrence of the time signal before the first disk completes transmission of the first data block.
10. The method of claim 7, wherein the reconstructing comprises calculating a bit-by-bit exclusive-OR for corresponding bits of data and parity blocks associated with the first data block.
11. A redundant array of inexpensive disks (RAID) configuration to transmit data blocks to a receiving device, comprising:
a plurality of disks adapted to store the blocks and to transmit the stored blocks to the receiving device;
a processor to control reads from and writes to the plurality of disks and to determine if one of the disks is slowly responding; and
a device adapted to reconstruct a block stored in a slowly responding one of the disks from associated blocks stored in others of the disks in response to the processor determining that the one of the disks is slowly responding.
12. The RAID of claim 11, wherein the processor is configured to determine that the one of the disks is slowly responding if the one disk does not finish transmitting a requested data block within a predetermined time.
13. The RAID configuration of claim 12, further comprising:
a timer coupled to the processor and adapted to count the predetermined time in response to the processor sending a request to a disk to transmit a data block.
14. A two-level redundant array of inexpensive disks (RAID), comprising:
a first level processor; and
a plurality of second level redundant arrays of inexpensive disks, the first level processor adapted to write first level blocks to and read first level blocks from the second level arrays; and
each second level array comprising:
a plurality of disks adapted to store second level blocks and to retrieve stored blocks; and
a first level device to reconstruct a particular first level block from associated first level blocks and to transmit the reconstructed first level block to a receiving device in response to a second level array responding slowly.
15. The RAID configuration of claim 14, wherein the first level processor is adapted to signal the first level device that one of the second level arrays is responding slowly if the one of the second level arrays does not complete a transmission of a requested first level block in a predetermined time.
16. The RAID configuration of claim 14, each second level array further comprising:
a second level processor to request reads of and writes to the disks of the associated second level array.
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US09/260,262 | 1999-03-01 | ||
US09/260,262 US6321345B1 (en) | 1999-03-01 | 1999-03-01 | Slow response in redundant arrays of inexpensive disks |
PCT/US2000/005272 WO2000052802A1 (en) | 1999-03-01 | 2000-02-29 | Slow responses in redundant arrays of inexpensive disks |
Publications (1)
Publication Number | Publication Date |
---|---|
CA2365694A1 true CA2365694A1 (en) | 2000-09-08 |
Family
ID=22988463
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CA002365694A Abandoned CA2365694A1 (en) | 1999-03-01 | 2000-02-29 | Slow responses in redundant arrays of inexpensive disks |
Country Status (11)
Country | Link |
---|---|
US (2) | US6321345B1 (en) |
EP (1) | EP1166418B1 (en) |
JP (1) | JP2003503766A (en) |
CN (1) | CN1391716A (en) |
AT (1) | ATE304237T1 (en) |
AU (1) | AU3713000A (en) |
CA (1) | CA2365694A1 (en) |
DE (1) | DE60022488D1 (en) |
HK (1) | HK1049409A1 (en) |
IL (1) | IL145214A0 (en) |
WO (1) | WO2000052802A1 (en) |
Also Published As
Publication number | Publication date |
---|---|
EP1166418B1 (en) | 2005-09-07 |
US6321345B1 (en) | 2001-11-20 |
JP2003503766A (en) | 2003-01-28 |
ATE304237T1 (en) | 2005-09-15 |
IL145214A0 (en) | 2002-06-30 |
US20020032882A1 (en) | 2002-03-14 |
AU3713000A (en) | 2000-09-21 |
HK1049409A1 (en) | 2003-05-09 |
CN1391716A (en) | 2003-01-15 |
EP1166418A1 (en) | 2002-01-02 |
DE60022488D1 (en) | 2005-10-13 |
WO2000052802A1 (en) | 2000-09-08 |
EP1166418A4 (en) | 2002-11-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US6321345B1 (en) | Slow response in redundant arrays of inexpensive disks | |
US6950966B2 (en) | Data transmission from raid services | |
US5745671A (en) | Data storage system with localized XOR function | |
US5504858A (en) | Method and apparatus for preserving data integrity in a multiple disk raid organized storage system | |
US6996742B2 (en) | Method for regenerating and streaming content from a video server using RAID 5 data striping | |
US5960216A (en) | Method and apparatus for interfacing two remotely disposed devices coupled via a transmission medium | |
US5959860A (en) | Method and apparatus for operating an array of storage devices | |
US8095763B2 (en) | Method for reducing latency in a raid memory system while maintaining data integrity | |
US5623595A (en) | Method and apparatus for transparent, real time reconstruction of corrupted data in a redundant array data storage system | |
US6279050B1 (en) | Data transfer apparatus having upper, lower, middle state machines, with middle state machine arbitrating among lower state machine side requesters including selective assembly/disassembly requests | |
JP4314651B2 (en) | Disk array device and data recording / reproducing method | |
US6317805B1 (en) | Data transfer interface having protocol conversion device and upper, lower, middle machines: with middle machine arbitrating among lower machine side requesters including selective assembly/disassembly requests | |
GB2302428A (en) | Multi-media storage system | |
US20030088626A1 (en) | Messaging mechanism for inter processor communication | |
JP2001518665A (en) | Reliable array of distributed computing nodes | |
JP2006259894A (en) | Storage control device and method | |
JP2009545062A (en) | File server for RAID (Redundant Array of Independent Disks) system | |
JPH11504746A (en) | Multiple disk drive array with multiple parity groups | |
US5774641A (en) | Computer storage drive array with command initiation at respective drives | |
US7779169B2 (en) | System and method for mirroring data | |
US20040205269A1 (en) | Method and apparatus for synchronizing data from asynchronous disk drive data transfers | |
CA2409922A1 (en) | Controller fail-over without device bring-up | |
JP3375245B2 (en) | Device for fault-tolerant multimedia program distribution | |
JPH10293658A (en) | Disk array subsystem | |
JPH09138735A (en) | Consecutive data server, consecutive data sending method and disk array device |
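The technique at the center of this patent family, per the abstract, is rebuilding a block held by a slowly responding disk from the associated data and parity blocks on the other disks. As a minimal illustration only (this code is not from the patent), a missing stripe block in a RAID-4/5-style layout can be recovered by XOR-ing the surviving data blocks with the parity block:

```python
# Illustrative sketch of XOR parity reconstruction (RAID-4/5 style).
# All block contents and function names here are hypothetical, chosen
# only to demonstrate the principle described in the abstract.

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

def reconstruct(surviving_blocks, parity_block):
    """Rebuild the block of a slow or failed disk from the others:
    the missing block equals the XOR of everything that survives."""
    return xor_blocks(surviving_blocks + [parity_block])

# A stripe of three data blocks plus their parity block:
d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
parity = xor_blocks([d0, d1, d2])

# If the disk holding d1 does not respond within the allotted time,
# d1 can be rebuilt and sent to the requester without waiting:
assert reconstruct([d0, d2], parity) == d1
```

Because XOR is its own inverse, the same routine serves both parity generation and reconstruction; the patent's contribution is triggering it on a slow response rather than only on an outright disk failure.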
Legal Events
Date | Code | Title | Description |
---|---|---|---|
EEER | Examination request | ||
FZDE | Discontinued | ||