US20060136778A1 - Process for generating and reconstructing variable number of parity for byte streams independent of host block size - Google Patents

Process for generating and reconstructing variable number of parity for byte streams independent of host block size Download PDF

Info

Publication number
US20060136778A1
US20060136778A1 (application US11/338,088)
Authority
US
United States
Prior art keywords
data
stripes
redundancy
parity
storage
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/338,088
Inventor
Philippe Graverand
Jacques Debiez
Gerald O'Nions
Charles Milligan
James Hughes
Christophe Carret
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to US11/338,088
Publication of US20060136778A1
Legal status: Abandoned

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/08 Error detection or correction by redundancy in data representation, e.g. by using checking codes
    • G06F 11/10 Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's
    • G06F 11/1076 Parity data used in redundant arrays of independent storages, e.g. in RAID systems
    • G06F 2211/00 Indexing scheme relating to details of data-processing equipment not covered by groups G06F3/00 - G06F13/00
    • G06F 2211/10 Indexing scheme relating to G06F11/10
    • G06F 2211/1002 Indexing scheme relating to G06F11/1076
    • G06F 2211/1011 Clustered RAID, i.e. clustered or de-clustered RAID where data and parity are spread over more disks than blocks in a parity group
    • G06F 2211/1026 Different size groups, i.e. non uniform size of groups in RAID systems with parity
    • G06F 2211/103 Hybrid, i.e. RAID systems with parity comprising a mix of RAID types
    • G06F 2211/1092 Single disk raid, i.e. RAID with parity on a single disk


Abstract

A method, system and program for generating parity in a data storage system are provided. The invention comprises organizing an incoming data block into a specified number of data stripes and cascading the data stripes into a parity creation mechanism. The parity creation mechanism creates a specified number of parity stripes based on the data stripes, wherein the number of parity stripes is independent of the size of the data block. The parity creation mechanism can operate offline to reconstruct lost data stripes and parity stripes without using critical system resources, wherein the number of devices required for stripe reconstruction is less than the combined number of data stripes and parity stripes.

Description

    BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • The present invention relates in general to RAID and RAIT systems, and more specifically to improving performance in virtualized RAID and RAIT systems.
  • 2. Background of the Invention
  • Many systems use concepts such as Redundant Arrays of Independent Tapes (or Disks) (RAIT/RAID), comprising multiple physical resources allocated to enhance system performance or reliability. RAIT(D) comprises a set of two or more ordinary hard tapes/disks and a specialized disk controller that contains the RAIT(D) functionality. RAIT(D) improves performance by disk striping, which interleaves bytes or groups of bytes across multiple tapes/drives, so more than one tape/disk is reading and writing simultaneously. Fault tolerance is achieved by means of mirroring or parity. Mirroring comprises 100% duplication of the data on two drives (RAIT(D) 1). Parity is used (RAIT(D) 3 and 5) to calculate the data in two drives and store the results on a third. A bit from drive 1 is XOR'd with a bit from drive 2, and the result bit is stored on drive 3. A failed drive can be hot swapped with a new one, and the RAIT(D) controller automatically rebuilds the lost data.
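  • As a minimal illustration of that XOR relationship, the sketch below (with invented byte values) generates parity from two drives and rebuilds one of them; it is an example only, not the patent's mechanism.

```python
# Minimal sketch of the XOR parity described above: each byte from drive 1 is
# XORed with the corresponding byte from drive 2 and the result stored on drive 3.
drive1 = bytes([0x12, 0x34, 0x56])
drive2 = bytes([0xA0, 0x0B, 0xFF])
parity = bytes(a ^ b for a, b in zip(drive1, drive2))  # contents of drive 3

# If drive 2 is lost, XORing the survivors recovers its contents.
rebuilt = bytes(a ^ p for a, p in zip(drive1, parity))
assert rebuilt == drive2
```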
  • Increasingly, data storage involves the use of computer systems via network connections. The use of storage that has been mapped into a different presentation from that available directly from the storage devices is known as storage virtualization. It is important to have the ability to manipulate the functions and structures that create and define a virtual volume, in order to have independence between the boundaries of individual blocks written by the host system and the boundaries of physical entities used for striping and parity.
  • Current data storage systems do not support the ability to set the level of striping and redundancy arbitrarily large or small for specific data files on a file-by-file basis. Arbitrarily large redundancy, when combined with arbitrarily large striping, generally imposes a large buffering requirement on the processing system. If the system uses small blocks of data for iterations of the process, the performance of the memory bus is adversely affected. This does not support the ability to experience a physical device failure and continue processing a file.
  • Correcting multiple errors that occur simultaneously is time consuming and significantly reduces system performance. Correction of errors offline requires the full set of striping and redundancy resources to be allocated for the process. Solving these limitations will advance the state of the art in virtualized storage systems.
  • SUMMARY OF THE INVENTION
  • The present invention provides a method, system and program for generating parity in a data storage system. The invention comprises organizing an incoming data block into a specified number of data stripes and cascading the data stripes into a parity creation mechanism. The parity creation mechanism creates a specified number of parity stripes based on the data stripes, wherein the number of parity stripes is independent of the size of the data block. The parity creation mechanism can operate offline to reconstruct lost data stripes and parity stripes without using critical system resources, wherein the number of devices required for stripe reconstruction is less than the combined number of data stripes and parity stripes.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The novel features believed characteristic of the invention are set forth in the appended claims. The invention itself, however, as well as a preferred mode of use, further objectives and advantages thereof, will best be understood by reference to the following detailed description of an illustrative embodiment when read in conjunction with the accompanying drawings, wherein:
  • FIG. 1 depicts a diagram illustrating a RAID system volume group containing multiple volumes, in which the present invention may be implemented;
  • FIG. 2 depicts a schematic diagram illustrating a RAIT(D) write operation in accordance with the present invention;
  • FIG. 3 depicts a schematic diagram illustrating a RAIT(D) read operation in accordance with the present invention;
  • FIGS. 4A and 4B depict diagrams illustrating a method of parity generation in accordance with the present invention;
  • FIGS. 5A and 5B depict diagrams illustrating an alternate method of parity generation in accordance with a preferred embodiment of the present invention;
  • FIGS. 6A and 6B depict diagrams illustrating a method of data stripe reconstruction in accordance with the present invention; and
  • FIGS. 7A and 7B depict diagrams illustrating an alternate method of data stripe reconstruction in accordance with a preferred embodiment of the present invention.
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
  • With reference now to the figures, and in particular with reference to FIG. 1, a diagram is depicted illustrating a RAID system volume group containing multiple volumes, in which the present invention may be implemented. The following description refers specifically to a RAID example but applies equally to RAIT systems or a combination of disks and tapes.
  • The RAID storage system 100 is divided into multiple (n) drive modules 1 (110) through n (130), each of which in turn comprises multiple (n) storage drives. Users can create volumes for physical data storage across a collection of drives. For example, in FIG. 1, the data in volume A is divided into n sections (n being equal to the number of drive modules) and each section is stored on the first respective drive in each drive module. Therefore, section A-1 is stored on Drive 1 (111) in Module 1 (110), section A-2 is stored on Drive 1 (121) in Module 2 (120), and section A-n is stored on Drive 1 (131) in Module n (130).
  • Furthermore, multiple volumes created on the same set of drives (e.g., the first respective drives in each module) are grouped into an entity called a volume group. In FIG. 1, volume group 140 comprises three volumes A, B and C. Building on the example above, sections A-1, B-1, and C-1 are stored on Drive 1 (111) in Module 1 (110), sections A-2, B-2, and C-2 are stored on Drive 1 (121) in Module 2 (120), etc. As a further example, a second volume group, e.g., volumes D, E and F, might be stored on the second respective drives in each module.
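  • As a toy sketch of this placement rule (the helper below is hypothetical and only mirrors the example in FIG. 1), section k of a volume in this volume group lands on the same drive slot of module k:

```python
# Hypothetical placement helper: section k of a volume is stored on the same
# drive slot of module k (e.g., A-2 goes to Drive 1 of Module 2).
def place_section(volume: str, section: int, drive_slot: int = 1) -> str:
    module = section  # the section number selects the module
    return f"{volume}-{section} -> Module {module}, Drive {drive_slot}"

print(place_section("A", 2))  # A-2 -> Module 2, Drive 1
```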
  • The volume group 140 is assigned a specific RAID level by the user, which defines how the data will be striped across the set of drives and what kind of redundancy scheme is used. Any remaining capacity on a volume group can be used to create additional volumes or expand the capacity of the existing volumes.
  • Referring to FIG. 2, a schematic diagram illustrating a RAIT(D) write operation is depicted in accordance with the present invention. The first part of implementing the present invention is establishing the level of performance and redundancy required for writing the data supplied by the using system. For a virtual tape, this can be done in relation to the mount or with respect to a file command. For a disk, it can be done relative to extent boundaries on the virtual disk. For write operations, the specification of performance and redundancy is supplied externally across a using interface (201).
  • With these parameters set, when data blocks 202 to be written to storage are received from a using system 200, the data is organized into n streams 203, where n is the number of parallel operations required for performance. The n streams are then cascaded into the parity generation/data reconstruction system 204 (which can be hardware or software, described in more detail below) in order to create p units of parity in such a manner that the memory holding the data is used in burst fashion. This is done on a section-by-section basis, where the sections that are processed together for parity can then be handed off to other functions for further analysis and processing before being written to the device. The various possible data manipulations can be executed in parallel on different sets of data sections. This process is called a pipeline.
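  • One way the stream organization could look is sketched below; the fixed section size and round-robin placement are assumptions for the example, not details taken from the patent.

```python
# Sketch: organize an incoming data block into n streams, section by section,
# so the parity step can consume one section from each stream in a burst.
SECTION_SIZE = 4096  # assumed section size in bytes

def split_into_streams(block: bytes, n: int) -> list[list[bytes]]:
    sections = [block[i:i + SECTION_SIZE] for i in range(0, len(block), SECTION_SIZE)]
    streams: list[list[bytes]] = [[] for _ in range(n)]
    for idx, section in enumerate(sections):
        streams[idx % n].append(section)  # round-robin sections across the n streams
    return streams
```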
  • The second part of the solution is to allow the n stripes or the p parity stripes to be modified at arbitrary points in the processing stream. This modification is performed by parameter control at the interface to the parity creation system. The using system 200 can set the parameter control via any type of link, which may include the same link 201 used for transferring client data 202, or another link. Unless instructed otherwise, the system uses the same set of parameters as for the last operation. However, the interface allows a new set of parameters for every requested operation. Once n sections of data have been sent to the parity generation/data reconstruction system 204, all p sets of parity are simultaneously available to be written to storage.
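  • A small, hypothetical sketch of such parameter control, in which any value left unspecified for a new operation falls back to the previous operation's setting:

```python
# Hypothetical parameter control: each requested operation may carry new
# (n, p) values; anything unspecified reuses the last operation's values.
class StripeParams:
    def __init__(self, n: int, p: int):
        self.n, self.p = n, p

    def next_operation(self, n: int | None = None, p: int | None = None) -> "StripeParams":
        return StripeParams(n if n is not None else self.n,
                            p if p is not None else self.p)

params = StripeParams(n=8, p=3)
params = params.next_operation(n=7)  # e.g., continue at (n-1)+p after a device failure
```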
  • The third part of the invention is to allow the parity generation/data reconstruction 204 system to be implemented in an off line set of resources 205 in such a manner that less than n+p devices are required to do the data manipulations. For example, in the extreme, only one input tape drive 206 might be required to allow the data to be manipulated. This allows a stream initially written at n+p to be arbitrarily modified (for example to (n−1)+p when one of the physical devices 207 fails during the operation) and then reconstructed back to n+p offline, thus avoiding the use of critical system resources for the reconstruction.
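  • As a rough sketch of why fewer than n+p devices can suffice, the surviving stripes can be mounted and folded in one at a time; the mount_and_read callback and the single-loss, straight-parity case are assumptions made for the example.

```python
# Sketch: offline repair of one lost stripe using the straight parity and a
# single input device. Stripes are mounted and read one at a time and XORed
# into an accumulator, so far fewer than n+p drives are needed at any moment.
def offline_rebuild(mount_and_read, surviving_ids, parity_id, stripe_len):
    acc = bytearray(stripe_len)
    for sid in [parity_id, *surviving_ids]:
        data = mount_and_read(sid)  # hypothetical I/O callback; one device at a time
        for i, b in enumerate(data):
            acc[i] ^= b
    return bytes(acc)
```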
  • The advantage of this approach is high throughput and a completed process, even if some of the physical resources (up to p) fail during the operation. In such a case, failures can be repaired offline. Additional advantages include pipelining and re-blocking at low memory cost. The client host can also have a multiplexed or parallel data stream.
  • Referring to FIG. 3, a schematic diagram illustrating a RAIT(D) read operation is depicted in accordance with the present invention. Similar to the write process, the process begins by establishing the level of performance and redundancy available for reading, and presenting it to the using system. As stated above, for a virtual tape this can be done in relation to the mount or a file command, and for disk, it can be done relative to extent boundaries on the virtual disk. For read operations, the specification is retrieved from metadata 304 stored in association with the data.
  • When n sets of data and/or parity are given to the parity generation/data reconstruction system 303 for a read operation, the rest of the data (or parity) is created in real time. Data reorganization component 302 collects all the read data stripes (not the parity stripes, which are used only for reconstructing missing data stripes), plus the reconstructed ones (up to p), and reorganizes them in order to send the original client data 301 back to the using system 300.
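  • A sketch of that reorganization step, assuming the same round-robin, fixed-size-section layout used in the striping sketch earlier:

```python
# Sketch: merge the n data streams (including any reconstructed ones) back
# into the original byte order of the client data, section by section.
def merge_streams(streams: list[list[bytes]]) -> bytes:
    out: list[bytes] = []
    depth = max(len(s) for s in streams)
    for idx in range(depth):
        for stream in streams:
            if idx < len(stream):
                out.append(stream[idx])
    return b"".join(out)
```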
  • The read operation implemented with the present invention has the advantage of recovering customer data even if some physical resources (up to p) are not mounted, not available, or fail during the operation.
  • Referring to FIGS. 4A and 4B, diagrams illustrating a method of parity generation are depicted in accordance with the present invention. FIGS. 4A and 4B depict the “heavy” method of parity generation. As explained above, the original customer data block is split into n data stripes S1-Sn. The first step in parity generation is to read and store the full customer data block in the n stripes. In FIG. 4A, the data on stripes S1-S3 has been read and stored. In stripe S4, only the first ten sections have been recorded, with section S 4,11 401 receiving incoming data. Stripes S5-Sn have not yet been read and stored.
  • After the data on stripes S1-Sn has been read and stored, p parity units are computed from the data, as depicted in FIG. 4B. In the present example, there are three parity stripes: a positive diagonal parity P+1, a straight parity P0, and a negative diagonal parity P−1. The number of parities can be larger or smaller. A p value of three is a good tradeoff for values of n up to 12. For larger numbers of stripes (e.g., 15, 20), the optimal value of p will probably be higher (e.g., 4, 5) in order to ensure the same level of reliability.
  • Each parity stripe Px is generated given its slope x: P0, P+1, P−1, P+2, P−2, etc. The parity calculation is:
     Px,y=S1,j(1) ⊕ S2,j(2) ⊕ . . . ⊕ Sn,j(n), where j(i) = y + x·(i − max(x·n/|x|, 1))
     and ⊕ stands for a sum of XOR operations.
  • For data indexed at a position y, the parity calculation gives:
      • For the straight parity (the vertical column in FIG. 4B): P0,y=S1,y ⊕ S2,y ⊕ . . . ⊕ Sn,y
      • For the first positive parity (the positive diagonal of FIG. 4B): P+1,y=S1,y+1−n ⊕ S2,y+2−n ⊕ . . . ⊕ Sn,y
      • For the first negative parity (the negative diagonal of FIG. 4B): P−1,y=S1,y ⊕ S2,y−1 ⊕ . . . ⊕ Sn,y−n+1
      • Etc.
        where ⊕ stands for an XOR operation.
  • The advantage of the heavy parity method is that it uses a simple fixed structure of pointers for computing parity stripes. The disadvantage is the need to store and handle the incoming data stripes, plus the parity stripes.
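  • A sketch of the heavy method is shown below under some stated assumptions: indices are 0-based, every data stripe holds m equal-size sections, and sections falling outside a stripe are treated as all-zero so the diagonals stay well defined (the patent's exact boundary handling is not reproduced here).

```python
# "Heavy" parity sketch: all data stripes are already stored; the parity stripe
# of slope x is built by XORing one section from every data stripe along the
# corresponding diagonal. Out-of-range sections count as zero.
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def heavy_parity(stripes: list[list[bytes]], slope: int, section_size: int) -> list[bytes]:
    n, m = len(stripes), len(stripes[0])
    zero = bytes(section_size)
    parity_len = m + abs(slope) * (n - 1)  # diagonals span extra sections
    parity = []
    for y in range(parity_len):
        acc = zero
        for i in range(n):
            # slope 0: same column; slope +1 / -1: positive / negative diagonal
            src = y - slope * (n - 1 - i) if slope >= 0 else y + slope * i
            if 0 <= src < m:
                acc = xor_bytes(acc, stripes[i][src])
        parity.append(acc)
    return parity

# e.g., p = 3 parity stripes with slopes +1, 0, -1:
# parities = [heavy_parity(stripes, s, section_size) for s in (1, 0, -1)]
```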
  • Referring to FIGS. 5A and 5B, diagrams illustrating an alternate method of parity generation are depicted in accordance with a preferred embodiment of the present invention. FIGS. 5A and 5B depict the “light” method of parity generation. As with the previous method, the data block is split into n stripes S1-Sn. However, unlike the heavy method, the data is not physically stored after it is read. Instead, the data is read and thrown away. The data is read stripe by stripe, while intermediate parity values are computed. Again, there are positive diagonal P+1, straight P0, and negative diagonal parities P−1. In FIG. 5A, incoming data is being read from the 11th section S 4,11 501 in data stripe S4, and intermediate parity values are being computed and stored to the 15th section P +1,15 502 of parity stripe P+1, the 11th section P 0,11 503 of parity stripe P0, and the 14th section P −1,14 504 of parity stripe P−1.
  • The XOR operation is commutative. The following explanation takes the case of the straight parity, P0, since it is the easiest example.
  • As indicated above, for data indexed at a position y:
    P0,y=S1,y ⊕ S2,y ⊕ . . . ⊕ Sn,y.
    This can also be written as:
     P0,y=( . . . (S1,y ⊕ S2,y) ⊕ . . . ⊕ Sn,y)
     where P1 0,y=S1,y ⊕ S2,y is the first intermediate value. The second intermediate value is P2 0,y=P1 0,y ⊕ S3,y, and so on, up to the final value Pn−1 0,y=Pn−2 0,y ⊕ Sn,y, which is equal to P0,y.
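  • A streaming sketch of this iteration for the straight parity P0 alone follows; the stripe_reader callback is hypothetical, and the diagonal parities would be updated the same way with shifted target indices.

```python
# "Light" parity sketch for P0: each data section is read, XORed into the
# running intermediate parity, and then discarded; after the last stripe the
# intermediate values are the final parity values.
def light_straight_parity(stripe_reader, n: int, m: int, section_size: int) -> list[bytes]:
    parity = [bytearray(section_size) for _ in range(m)]
    for i in range(n):                      # read stripe by stripe
        for y in range(m):
            section = stripe_reader(i, y)   # hypothetical read callback
            for k, b in enumerate(section):
                parity[y][k] ^= b           # fold in, then drop the section
    return [bytes(p) for p in parity]
```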
  • FIG. 5B depicts the completed parity stripes P+1, P0, and P−1 securing data stripes S1-Sn. No further calculations are required after the intermediate parity is calculated during the initial read of the data stripes S1-Sn. The intermediate parity values become the final parity values securing the customer data block.
  • This light method of parity generation has the advantages of needing to store/handle only the p iteratively computed parity stripes and requiring only one step. The disadvantage of this approach is the need for an adaptable matrix of pointers for computing parity stripes.
  • Referring now to FIGS. 6A and 6B, diagrams illustrating a method of data stripe reconstruction are depicted in accordance with the present invention. The method depicted is the “heavy” method of stripe reconstruction. As depicted in FIG. 6A, data stripes S3, S5, and Sn are missing, and the data on those stripes is considered lost. To recover the lost data, the remaining data is read and stored from the available data stripes S1-S2, S4, and S6-S7, in addition to the parity stripes P+1, P0, and P−1.
  • After the available data and parity have been read and stored, the missing data stripes S3, S5, and Sn are reconstructed by successive iterations, as depicted in FIG. 6B.
  • The heavy method of stripe reconstruction has the advantage of using a fixed structure of pointers for computing the reconstructed data stripes. The disadvantage of this method is the need to store/handle all of the available data and parity stripes, plus the missing stripes reconstructed later during the second step.
  • For parity generation, the differences between the “heavy” method and the “light” one (which computes the parities iteratively) lead to small impacts on the matrix of pointers. However, this is not the case when considering stripe reconstruction.
  • In the “heavy” method, each block of memory (from either the n data stripes or the p parity stripes) has a “fixed” position related to other blocks. For example, during the parity generation process, the “S2,y” block is implied in computation of P0,y, P+1,y+n−2, P−1,y+1, etc. Therefore, during the data reconstruction process, the same “S2,y” block could be repaired by members of:
    P0,y(P0,y plus S1,y, S3,y, S4,y . . . Sn,y)
    P+1,y+n−2(P+1,y+n−2 plus S1,y−1, S3,y+1, S4,y+2 . . . Sn,y+n−2) or
    P−1,y+1(P−1,y+1 plus S1,y+1, S3,y−1, S4,y−2 . . . Sn,y−n+2)
    and so on (if there are more parities with higher slopes).
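  • For instance, the first of those repair choices (through the straight parity P0) could look like the sketch below, which XORs P0 with every surviving data stripe to recover a single missing one; equal-size sections are assumed.

```python
# Sketch: recover one missing data stripe via the straight parity P0 alone.
# `surviving` holds the other n-1 data stripes, each a list of sections.
def repair_via_p0(surviving: list[list[bytes]], p0: list[bytes]) -> list[bytes]:
    repaired = [bytearray(section) for section in p0]
    for stripe in surviving:
        for y, section in enumerate(stripe):
            for k, b in enumerate(section):
                repaired[y][k] ^= b
    return [bytes(section) for section in repaired]
```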
  • In the “light” method, only the p iteratively computed stripes (data and/or parity) are stored. As they can represent any combination among the n+p, their blocks of memory no longer have “fixed” positions related to other ones. In order to mitigate this, the matrix of pointers is adaptable and sets the relations across blocks of memory accordingly to which stripes (data or parity) are being reconstructed/generated.
  • Referring now to FIGS. 7A and 7B, diagrams illustrating an alternate method of data stripe reconstruction are depicted in accordance with a preferred embodiment of the present invention. This alternate embodiment uses the “light” method for stripe reconstruction. As in FIG. 6A, the missing stripes in FIG. 7A are S3, S5, and Sn. Rather than reading and storing all of the remaining data and parity stripes, the stripes are read one by one, with the intermediate values of the missing stripes computed along the way. Because the intermediate values are being computed as the available data is read, there is no need to store the read data, so it is discarded.
  • When several stripes are reconstructed, they are iteratively computed while reading available data/parity stripes, as depicted in FIG. 7A. However, as their own values are linked together, a second pass, shown in FIG. 7B, is done in order to get the final values. Reconstruction is finished by propagating a correction, starting from the lowest memory blocks (y==1) to the highest ones (y>>1), first recovering the data via the diagonal parities, then via the straight parity.
  • In FIGS. 7A and 7B, stripes S3, S5, and Sn are missing and will be reconstructed. The lowest index one (S3) will be reconstructed via the first negative parity P−1, the highest index one (Sn) will be reconstructed via the first positive parity P+1 and the middle index one (alias S5) will be reconstructed via the Straight parity P0.
  • The intermediate values computed in the first step of “light” stripe reconstruction are:
    S′ 3,y =P −1,y+2 ⊕ S 1,y+2 ⊕ S 2,y+1 ⊕ S 4,y−1 ⊕ S 6,y−3 ⊕ S n−1,y−n+4 (Stripes S5 and Sn are missing)
    S′ n,y =P +1,y ⊕ S 1,y−n+1 ⊕ S 2,y−n+2 ⊕ S 4,y−n+4 ⊕ S 6,y−n+6 ⊕ S n−1,y−1 (Stripes S3 and S5 are missing)
    S′ 5,y =P 0,y ⊕ S 1,y ⊕ S 2,y ⊕ S 4,y ⊕ S 6,y ⊕ S n−1,y (Stripes S3 and Sn are missing)
  • The second step in the “light” reconstruction method is to determine the final values by propagating the correction from the lowest memory blocks (y==1) to the highest ones (y>>1).
  • Starting with y==1, the first values recovered are:
     Sn,1=S′n,1 and S3,1=S′3,1 ⊕ S1,2 ⊕ S2,3
     followed by S5,1=S′5,1 ⊕ S3,1 ⊕ Sn,1
    This is continued until reaching and recovering the highest memory block (y>>1).
  • The advantages of using the light method of stripe reconstruction are the need to store/handle only the iteratively reconstructed stripes and the simplification of the second step. The disadvantage of this method is the need for an adaptable matrix of pointers for computing the missing stripes.
  • It is important to note that while the present invention has been described in the context of a fully functioning data processing system, those of ordinary skill in the art will appreciate that the processes of the present invention are capable of being distributed in the form of a computer readable medium of instructions and a variety of forms and that the present invention applies equally regardless of the particular type of signal bearing media actually used to carry out the distribution. Examples of computer readable media include recordable-type media such as a floppy disc, a hard disk drive, a RAM, CD-ROMs, and transmission-type media such as digital and analog communications links.
  • The description of the present invention has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art. The embodiment was chosen and described in order to best explain the principles of the invention, the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.

Claims (19)

1-21. (canceled)
22. A method for storing data in a data storage system, the method comprising the steps of:
organizing incoming data into a variable number of data stripes, wherein the number of data stripes is independent of the number of storage units used to store the data stripes;
cascading the data stripes into a redundancy creation mechanism to create redundancy data; and
storing the data stripes and the redundancy data on a plurality of storage units, wherein the size of the data stripes is independent of the capacity of the storage units used to store the data stripes.
23. The method of claim 22, wherein the storage unit is at least one of a storage device, a storage media, a storage system, a hierarchical storage system, or a RAID storage system.
24. The method of claim 22, wherein the redundancy creation mechanism is independent of the size of the incoming data.
25. The method of claim 22, further comprising:
modifying parameters that define and control the data stripes and the redundancy data at an arbitrary point.
26. The method of claim 25, further comprising:
storing metadata that contains the parameters that define and control translation of the incoming data into the data stripes and the redundancy data to insure that concomitant translation is used in reading the data.
27. The method of claim 22, wherein the data stripes and the redundancy data are stored on less than the entire plurality of storage units.
28. A computer program product comprising a computer usable medium including computer usable program code for storing data in a data storage system, the computer program product comprising:
computer usable program code for organizing incoming data into a variable number of data stripes, wherein the number of data stripes is independent of the number of storage units used to store the data stripes;
computer usable program code for cascading the data stripes into a redundancy creation mechanism to create redundancy data; and
computer usable program code for storing the data stripes and the redundancy data on a plurality of storage units, wherein the size of the data stripes is independent of the capacity of the storage units used to store the data stripes.
29. The computer program product of claim 28, wherein the storage unit is at least one of a storage device, a storage media, a storage system, a hierarchical storage system, or a RAID storage system.
30. The computer program product of claim 28, wherein the redundancy creation mechanism is independent of the size of the incoming data.
31. The computer program product of claim 28, further comprising:
computer usable program code for modifying parameters that define and control the data stripes and the redundancy data at an arbitrary point.
32. The computer program product of claim 31, further comprising:
computer usable program code for storing metadata that contains the parameters that define and control translation of the incoming data into the data stripes and the redundancy data to insure that concomitant translation is used in reading the data.
33. The computer program product of claim 28, wherein the data stripes and the redundancy data are stored on less than the entire plurality of storage units.
34. A data processing system for storing data in a data storage system, the data processing system comprising:
a storage device, wherein the storage device stores computer usable program code; and
a processor, wherein the processor executes the computer usable program code to organize incoming data into a variable number of data stripes, wherein the number of data stripes is independent of the number of storage units used to store the data stripes; cascade the data stripes into a redundancy creation mechanism to create redundancy data; and store the data stripes and the redundancy data on a plurality of storage units, wherein the size of the data stripes is independent of the capacity of the storage units used to store the data stripes.
35. The data processing system of claim 34, wherein the storage unit is at least one of a storage device, a storage media, a storage system, a hierarchical storage system, or a RAID storage system.
36. The data processing system of claim 34, wherein the redundancy creation mechanism is independent of the size of the incoming data.
37. The data processing system of claim 34, wherein the processor further executes computer usable program code for modifying parameters that define and control the data stripes and the redundancy data at an arbitrary point.
38. The data processing system of claim 37, wherein the processor further executes computer usable program code for storing metadata that contains the parameters that define and control translation of the incoming data into the data stripes and the redundancy data to insure that concomitant translation is used in reading the data.
39. The data processing system of claim 34, wherein the data stripes and the redundancy data are stored on less than the entire plurality of storage units.
US11/338,088 2002-10-25 2006-01-24 Process for generating and reconstructing variable number of parity for byte streams independent of host block size Abandoned US20060136778A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/338,088 US20060136778A1 (en) 2002-10-25 2006-01-24 Process for generating and reconstructing variable number of parity for byte streams independent of host block size

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US10/280,426 US7155634B1 (en) 2002-10-25 2002-10-25 Process for generating and reconstructing variable number of parity for byte streams independent of host block size
US11/338,088 US20060136778A1 (en) 2002-10-25 2006-01-24 Process for generating and reconstructing variable number of parity for byte streams independent of host block size

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US10/280,426 Continuation US7155634B1 (en) 2002-10-25 2002-10-25 Process for generating and reconstructing variable number of parity for byte streams independent of host block size

Publications (1)

Publication Number Publication Date
US20060136778A1 true US20060136778A1 (en) 2006-06-22

Family

ID=36597609

Family Applications (2)

Application Number Title Priority Date Filing Date
US10/280,426 Expired - Lifetime US7155634B1 (en) 2002-10-25 2002-10-25 Process for generating and reconstructing variable number of parity for byte streams independent of host block size
US11/338,088 Abandoned US20060136778A1 (en) 2002-10-25 2006-01-24 Process for generating and reconstructing variable number of parity for byte streams independent of host block size

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US10/280,426 Expired - Lifetime US7155634B1 (en) 2002-10-25 2002-10-25 Process for generating and reconstructing variable number of parity for byte streams independent of host block size

Country Status (1)

Country Link
US (2) US7155634B1 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060248378A1 (en) * 2005-04-29 2006-11-02 Network Appliance, Inc. Lost writes detection in a redundancy group based on RAID with multiple parity
US20090144347A1 (en) * 2007-11-30 2009-06-04 Boyd James A Storage volume spanning with intelligent file placement and/or rearrangement
US20090222622A1 (en) * 2008-02-28 2009-09-03 Harris Corporation, Corporation Of The State Of Delaware Video media data storage system and related methods
US9454333B2 (en) 2014-10-27 2016-09-27 International Business Machines Corporation Parity logs for RAID systems with variable capacity media
US9495247B2 (en) 2014-10-27 2016-11-15 International Business Machines Corporation Time multiplexed redundant array of independent tapes
CN109558070A (en) * 2017-09-27 2019-04-02 北京忆恒创源科技有限公司 Scalable storage framework

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7308599B2 (en) * 2003-06-09 2007-12-11 Hewlett-Packard Development Company, L.P. Method and apparatus for data reconstruction after failure of a storage device in a storage array
US7870464B2 (en) * 2004-11-02 2011-01-11 International Business Machines Corporation System and method for recovery of data for a lost sector in a storage system
US20060123321A1 (en) 2004-11-22 2006-06-08 International Business Machines Corporation System and method for reconstructing lost data in a storage system
US20060253729A1 (en) * 2005-05-03 2006-11-09 International Business Machines Corporation Outboard swap of defective device in a storage subsystem
US20080201524A1 (en) * 2007-02-15 2008-08-21 Harris Corporation System and method for increasing video server storage bandwidth
US7809880B2 (en) * 2007-09-17 2010-10-05 L-3 Communications Integrated Systems L.P. Systems and methods for managing tape drive operation
US8751858B2 (en) 2011-05-20 2014-06-10 International Business Machines Corporation System, method, and computer program product for physical drive failure identification, prevention, and minimization of firmware revisions
US9454426B2 (en) 2014-07-07 2016-09-27 International Business Machines Corporation Codes of length tn invariant under rotations of order n
US9715418B2 (en) 2014-12-02 2017-07-25 International Business Machines Corporation Performance problem detection in arrays of similar hardware
WO2017050346A1 (en) * 2015-09-21 2017-03-30 Lantiq Beteiligungs-GmbH & Co. KG Intrusion prevention
US10929226B1 (en) 2017-11-21 2021-02-23 Pure Storage, Inc. Providing for increased flexibility for large scale parity

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4205324A (en) * 1977-12-23 1980-05-27 International Business Machines Corporation Methods and means for simultaneously correcting several channels in error in a parallel multi channel data system using continuously modifiable syndromes and selective generation of internal channel pointers
US4771458A (en) * 1987-03-12 1988-09-13 Zenith Electronics Corporation Secure data packet transmission system and method
US6289415B1 (en) * 1998-12-23 2001-09-11 At&T Corporation System and method for storage media group parity protection
US6347359B1 (en) * 1998-02-27 2002-02-12 Aiwa Raid Technology, Inc. Method for reconfiguration of RAID data storage systems
US20020095616A1 (en) * 2000-12-29 2002-07-18 Busser Richard W. Recovering data from arrays of storage devices after certain failures
US6529997B1 (en) * 2000-08-11 2003-03-04 Storage Technology Corporation Apparatus and method for writing and reading data to and from a virtual volume of redundant storage devices
US6581185B1 (en) * 2000-01-24 2003-06-17 Storage Technology Corporation Apparatus and method for reconstructing data using cross-parity stripes on storage media
US20040205387A1 (en) * 2002-03-21 2004-10-14 Kleiman Steven R. Method for writing contiguous arrays of stripes in a RAID storage system
US6848022B2 (en) * 2002-10-02 2005-01-25 Adaptec, Inc. Disk array fault tolerant method and system using two-dimensional parity

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4205324A (en) * 1977-12-23 1980-05-27 International Business Machines Corporation Methods and means for simultaneously correcting several channels in error in a parallel multi channel data system using continuously modifiable syndromes and selective generation of internal channel pointers
US4771458A (en) * 1987-03-12 1988-09-13 Zenith Electronics Corporation Secure data packet transmission system and method
US6347359B1 (en) * 1998-02-27 2002-02-12 Aiwa Raid Technology, Inc. Method for reconfiguration of RAID data storage systems
US6289415B1 (en) * 1998-12-23 2001-09-11 At&T Corporation System and method for storage media group parity protection
US6581185B1 (en) * 2000-01-24 2003-06-17 Storage Technology Corporation Apparatus and method for reconstructing data using cross-parity stripes on storage media
US6529997B1 (en) * 2000-08-11 2003-03-04 Storage Technology Corporation Apparatus and method for writing and reading data to and from a virtual volume of redundant storage devices
US20020095616A1 (en) * 2000-12-29 2002-07-18 Busser Richard W. Recovering data from arrays of storage devices after certain failures
US20040205387A1 (en) * 2002-03-21 2004-10-14 Kleiman Steven R. Method for writing contiguous arrays of stripes in a RAID storage system
US6848022B2 (en) * 2002-10-02 2005-01-25 Adaptec, Inc. Disk array fault tolerant method and system using two-dimensional parity

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060248378A1 (en) * 2005-04-29 2006-11-02 Network Appliance, Inc. Lost writes detection in a redundancy group based on RAID with multiple parity
US7552357B2 (en) * 2005-04-29 2009-06-23 Network Appliance, Inc. Lost writes detection in a redundancy group based on RAID with multiple parity
US20090144347A1 (en) * 2007-11-30 2009-06-04 Boyd James A Storage volume spanning with intelligent file placement and/or rearrangement
WO2009073353A1 (en) * 2007-11-30 2009-06-11 Intel Corporation Storage volume spanning with intelligent file placement and/or rearrangement
GB2467287A (en) * 2007-11-30 2010-07-28 Intel Corp Storage volume spanning with intelligent file placement and/or rearrangement
GB2467287B (en) * 2007-11-30 2013-02-13 Intel Corp Storage volume spanning with intelligent file placement and/or rearrangement
US20090222622A1 (en) * 2008-02-28 2009-09-03 Harris Corporation, Corporation Of The State Of Delaware Video media data storage system and related methods
US7899988B2 (en) * 2008-02-28 2011-03-01 Harris Corporation Video media data storage system and related methods
US9454333B2 (en) 2014-10-27 2016-09-27 International Business Machines Corporation Parity logs for RAID systems with variable capacity media
US9495247B2 (en) 2014-10-27 2016-11-15 International Business Machines Corporation Time multiplexed redundant array of independent tapes
CN109558070A (en) * 2017-09-27 2019-04-02 北京忆恒创源科技有限公司 Scalable storage framework

Also Published As

Publication number Publication date
US7155634B1 (en) 2006-12-26

Similar Documents

Publication Publication Date Title
US20060136778A1 (en) Process for generating and reconstructing variable number of parity for byte streams independent of host block size
JP5102776B2 (en) Triple parity technology that enables efficient recovery from triple failures in storage arrays
US8839028B1 (en) Managing data availability in storage systems
JP3742494B2 (en) Mass storage device
US7743275B1 (en) Fault tolerant distributed storage method and controller using (N,K) algorithms
US7188270B1 (en) Method and system for a disk fault tolerance in a disk array using rotating parity
US8060772B2 (en) Storage redundant array of independent drives
US7647526B1 (en) Reducing reconstruct input/output operations in storage systems
US7752389B1 (en) Techniques for determining physical data layout of RAID devices
CN100390745C (en) Apparatus and method to check data integrity when handling data
JP4664988B2 (en) Method and system for syndrome generation and data recovery
US20120192037A1 (en) Data storage systems and methods having block group error correction for repairing unrecoverable read errors
US7890795B1 (en) Auto-adapting cache memory system and memory
US20100191907A1 (en) RAID Converter and Methods for Transforming a First RAID Array to a Second RAID Array Without Creating a Backup Copy
US7694171B2 (en) Raid5 error recovery logic
JP2000207136A (en) Multi-drive fault-tolerance raid algorithm
US7743308B2 (en) Method and system for wire-speed parity generation and data rebuild in RAID systems
JP2018508073A (en) Data removal, allocation and reconstruction
JP2013506191A (en) N-way parity technology that enables recovery from up to N storage device failures
US20040078642A1 (en) Method and system for disk fault tolerance in a disk array
US7689877B2 (en) Method and system using checksums to repair data
US8032703B2 (en) Storage apparatus and failure recovery method
US7240237B2 (en) Method and system for high bandwidth fault tolerance in a storage subsystem
US6792391B1 (en) Method and system for three disk fault tolerance in a disk array
US20030163757A1 (en) RAID subsystem and data input/output and recovery method in disk error mode

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION