US20020194529A1 - Resynchronization of mirrored storage devices - Google Patents

Resynchronization of mirrored storage devices

Info

Publication number
US20020194529A1
Authority
US
United States
Prior art keywords
storage
storage device
data
usage information
resynchronizing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/154,414
Inventor
Douglas Doucette
Stephen Strange
Srinivasan Viswanathan
Steven Kleiman
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NetApp Inc
Original Assignee
Network Appliance Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Network Appliance Inc
Priority to US10/154,414
Priority to US10/225,453 (published as US7143249B2)
Assigned to NETWORK APPLIANCE, INC. Assignors: VISWANATHAN, SRINIVASAN; KLEIMAN, STEVEN R.; DOUCETTE, DOUGLAS P.; STRANGE, STEPHEN H.
Publication of US20020194529A1
Status: Abandoned

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16Error detection or correction of the data by redundancy in hardware
    • G06F11/20Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F11/2053Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
    • G06F11/2056Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring
    • G06F11/2082Data synchronisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/16Error detection or correction of the data by redundancy in hardware
    • G06F11/20Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F11/2053Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant
    • G06F11/2056Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring
    • G06F11/2064Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where persistent mass storage functionality or persistent mass storage control functionality is redundant by mirroring while ensuring consistency
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/14Error detection or correction of the data by redundancy in operation
    • G06F11/1402Saving, restoring, recovering or retrying
    • G06F11/1471Saving, restoring, recovering or retrying involving logging of persistent data for recovery
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10STECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S707/00Data processing: database and file management or data structures
    • Y10S707/99951File or database maintenance
    • Y10S707/99952Coherency, e.g. same view to multiple users
    • Y10S707/99953Recoverability
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10STECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S707/00Data processing: database and file management or data structures
    • Y10S707/99951File or database maintenance
    • Y10S707/99952Coherency, e.g. same view to multiple users
    • Y10S707/99955Archiving or backup

Definitions

  • the present invention relates generally to computer systems, and more particularly but not exclusively to file systems and storage devices.
  • Storage devices are employed to store data that are accessed by computer systems. Examples of storage devices include volatile and non-volatile memory, floppy drives, hard disk drives, tape drives, optical drives, etc.
  • a storage device may be locally attached to an input/output (I/O) channel of a computer. For example, a hard disk drive may be connected to a computer's disk controller.
  • a storage device may also be accessible over a network. Examples of such a storage device include network attached storage (NAS) and storage area network (SAN) devices.
  • a storage device may be a single stand-alone component or be comprised of a system of storage devices such as in the case of Redundant Array Of Inexpensive Disks (RAID) groups and some Direct Access Storage Devices (DASD).
  • For mission-critical applications requiring high availability of stored data, various techniques for enhancing data reliability are typically employed.
  • One such technique is to provide a “mirror” for each storage device.
  • data are written to at least two storage devices.
  • data may be read from either of the two storage devices so long as the two devices are operational and contain the same data. That is, either of the two storage devices may process read requests so long as the two devices are in synchronization.
  • a first storage device and a second storage device form a mirrored pair of storage devices.
  • When the first storage device loses synchronization with the second storage device, data present in the second storage device but not in the first storage device are identified. The identified data are then copied to the first storage device.
  • a method of resynchronizing mirrored storage devices includes the act of creating a first storage usage information when both storage devices are accessible. When one of the storage devices goes down and then comes back up, a second storage usage information is created. A difference between the first storage usage information and the second storage usage information is determined and then used to resynchronize the previously down storage device with its mirror.
  • FIG. 1 shows a schematic diagram of an example file layout.
  • FIGS. 2A-2D show schematic diagrams of inode files in the file layout of FIG. 1.
  • FIGS. 3A-3C show schematic diagrams illustrating the creation of a snapshot in the file layout of FIG. 1.
  • FIG. 4 shows a schematic diagram of a computing environment in accordance with an embodiment of the present invention.
  • FIG. 5 shows a logical diagram illustrating the relationship between a file system, a storage device manager, and a storage system in accordance with an embodiment of the present invention.
  • FIG. 6 shows a state diagram of a mirror in accordance with an embodiment of the present invention.
  • FIG. 7 shows a flow diagram of a method of resynchronizing a mirrored storage device in accordance with an embodiment of the present invention.
  • FIGS. 8A and 8B show schematic diagrams further illustrating an action in the flow diagram of FIG. 7.
  • File layout 150 may be adopted by a file system to organize files. Similar file layouts are also disclosed in the following commonly-assigned disclosures, which are incorporated herein by reference in their entirety: (a) U.S. Pat. No. 6,289,356, filed on Sep. 14, 1998; (b) U.S. Pat. No. 5,963,962, filed on Jun. 30, 1998; and (c) U.S. Pat. No. 5,819,292, filed on May 31, 1995. It should be understood, however, that the present invention may also be adapted for use with other file layouts.
  • file layout 150 has a tree structure with a root inode 100 as a base. Root inode 100 includes multiple blocks for describing one or more inode files 110 (i.e., 110 A, 110 B, . . . ). Each inode file 110 contains information about a file in file layout 150 .
  • a file may comprise one or more blocks of data, with each block being a storage location in a storage device.
  • an inode file 110 may contain data or point to blocks containing data.
  • a file may be accessed by consulting root inode 100 to find the inode file 110 that contains or points to the file's data.
  • data file 122 is stored in one or more blocks pointed to by inode 110 B; inode 110 B is in turn identified by root inode 100 .
  • File layout 150 also includes a block map file 120 and an inode map file 121 .
  • Block map file 120 identifies free (i.e., unused) blocks, while inode map file 121 identifies free inodes.
  • Block map file 120 and inode map file 121 may be accessed just like any other file in file layout 150 .
  • block map file 120 and inode map file 121 may be stored in blocks pointed to by an inode file 110 , which is identified by root inode 100 .
  • root inode 100 is stored in a predetermined location in a storage device. This facilitates finding root inode 100 upon system boot-up. Because block map file 120 , inode map file 121 , and inode files 110 may be found by consulting root inode 100 as described above, they may be stored anywhere in the storage device.
  • FIG. 2A there is shown a schematic diagram of an inode file 110 identified by a root inode 100 .
  • An inode file 110 includes a block 111 for storing general inode information such as a file's size, owner, permissions, etc.
  • An inode file 110 also includes one or more blocks 112 (i.e., 112 A, 112 B, . . . ). Depending on the size of the file, blocks 112 may contain the file's data or pointers to the file's data. In the example of FIG. 2A, the file is small enough to fit all of its data in blocks 112 .
  • an inode file 110 includes 16 blocks 112 , with each block 112 accommodating 4 bytes (i.e., 32 bits).
  • files having a size of 64 bytes (i.e., 4 bytes × 16) or less may be stored directly in an inode file 110.
  • FIG. 2B shows a schematic diagram of an inode file 110 that contains pointers in its blocks 112 .
  • a pointer in a block 112 points to a data block 210 (i.e., 210 A, 210 B , . . . ) containing data.
  • each of 16 blocks 112 may point to a 4 KB (kilo-byte) data block 210 .
  • an inode file 110 may accommodate files having a size of 64 KB (i.e., 16 × 4 KB) or less.
  • FIG. 2C shows a schematic diagram of another inode file 110 that contains pointers in its blocks 112 .
  • Each of the blocks 112 points to indirect blocks 220 (i.e., 220 A, 220 B , . . . ), each of which has blocks that point to a data block 230 (i.e., 230 A, 230 B , . . . ) containing data.
  • Pointing to an indirect block 220 allows an inode file 110 to accommodate larger files.
  • an inode file 110 has 16 blocks 112 that each point to an indirect block 220 ; each indirect block 220 in turn has 1024 blocks that each point to a 4 KB data block 230 .
  • an inode file 110 may accommodate files having a size of 64 MB (mega-bytes) (i.e., 16 × 1024 × 4 KB) or less.
  • an inode file 110 may have several levels of indirection to accommodate even larger files.
  • FIG. 2D shows a schematic diagram of an inode file 110 that points to double indirect blocks 240 (i.e., 240 A, 240 B , . . . ), which point to single indirect blocks 250 (i.e., 250 A, 250 B , . . . ), which in turn point to data blocks 260 (i.e., 260 A, 260 B , . . . ).
  • an inode file 110 has 16 blocks 112 that each points to a double indirect block 240 containing 1024 blocks; each block in a double indirect block 240 points to a single indirect block 250 that contains 1024 blocks; each block in a single indirect block 250 points to a 4 KB data block 260 .
  • an inode file 110 may accommodate files having a size of 64 GB (giga-bytes) (i.e., 16 × 1024 × 1024 × 4 KB) or less.
  • FIG. 3A there is shown a schematic diagram of a root inode 100 with one or more branches 310 (i.e., 310 A, 310 B , . . . ).
  • FIG. 3A and the following FIGS. 3B and 3C do not show the details of each branch from a root inode 100 for clarity of illustration.
  • Each branch 310 may include an inode file plus one or more levels of indirection to data blocks, if any.
  • FIG. 3B shows a schematic diagram of a snapshot 300 created by copying a root inode 100 .
  • “Snapshot” is a trademark of Network Appliance, Inc. It is used for purposes of this disclosure to designate a persistent consistency point (CP) image.
  • a persistent consistency point image (PCPI) is a point-in-time representation of the storage system, and more particularly, of the active file system, stored on a storage device (e.g., on disk) or in other persistent memory and having a name or other unique identifier that distinguishes it from other PCPIs taken at other points in time.
  • a PCPI can also include other information (metadata) about the active file system at the particular point in time for which the image is taken.
  • the terms “PCPI” and “snapshot” shall be used interchangeably throughout this disclosure without derogation of Network Appliance's trademark rights.
  • a snapshot 300 being a copy of a root inode 100 , identifies all blocks identified by the root inode 100 at the time snapshot 300 was created. Because a snapshot 300 identifies but does not copy branches 310 , a snapshot 300 does not consume a large amount of storage space. Generally speaking, a snapshot 300 provides storage usage information at a given moment in time.
  • FIG. 3C shows a schematic diagram illustrating what happens when data in a branch 310 are modified by a write command.
  • writes may only be performed on unused blocks. That is, a used block is not overwritten when its data are modified; instead, an unused block is allocated to contain the modified data.
  • modifying data in branch 310 E results in the creation of a new branch 311 containing the modified data.
  • Branch 311 is created on new, unused blocks.
  • the old branch 310 E remains in the storage device and is still identified by snapshot 300 .
  • Root inode 100 breaks its pointer to branch 310 E and now points to the new branch 311 . Because branch 310 E is still identified by snapshot 300 , its data blocks may be readily recovered if desired.
  • a snapshot 300 may be replaced by a new snapshot 300 from time to time to release old blocks, thereby making them available for new writes.
  • a consistency point count may be atomically increased every time a consistency point is established. For example, a consistency point count may be increased by one every time a snapshot 300 is created to establish a PCPI.
  • the PCPI (which is a snapshot 300 in this example) may be used to recreate the file system.
  • a consistency point count gives an indication of how up to date a file system is. The higher the consistency point count, the more up to date the file system. For example, a file system with a consistency point count of 7 is more up to date than a version of that file system with a consistency point count of 4 .
  • FIG. 4 there is shown a schematic diagram of a computing environment in accordance with an embodiment of the present invention.
  • one or more computers 401 (i.e., 401 A, 401 B, . . . ) are coupled to a filer 400 over a network 402.
  • a computer 401 may be any type of data processing device capable of sending write and read requests to filer 400 .
  • a computer 401 may be, without limitation, a personal computer, mini-computer, mainframe computer, portable computer, workstation, wireless terminal, personal digital assistant, cellular phone, etc.
  • Network 402 may include various types of communication networks such as wide area networks, local area networks, the Internet, etc. Other nodes on network 402 such as gateways, routers, bridges, firewalls, etc. are not depicted in FIG. 4 for clarity of illustration.
  • Filer 400 provides data storage services over network 402 .
  • filer 400 processes data read and write requests from a computer 401 .
  • filer 400 does not necessarily have to be accessible over network 402 .
  • a filer 400 may also be locally attached to an I/O channel of a computer 401 , for example.
  • filer 400 may include a network interface 410 , a storage operating system 450 , and a storage system 460 .
  • Storage operating system 450 may further include a file system 452 and a storage device manager 454 .
  • Storage system 460 may include one or more storage devices.
  • Components of filer 400 may be implemented in hardware, software, and/or firmware.
  • filer 400 may be a computer having one or more processors running computer-readable program code of storage operating system 450 in memory.
  • Software components of filer 400 may be stored on computer-readable storage media (e.g., memories, CD-ROMS, tapes, disks, ZIP drive , . . . ) or transmitted over wired or wireless link to a computer 401 .
  • Network interface 410 includes components for receiving storage-related service requests over network 402 .
  • Network interface 410 forwards a received service request to storage operating system 450 , which processes the request by reading data from storage system 460 in the case of a read request, or by writing data to storage system 460 in the case of a write request.
  • Data read from storage system 460 are transmitted over network 402 to the requesting computer 401 .
  • data to be written to storage system 460 are received over network 402 from a computer 401 .
  • FIG. 5 shows a logical diagram further illustrating the relationship between a file system 452 , a storage device manager 454 , and a storage system 460 in accordance with an embodiment of the present invention.
  • file system 452 and storage device manager 454 are implemented in software while storage system 460 is implemented in hardware.
  • file system 452 , storage device manager 454 , and storage system 460 may be implemented in hardware, software, and/or firmware.
  • data structures, tables, and maps may be employed to define the logical interconnection between file system 452 and storage device manager 454 .
  • storage device manager 454 and storage system 460 may communicate via a disk controller.
  • File system 452 manages files that are stored in storage system 460 .
  • file system 452 uses a file layout 150 (see FIG. 1) to organize files. That is, in one embodiment, file system 452 views files as a tree of blocks with a root inode as a base. File system 452 is capable of creating snapshots and consistency points in a manner previously described.
  • file system 452 organizes files in accordance with the Write-Anywhere-File Layout (WAFL) disclosed in the incorporated disclosures U.S. Pat. Nos. 6,289,356, 5,963,962, and 5,819,292.
  • the present invention is not so limited and may also be used with other file systems and layouts.
  • Storage device manager 454 manages the storage devices in storage system 460 .
  • Storage device manager 454 receives read and write commands from file system 452 and processes the commands by accordingly accessing storage system 460 .
  • Storage device manager 454 takes a block's logical address from file system 452 and translates that logical address to a physical address in one or more storage devices in storage system 460 .
  • storage device manager 454 manages storage devices in accordance with RAID level 4 , and accordingly stripes data blocks across storage devices and uses separate parity storage devices. It should be understood, however, that the present invention may also be used with data storage architectures other than RAID level 4 . For example, embodiments of the present invention may be used with other RAID levels, DASD's, and non-arrayed storage devices.
  • storage device manager 454 is logically organized as a tree of objects that include a volume 501 , a mirror 502 , plexes 503 (i.e., 503 A, 503 B), and RAID groups 504 - 507 .
  • implementing a mirror in a logical layer below file system 452 advantageously allows for a relatively transparent fail-over mechanism. For example, because file system 452 does not necessarily have to know of the existence of the mirror, a failing plex 503 does not have to be reported to file system 452. When a plex fails, file system 452 may still read and write data as before. This minimizes disruption to file system 452 and also simplifies its design.
  • volume 501 represents a file system.
  • Mirror 502 is one level below volume 501 and manages a pair of mirrored plexes 503 .
  • Plex 503 A is a duplicate of plex 503 B, and vice versa.
  • Each plex 503 represents a full copy of the file system of volume 501 .
  • consistency points are established from time to time for each plex 503 . As will be described further below, this allows storage device manager 454 to determine which plex is more up to date in the event both plexes go down and one of them needs to be resynchronized with the other.
  • each plex 503 is one or more RAID groups that have associated storage devices in storage system 460 .
  • storage devices 511 - 513 belong to RAID group 504
  • storage devices 514 - 516 belong to RAID group 505
  • storage devices 517 - 519 belong to RAID group 506
  • storage devices 520 - 522 belong to RAID group 507 .
  • RAID group 504 mirrors RAID group 506
  • RAID group 505 mirrors RAID group 507 .
  • storage devices 511 - 522 do not have to be housed in the same cabinet or facility.
  • storage devices 511 - 516 may be located in a data center in one city, while storage devices 517 - 522 may be in another data center in another city. This advantageously allows data to remain available even if a facility housing one set of storage devices is hit by a disaster (e.g., fire, earthquake).
  • storage devices 511 - 522 include hard disk drives communicating with storage device manager 454 over a Fiber Channel Arbitrated Loop link and configured in accordance with RAID level 4 .
  • Implementing a mirror with RAID level 4 significantly improves data availability.
  • Ordinarily, RAID level 4 does not include mirroring.
  • Thus, although a storage system according to RAID level 4 may survive a single disk failure, it may not be able to survive double disk failures.
  • Implementing a mirror with RAID level 4 improves data availability by providing backup copies in the event of a double disk failure in one of the RAID groups.
  • plex 503 A and plex 503 B mirror each other, data may be accessed through either plex 503 A or plex 503 B. This allows data to be accessed from a surviving plex in the event one of the plexes goes down and becomes inaccessible. This is particularly advantageous in mission-critical applications where a high degree of data availability is required. To further improve data availability, plex 503 A and plex 503 B may also utilize separate pieces of hardware to communicate with storage system 460 .
  • FIG. 6 shows a state diagram of mirror 502 in accordance with an embodiment of the present invention.
  • mirror 502 may be in normal (state 601 ), degraded (state 602 ), or resync (state 603 ) state.
  • Mirror 502 is in the normal state when both plexes are working and online.
  • data may be read from either plex.
  • FIG. 5 as an example, a block in storage device 511 may be read and passed through RAID group 504 , plex 503 A, mirror 502 , volume 501 , and then to file system 452 .
  • the same block may be read from storage device 517 and passed through RAID group 506 , plex 503 B, mirror 502 , volume 501 , and then to file system 452 .
  • data are written to both plexes in response to a write command from file system 452 .
  • the writing of data to both plexes may progress simultaneously.
  • Data may also be written to each plex sequentially.
  • write data received from file system 452 may be forwarded by mirror 502 to an available plex.
  • mirror 502 may then forward the same data to the other plex.
  • the data may first be stored through plex 503 A. Once plex 503 A sends a confirmation that the data were successfully written to storage system 460 , mirror 502 may then forward the same data to plex 503 B. In response, plex 503 B may initiate writing of the data to storage system 460 .
  • mirror 502 may go to the degraded state when either plex 503 A or plex 503 B goes down.
  • a plex 503 may go down for a variety of reasons including when its associated storage devices fail, are placed offline, etc.
  • a down plex loses synchronization with its mirror as time passes. The longer the down time, the more the down plex becomes outdated.
  • read and write commands are processed by the surviving plex.
  • plex 503 A assumes responsibility for processing all read and write commands.
  • having a mirrored pair of plexes allows storage device manager 454 to continue to operate even after a plex goes down.
  • mirror 502 goes to the resync state when the down plex (now a “previously down plex”) becomes operational again.
  • the previously down plex is resynchronized with the surviving plex.
  • information in the previously down plex is updated to match that in the surviving plex.
  • a technique for resynchronizing a previously down plex is later described in connection with FIG. 7.
  • resynchronization of a previously down plex with a surviving plex is performed by storage device manager 454 . Performing resynchronization in a logical layer below file system 452 allows the resynchronization process to be relatively transparent to file system 452 . This advantageously minimizes disruption to file system 452 .
  • data writes may only be performed on unused blocks. Because an unused block by definition has not been allocated in either plex while one of the plexes is down, data may be written to both plexes even if the mirror is still in the resync state. In other words, data may be written to the previously down plex even while it is still being resynchronized. As can be appreciated, the capability to write to the previously down plex while it is being resynchronized advantageously reduces the complexity of the resynchronization process.
  • mirror 502 From the resync state, mirror 502 returns to the normal state after the previously down plex is resynchronized with the surviving plex.
  • FIG. 7 shows a flow diagram of a method for resynchronizing a mirrored storage device in accordance with an embodiment of the present invention.
  • a snapshot arbitrarily referred to as a “base snapshot” is created by file system 452 at the request of storage device manager 454 .
  • the base snapshot like a snapshot 300 (see FIG. 3), includes information about files in a file system.
  • file system 452 periodically creates a new base snapshot (and deletes the old one) while both plexes remain accessible.
  • When one of the plexes goes down and becomes inaccessible, mirror 502 goes to the degraded state as indicated in action 706.
  • In action 708 to action 706, mirror 502 remains in the degraded state while one of the plexes remains down.
  • mirror 502 goes to the resync state when the down plex becomes operational.
  • another snapshot arbitrarily referred to as a “resync snapshot” is created by file system 452 at the request of storage device manager 454 .
  • the resync snapshot is just like a snapshot 300 except that it is created when mirror 502 is in the resync state. Because file system 452 , in one embodiment, only sees the most current plex, the resync snapshot is a copy of a root inode in the surviving plex.
  • In one embodiment, file system 452 determines the difference by reading both snapshots, identifying the blocks composing each snapshot, and finding blocks that are in the resync snapshot but not in the base snapshot.
  • the base snapshot is created at an earlier time when both plexes are up (normal state), whereas the resync snapshot is created at a later time when a plex that has gone down goes back up (resync state).
  • the difference between the base and resync snapshots represents data that were written to the surviving plex while mirror 502 is in the degraded state.
  • FIGS. 8A and 8B further illustrate action 714 .
  • The cells in FIGS. 8A and 8B represent storage locations of a storage device, with each cell representing one or more blocks.
  • Cell A1 holds a base snapshot 801.
  • Base snapshot 801 identifies blocks in cells A2, B3, and C1.
  • Cell C4 holds a resync snapshot 802 created while mirror 502 is in the resync state.
  • Resync snapshot 802 identifies blocks in cells A2, B3, and C1.
  • Resync snapshot 802 additionally identifies blocks in cell D2.
  • The blocks in cell D2 compose the difference between base snapshot 801 and resync snapshot 802.
  • the difference between the base and resync snapshots is copied to the formerly down plex.
  • this is performed by storage device manager 454 by copying to the formerly down plex the blocks that are in the resync snapshot but not in the base snapshot.
  • As shown in FIG. 8B, blocks in cell D2 are copied to the formerly down plex.
  • this speeds up the resynchronization process and thus shortens the period when only one plex is operational.
  • copying the difference to the formerly down plex consumes less processing time and I/O bandwidth.
  • In action 718, the resync snapshot is made the base snapshot.
  • In action 719, the previous base snapshot is deleted. Thereafter, mirror 502 goes to the normal state as indicated in action 720. The cycle then continues with file system 452 periodically creating base snapshots while both plexes remain accessible.
  • the flow diagram of FIG. 7 may also be used in the event both plexes go down.
  • the plex with the higher consistency point count is designated the surviving plex while the other plex is designated the down plex.
  • the down plex is resynchronized with the surviving plex as in FIG. 7.
  • For example, if plex 503 A and plex 503 B both go down and plex 503 A has a higher consistency point count than plex 503 B, plex 503 A is designated the surviving plex while plex 503 B is designated the down plex. Plex 503 B may then be resynchronized with plex 503 A as in actions 710, 712, 714, 716, 718, etc.
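  • The both-plexes-down case above can be sketched as follows, using the consistency point counts to pick the surviving plex; this is an illustrative simplification, and the function and variable names are not from this disclosure.

```python
# Sketch of the both-plexes-down case: the plex with the higher consistency
# point count is treated as the surviving (more up-to-date) plex, and the
# other plex is then resynchronized against it.
def choose_surviving_plex(cp_counts: dict) -> str:
    """cp_counts maps plex name -> consistency point count."""
    return max(cp_counts, key=cp_counts.get)

cp_counts = {"503A": 7, "503B": 4}
surviving = choose_surviving_plex(cp_counts)                   # "503A"
down = next(name for name in cp_counts if name != surviving)   # "503B"
print(f"resynchronize {down} against {surviving}")             # as in actions 710-718
```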

Abstract

In one embodiment, a first storage device and a second storage device form a mirror. When the first storage device loses synchronization with the second storage device, data present in the second storage device but not in the first storage device are identified. The identified data are then copied to the first storage device.

Description

    CROSS-REFERENCE TO RELATED APPLICATION
  • This application is a continuation-in-part of U.S. application Ser. No. 09/684,487 (Atty. Docket No. 103.1031/P00-1031), filed on 10/4/2000 by Srinivasan Viswanathan and Steven R. Kleiman, entitled “Recovery of File System Data in File Servers Mirrored File System Volumes”. The just mentioned U.S. application is incorporated herein by reference in its entirety.[0001]
  • BACKGROUND OF THE INVENTION
  • 1. Field Of The Invention [0002]
  • The present invention relates generally to computer systems, and more particularly but not exclusively to file systems and storage devices. [0003]
  • 2. Description Of The Background Art [0004]
  • Storage devices are employed to store data that are accessed by computer systems. Examples of storage devices include volatile and non-volatile memory, floppy drives, hard disk drives, tape drives, optical drives, etc. A storage device may be locally attached to an input/output (I/O) channel of a computer. For example, a hard disk drive may be connected to a computer's disk controller. A storage device may also be accessible over a network. Examples of such a storage device include network attached storage (NAS) and storage area network (SAN) devices. A storage device may be a single stand-alone component or be comprised of a system of storage devices such as in the case of Redundant Array Of Inexpensive Disks (RAID) groups and some Direct Access Storage Devices (DASD). [0005]
  • For mission-critical applications requiring high availability of stored data, various techniques for enhancing data reliability are typically employed. One such technique is to provide a “mirror” for each storage device. In a mirror arrangement, data are written to at least two storage devices. Thus, data may be read from either of the two storage devices so long as the two devices are operational and contain the same data. That is, either of the two storage devices may process read requests so long as the two devices are in synchronization. [0006]
  • When one of the storage devices fails, its mirror may be used to continue processing read and write requests. However, this also means that the failing storage device will be out of synchronization with its mirror. To avoid losing data in the event the mirror also fails, it is desirable to resynchronize the two storage devices as soon as the failing storage device becomes operational. Unfortunately, prior techniques for resynchronizing mirrored storage devices take a long time and consume a relatively large amount of processing time and I/O bandwidth. These not only increase the probability of data loss, but also result in performance degradation. [0007]
  • SUMMARY
  • In one embodiment, a first storage device and a second storage device form a mirrored pair of storage devices. When the first storage device loses synchronization with the second storage device, data present in the second storage device but not in the first storage device are identified. The identified data are then copied to the first storage device. [0008]
  • In one embodiment, a method of resynchronizing mirrored storage devices includes the act of creating a first storage usage information when both storage devices are accessible. When one of the storage devices goes down and then comes back up, a second storage usage information is created. A difference between the first storage usage information and the second storage usage information is determined and then used to resynchronize the previously down storage device with its mirror. [0009]
  • These and other features of the present invention will be readily apparent to persons of ordinary skill in the art upon reading the entirety of this disclosure, which includes the accompanying drawings and claims.[0010]
  • DESCRIPTION OF THE DRAWINGS
  • FIG. 1 shows a schematic diagram of an example file layout. [0011]
  • FIGS. 2A-2D show schematic diagrams of inode files in the file layout of FIG. 1. [0012]
  • FIGS. 3A-3C show schematic diagrams illustrating the creation of a snapshot in the file layout of FIG. 1. [0013]
  • FIG. 4 shows a schematic diagram of a computing environment in accordance with an embodiment of the present invention. [0014]
  • FIG. 5 shows a logical diagram illustrating the relationship between a file system, a storage device manager, and a storage system in accordance with an embodiment of the present invention. [0015]
  • FIG. 6 shows a state diagram of a mirror in accordance with an embodiment of the present invention. [0016]
  • FIG. 7 shows a flow diagram of a method of resynchronizing a mirrored storage device in accordance with an embodiment of the present invention. [0017]
  • FIGS. 8A and 8B show schematic diagrams further illustrating an action in the flow diagram of FIG. 7. [0018]
  • The use of the same reference label in different drawings indicates the same or like components.[0019]
  • DETAILED DESCRIPTION
  • In the present disclosure, numerous specific details are provided, such as examples of systems, components, and methods to provide a thorough understanding of embodiments of the invention. Persons of ordinary skill in the art will recognize, however, that the invention can be practiced without one or more of the specific details. In other instances, well-known details are not shown or described to avoid obscuring aspects of the invention. [0020]
  • Referring now to FIG. 1, there is shown a schematic diagram of an [0021] example file layout 150. File layout 150 may be adopted by a file system to organize files. Similar file layouts are also disclosed in the following commonly-assigned disclosures, which are incorporated herein by reference in their entirety: (a) U.S. Pat. No. 6,289,356, filed on Sep. 14, 1998; (b) U.S. Pat. No. 5,963,962, filed on Jun. 30, 1998; and (c) U.S. Pat. No. 5,819,292, filed on May 31, 1995. It should be understood, however, that the present invention may also be adapted for use with other file layouts.
  • As shown in FIG. 1, [0022] file layout 150 has a tree structure with a root inode 100 as a base. Root inode 100 includes multiple blocks for describing one or more inode files 110 (i.e., 110A, 110B, . . . ). Each inode file 110 contains information about a file in file layout 150. A file may comprise one or more blocks of data, with each block being a storage location in a storage device.
  • As will be explained below, an [0023] inode file 110 may contain data or point to blocks containing data. Thus, a file may be accessed by consulting root inode 100 to find the inode file 110 that contains or points to the file's data. Using FIG. 1 as an example, data file 122 is stored in one or more blocks pointed to by inode 110B; inode 110B is in turn identified by root inode 100.
  • [0024] File layout 150 also includes a block map file 120 and an inode map file 121. Block map file 120 identifies free (i.e., unused) blocks, while inode map file 121 identifies free inodes. Block map file 120 and inode map file 121 may be accessed just like any other file in file layout 150. In other words, block map file 120 and inode map file 121 may be stored in blocks pointed to by an inode file 110, which is identified by root inode 100.
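  • Block map file 120 is what an allocator would consult to find unused blocks. The sketch below models it as a simple per-block free flag; this encoding and the class and method names are illustrative assumptions, not details given in this disclosure.

```python
# Minimal free-block map sketch: True means the block is unused (free).
class BlockMap:
    def __init__(self, total_blocks: int):
        self.free = [True] * total_blocks

    def allocate(self) -> int:
        """Find an unused block, mark it used, and return its number."""
        for blk, is_free in enumerate(self.free):
            if is_free:
                self.free[blk] = False
                return blk
        raise RuntimeError("no free blocks")

    def release(self, blk: int) -> None:
        """Return a block to the free pool, e.g. after an old snapshot is deleted."""
        self.free[blk] = True

bmap = BlockMap(total_blocks=8)
first = bmap.allocate()     # block 0
second = bmap.allocate()    # block 1
bmap.release(first)         # block 0 becomes available again for a new write
assert bmap.allocate() == first
```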
  • In one embodiment, [0025] root inode 100 is stored in a predetermined location in a storage device. This facilitates finding root inode 100 upon system boot-up. Because block map file 120, inode map file 121, and inode files 110 may be found by consulting root inode 100 as described above, they may be stored anywhere in the storage device.
  • Referring to FIG. 2A, there is shown a schematic diagram of an [0026] inode file 110 identified by a root inode 100. An inode file 110 includes a block 111 for storing general inode information such as a file's size, owner, permissions, etc. An inode file 110 also includes one or more blocks 112 (i.e., 112A, 112B, . . . ). Depending on the size of the file, blocks 112 may contain the file's data or pointers to the file's data. In the example of FIG. 2A, the file is small enough to fit all of its data in blocks 112.
  • In one embodiment, an [0027] inode file 110 includes 16 blocks 112, with each block 112 accommodating 4 bytes (i.e., 32 bits). Thus, in the just mentioned embodiment, files having a size of 64 bytes (i.e., 4 bytes × 16) or less may be stored directly in an inode file 110.
  • FIG. 2B shows a schematic diagram of an [0028] inode file 110 that contains pointers in its blocks 112. In the example of FIG. 2B, a pointer in a block 112 points to a data block 210 (i.e., 210A, 210B, . . . ) containing data. This allows an inode file 110 to accommodate files that are too large to fit in the inode file itself. In one embodiment, each of 16 blocks 112 may point to a 4 KB (kilo-byte) data block 210. Thus, in the just mentioned embodiment, an inode file 110 may accommodate files having a size of 64 KB (i.e., 16 × 4 KB) or less.
  • FIG. 2C shows a schematic diagram of another [0029] inode file 110 that contains pointers in its blocks 112. Each of the blocks 112 points to indirect blocks 220 (i.e., 220A, 220B, . . . ), each of which has blocks that point to a data block 230 (i.e., 230A, 230B, . . . ) containing data. Pointing to an indirect block 220 allows an inode file 110 to accommodate larger files. In one embodiment, an inode file 110 has 16 blocks 112 that each point to an indirect block 220; each indirect block 220 in turn has 1024 blocks that each point to a 4 KB data block 230. Thus, in the just mentioned embodiment, an inode file 110 may accommodate files having a size of 64 MB (mega-bytes) (i.e., 16 × 1024 × 4 KB) or less.
  • As can be appreciated, an [0030] inode file 110 may have several levels of indirection to accommodate even larger files. For example, FIG. 2D shows a schematic diagram of an inode file 110 that points to double indirect blocks 240 (i.e., 240A, 240B, . . . ), which point to single indirect blocks 250 (i.e., 250A, 250B, . . . ), which in turn point to data blocks 260 (i.e., 260A, 260B, . . . ). In one embodiment, an inode file 110 has 16 blocks 112 that each points to a double indirect block 240 containing 1024 blocks; each block in a double indirect block 240 points to a single indirect block 250 that contains 1024 blocks; each block in a single indirect block 250 points to a 4 KB data block 260. Thus, in the just mentioned embodiment, an inode file 110 may accommodate files having a size of 64 GB (giga-bytes) (i.e., 16 × 1024 × 1024 × 4 KB) or less.
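  • The 64-byte, 64 KB, 64 MB, and 64 GB figures above follow directly from the stated fan-out. The short calculation below reproduces them under the example geometry of this embodiment (16 inode block pointers, 1024 pointers per indirect block, 4 KB data blocks); the function name is illustrative only.

```python
# Maximum file size reachable at each level of indirection, using the example
# geometry of this embodiment: 16 block pointers in the inode, 1024 pointers
# per indirect block, and 4 KB data blocks.
INODE_POINTERS = 16
POINTERS_PER_INDIRECT = 1024
DATA_BLOCK_BYTES = 4 * 1024

def max_file_size(levels_of_indirection: int) -> int:
    """Return the largest file (in bytes) an inode can describe."""
    if levels_of_indirection == 0:
        return INODE_POINTERS * 4      # data kept in the inode: 16 slots x 4 bytes
    data_blocks = INODE_POINTERS * POINTERS_PER_INDIRECT ** (levels_of_indirection - 1)
    return data_blocks * DATA_BLOCK_BYTES

for level, label in enumerate(["in the inode (FIG. 2A)", "pointers to data blocks (FIG. 2B)",
                               "single indirect (FIG. 2C)", "double indirect (FIG. 2D)"]):
    print(f"{label}: {max_file_size(level):,} bytes")
# in the inode: 64 bytes; FIG. 2B: 65,536 (64 KB);
# single indirect: 67,108,864 (64 MB); double indirect: 68,719,476,736 (64 GB)
```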
  • Referring now to FIG. 3A, there is shown a schematic diagram of a [0031] root inode 100 with one or more branches 310 (i.e., 310A, 310B , . . . ). FIG. 3A and the following FIGS. 3B and 3C do not show the details of each branch from a root inode 100 for clarity of illustration. Each branch 310 may include an inode file plus one or more levels of indirection to data blocks, if any.
  • FIG. 3B shows a schematic diagram of a [0032] snapshot 300 created by copying a root inode 100. It is to be noted that “Snapshot” is a trademark of Network Appliance, Inc. It is used for purposes of this disclosure to designate a persistent consistency point (CP) image. A persistent consistency point image (PCPI) is a point-in-time representation of the storage system, and more particularly, of the active file system, stored on a storage device (e.g., on disk) or in other persistent memory and having a name or other unique identifier that distinguishes it from other PCPIs taken at other points in time. A PCPI can also include other information (metadata) about the active file system at the particular point in time for which the image is taken. The terms “PCPI” and “snapshot” shall be used interchangeably through out this disclosure without derogation of Network Appliance's trademark rights.
  • A [0033] snapshot 300, being a copy of a root inode 100, identifies all blocks identified by the root inode 100 at the time snapshot 300 was created. Because a snapshot 300 identifies but does not copy branches 310, a snapshot 300 does not consume a large amount of storage space. Generally speaking, a snapshot 300 provides storage usage information at a given moment in time.
  • FIG. 3C shows a schematic diagram illustrating what happens when data in a [0034] branch 310 are modified by a write command. In one embodiment, writes may only be performed on unused blocks. That is, a used block is not overwritten when its data are modified; instead, an unused block is allocated to contain the modified data. Using FIG. 3C as an example, modifying data in branch 310E results in the creation of a new branch 311 containing the modified data. Branch 311 is created on new, unused blocks. The old branch 310E remains in the storage device and is still identified by snapshot 300. Root inode 100, on the other hand, breaks its pointer to branch 310E and now points to the new branch 311. Because branch 310E is still identified by snapshot 300, its data blocks may be readily recovered if desired.
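  • The write-anywhere behavior of FIG. 3C can be sketched with a toy model, shown below, in which the root inode is reduced to a mapping from branches to block numbers and a snapshot is simply a copy of that mapping; this is a simplified illustration, not the file system's actual data structures.

```python
# Toy write-anywhere model: the "root inode" is reduced to a dict mapping
# branch names to block numbers, and a snapshot is a copy of that dict.
class ToyVolume:
    def __init__(self):
        self.next_block = 0      # next never-used block number
        self.blocks = {}         # block number -> data
        self.root = {}           # active file system: branch name -> block number

    def write(self, branch, data):
        # A used block is never overwritten: allocate a fresh block,
        # store the modified data there, and repoint the root at it.
        blk = self.next_block
        self.next_block += 1
        self.blocks[blk] = data
        self.root[branch] = blk

    def snapshot(self):
        # Copying the root captures every block it currently identifies.
        return dict(self.root)

vol = ToyVolume()
vol.write("310E", "old data")
snap = vol.snapshot()                # like snapshot 300: still points at the old block
vol.write("310E", "modified data")   # like creating branch 311 on new, unused blocks
assert vol.blocks[snap["310E"]] == "old data"           # recoverable via the snapshot
assert vol.blocks[vol.root["310E"]] == "modified data"  # root now sees the new branch
```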
  • As data identified by [0035] root inode 100 are modified, the number of retained old blocks may start to consume a large amount of storage space. Thus, depending on the application, a snapshot 300 may be replaced by a new snapshot 300 from time to time to release old blocks, thereby making them available for new writes.
  • A consistency point count may be atomically increased every time a consistency point is established. For example, a consistency point count may be increased by one every time a [0036] snapshot 300 is created to establish a PCPI. When a file system becomes corrupted (e.g., root inode 100 lost information after an unclean shutdown), the PCPI (which is a snapshot 300 in this example) may be used to recreate the file system. As can be appreciated, a consistency point count gives an indication of how up to date a file system is. The higher the consistency point count, the more up to date the file system. For example, a file system with a consistency point count of 7 is more up to date than a version of that file system with a consistency point count of 4.
  • Turning now to FIG. 4, there is shown a schematic diagram of a computing environment in accordance with an embodiment of the present invention. In the example of FIG. 4, one or more computers [0037] 401 (i.e., 401A, 401B, . . . . ) are coupled to a filer 400 over a network 402. A computer 401 may be any type of data processing device capable of sending write and read requests to filer 400. A computer 401 may be, without limitation, a personal computer, mini-computer, mainframe computer, portable computer, workstation, wireless terminal, personal digital assistant, cellular phone, etc.
  • [0038] Network 402 may include various types of communication networks such as wide area networks, local area networks, the Internet, etc. Other nodes on network 402 such as gateways, routers, bridges, firewalls, etc. are not depicted in FIG. 4 for clarity of illustration.
  • [0039] Filer 400 provides data storage services over network 402. In one embodiment, filer 400 processes data read and write requests from a computer 401. Of course, filer 400 does not necessarily have to be accessible over network 402. Depending on the application, a filer 400 may also be locally attached to an I/O channel of a computer 401, for example.
  • As shown in FIG. 4, [0040] filer 400 may include a network interface 410, a storage operating system 450, and a storage system 460. Storage operating system 450 may further include a file system 452 and a storage device manager 454. Storage system 460 may include one or more storage devices. Components of filer 400 may be implemented in hardware, software, and/or firmware. For example, filer 400 may be a computer having one or more processors running computer-readable program code of storage operating system 450 in memory. Software components of filer 400 may be stored on computer-readable storage media (e.g., memories, CD-ROMS, tapes, disks, ZIP drive , . . . ) or transmitted over wired or wireless link to a computer 401.
  • [0041] Network interface 410 includes components for receiving storage-related service requests over network 402. Network interface 410 forwards a received service request to storage operating system 450, which processes the request by reading data from storage system 460 in the case of a read request, or by writing data to storage system 460 in the case of a write request. Data read from storage system 460 are transmitted over network 402 to the requesting computer 401. Similarly, data to be written to storage system 460 are received over network 402 from a computer 401.
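  • A rough sketch of this request path follows; the class and method names are invented for illustration, and the storage system is reduced to an in-memory block map.

```python
# Illustrative dispatch of storage-service requests inside the filer.
from dataclasses import dataclass

@dataclass
class Request:
    op: str            # "read" or "write"
    block: int
    data: bytes = b""

class StorageOperatingSystem:
    """Stands in for storage operating system 450; storage is a dict of blocks."""
    def __init__(self, storage_system):
        self.storage = storage_system

    def handle(self, req: Request) -> bytes:
        if req.op == "read":
            return self.storage.get(req.block, b"")   # returned over network 402
        if req.op == "write":
            self.storage[req.block] = req.data        # data arrived over network 402
            return b"ok"
        raise ValueError(f"unsupported operation: {req.op}")

storage_os = StorageOperatingSystem({})
storage_os.handle(Request("write", 7, b"hello"))
assert storage_os.handle(Request("read", 7)) == b"hello"
```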
  • FIG. 5 shows a logical diagram further illustrating the relationship between a [0042] file system 452, a storage device manager 454, and a storage system 460 in accordance with an embodiment of the present invention. In one embodiment, file system 452 and storage device manager 454 are implemented in software while storage system 460 is implemented in hardware. As can be appreciated, however, file system 452, storage device manager 454, and storage system 460 may be implemented in hardware, software, and/or firmware. For example, data structures, tables, and maps may be employed to define the logical interconnection between file system 452 and storage device manager 454. As another example, storage device manager 454 and storage system 460 may communicate via a disk controller.
  • [0043] File system 452 manages files that are stored in storage system 460. In one embodiment, file system 452 uses a file layout 150 (see FIG. 1) to organize files. That is, in one embodiment, file system 452 views files as a tree of blocks with a root inode as a base. File system 452 is capable of creating snapshots and consistency points in a manner previously described. In one embodiment, file system 452 organizes files in accordance with the Write-Anywhere-File Layout (WAFL) disclosed in the incorporated disclosures U.S. Pat. Nos. 6,289,356, 5,963,962, and 5,819,292. However, the present invention is not so limited and may also be used with other file systems and layouts.
  • [0044] Storage device manager 454 manages the storage devices in storage system 460. Storage device manager 454 receives read and write commands from file system 452 and processes the commands by accordingly accessing storage system 460. Storage device manager 454 takes a block's logical address from file system 452 and translates that logical address to a physical address in one or more storage devices in storage system 460. In one embodiment, storage device manager 454 manages storage devices in accordance with RAID level 4, and accordingly stripes data blocks across storage devices and uses separate parity storage devices. It should be understood, however, that the present invention may also be used with data storage architectures other than RAID level 4. For example, embodiments of the present invention may be used with other RAID levels, DASD's, and non-arrayed storage devices.
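  • A hedged sketch of such a logical-to-physical translation for a RAID level 4 layout is shown below. It assumes simple round-robin striping of whole blocks across the data disks with one dedicated parity disk; the actual stripe geometry used by storage device manager 454 is not specified here, and the function names are illustrative.

```python
# Illustrative RAID level 4 address translation: data blocks are striped
# round-robin across the data disks; each stripe's parity lives on a single
# dedicated parity disk.
from typing import Tuple

def raid4_translate(logical_block: int, data_disks: int) -> Tuple[int, int]:
    """Map a logical block number to (data disk index, block offset on that disk)."""
    stripe = logical_block // data_disks    # which stripe (row) the block falls in
    disk = logical_block % data_disks       # which data disk within that stripe
    return disk, stripe

def parity_location(logical_block: int, data_disks: int) -> Tuple[int, int]:
    """Parity for the block's stripe sits at the same offset on the parity disk."""
    parity_disk = data_disks                # the parity disk follows the data disks
    return parity_disk, logical_block // data_disks

# Example: a three-device RAID group (say, devices 511-513) treated as
# two data disks plus one parity disk -- an assumption for illustration.
print(raid4_translate(5, data_disks=2))    # -> (1, 2): data disk 1, offset 2
print(parity_location(5, data_disks=2))    # -> (2, 2): parity disk, offset 2
```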
  • As shown in FIG. 5, [0045] storage device manager 454 is logically organized as a tree of objects that include a volume 501, a mirror 502, plexes 503 (i.e., 503A, 503B), and RAID groups 504-507. It is to be noted that implementing a mirror in a logical layer below file system 452 advantageously allows for a relatively transparent fail-over mechanism. For example, because file system 452 does not necessarily have to know of the existence of the mirror, a failing plex 503 does not have to be reported to file system 452. When a plex fails, file system 452 may still read and write data as before. This minimizes disruption to file system 452 and also simplifies its design.
  • Still referring to FIG. 5, [0046] volume 501 represents a file system. Mirror 502 is one level below volume 501 and manages a pair of mirrored plexes 503. Plex 503A is a duplicate of plex 503B, and vice versa. Each plex 503 represents a full copy of the file system of volume 501. In one embodiment, consistency points are established from time to time for each plex 503. As will be described further below, this allows storage device manager 454 to determine which plex is more up to date in the event both plexes go down and one of them needs to be resynchronized with the other.
  • Below each plex [0047] 503 is one or more RAID groups that have associated storage devices in storage system 460. In the example of FIG. 5, storage devices 511-513 belong to RAID group 504, storage devices 514-516 belong to RAID group 505, storage devices 517-519 belong to RAID group 506, and storage devices 520-522 belong to RAID group 507. RAID group 504 mirrors RAID group 506, while RAID group 505 mirrors RAID group 507. As can be appreciated, storage devices 511-522 do not have to be housed in the same cabinet or facility. For example, storage devices 511-516 may be located in a data center in one city, while storage devices 517-522 may be in another data center in another city. This advantageously allows data to remain available even if a facility housing one set of storage devices is hit by a disaster (e.g., fire, earthquake).
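  • The object tree of FIG. 5 can be summarized structurally as in the sketch below; the dataclass names mirror the figure's labels but are otherwise illustrative.

```python
# Structural sketch of the object tree in FIG. 5.
from dataclasses import dataclass
from typing import List

@dataclass
class RaidGroup:
    name: str
    devices: List[int]        # storage device reference numerals

@dataclass
class Plex:
    name: str
    raid_groups: List[RaidGroup]

@dataclass
class Mirror:
    plexes: List[Plex]        # a mirrored pair of plexes

@dataclass
class Volume:
    name: str
    mirror: Mirror

plex_a = Plex("503A", [RaidGroup("504", [511, 512, 513]),
                       RaidGroup("505", [514, 515, 516])])
plex_b = Plex("503B", [RaidGroup("506", [517, 518, 519]),
                       RaidGroup("507", [520, 521, 522])])
volume = Volume("501", Mirror([plex_a, plex_b]))   # 504 mirrors 506, 505 mirrors 507
```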
  • In one embodiment, storage devices [0048] 511-522 include hard disk drives communicating with storage device manager 454 over a Fiber Channel Arbitrated Loop link and configured in accordance with RAID level 4. Implementing a mirror with RAID level 4 significantly improves data availability. Ordinarily, RAID level 4 does not include mirroring. Thus, although a storage system according to RAID level 4 may survive a single disk failure, it may not be able to survive double disk failures. Implementing a mirror with RAID level 4 improves data availability by providing backup copies in the event of a double disk failure in one of the RAID groups.
  • Because [0049] plex 503A and plex 503B mirror each other, data may be accessed through either plex 503A or plex 503B. This allows data to be accessed from a surviving plex in the event one of the plexes goes down and becomes inaccessible. This is particularly advantageous in mission-critical applications where a high degree of data availability is required. To further improve data availability, plex 503A and plex 503B may also utilize separate pieces of hardware to communicate with storage system 460.
  • FIG. 6 shows a state diagram of [0050] mirror 502 in accordance with an embodiment of the present invention. At any given moment, mirror 502 may be in normal (state 601), degraded (state 602), or resync (state 603) state. Mirror 502 is in the normal state when both plexes are working and online. In the normal state, data may be read from either plex. Using FIG. 5 as an example, a block in storage device 511 may be read and passed through RAID group 504, plex 503A, mirror 502, volume 501, and then to file system 452. Alternatively, the same block may be read from storage device 517 and passed through RAID group 506, plex 503B, mirror 502, volume 501, and then to file system 452.
  • In the normal state, data are written to both plexes in response to a write command from [0051] file system 452. The writing of data to both plexes may progress simultaneously. Data may also be written to each plex sequentially. For example, write data received from file system 452 may be forwarded by mirror 502 to an available plex. After the available plex confirms that the data were successfully written to storage system 460, mirror 502 may then forward the same data to the other plex. For example, the data may first be stored through plex 503A. Once plex 503A sends a confirmation that the data were successfully written to storage system 460, mirror 502 may then forward the same data to plex 503B. In response, plex 503B may initiate writing of the data to storage system 460.
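  • The sequential write path described above can be sketched as follows, assuming each plex exposes a write call that returns a confirmation; these class and method names are invented for illustration.

```python
# Illustrative sequential mirrored write: forward the data to one plex, wait
# for its confirmation, then forward the same data to the other plex.
class PlexStub:
    def __init__(self, name: str):
        self.name = name
        self.blocks = {}

    def write(self, block: int, data: bytes) -> bool:
        self.blocks[block] = data
        return True            # confirmation that the data reached storage

def mirrored_write(plex_a: PlexStub, plex_b: PlexStub, block: int, data: bytes) -> bool:
    if not plex_a.write(block, data):   # first store the data through one plex
        return False
    return plex_b.write(block, data)    # then forward the same data to the other plex

a, b = PlexStub("503A"), PlexStub("503B")
assert mirrored_write(a, b, block=42, data=b"payload")
assert a.blocks[42] == b.blocks[42] == b"payload"
```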
  • From the normal state, [0052] mirror 502 may go to the degraded state when either plex 503A or plex 503B goes down. A plex 503 may go down for a variety of reasons including when its associated storage devices fail, are placed offline, etc. A down plex loses synchronization with its mirror as time passes. The longer the down time, the more the down plex becomes outdated.
  • In the degraded state, read and write commands are processed by the surviving plex. For example, when plex [0053] 503B goes down and is survived by plex 503A, plex 503A assumes responsibility for processing all read and write commands. As can be appreciated, having a mirrored pair of plexes allows storage device manager 454 to continue to operate even after a plex goes down.
  • [0054] From the degraded state, mirror 502 goes to the resync state when the down plex (now a "previously down plex") becomes operational again. In the resync state, the previously down plex is resynchronized with the surviving plex. In other words, during the resync state, information in the previously down plex is updated to match that in the surviving plex. A technique for resynchronizing a previously down plex is later described in connection with FIG. 7. In one embodiment, resynchronization of a previously down plex with a surviving plex is performed by storage device manager 454. Performing resynchronization in a logical layer below file system 452 allows the resynchronization process to be relatively transparent to file system 452. This advantageously minimizes disruption to file system 452.
  • [0055] In the resync state, data are read from the surviving plex because the previously down plex may not yet have the most current data.
  • [0056] As mentioned, in one embodiment, data writes may be performed only on unused blocks. Because an unused block by definition has not been allocated in either plex while one of the plexes is down, data may be written to both plexes even if the mirror is still in the resync state. In other words, data may be written to the previously down plex even while it is still being resynchronized. As can be appreciated, the capability to write to the previously down plex while it is being resynchronized advantageously reduces the complexity of the resynchronization process.
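The reason such writes are safe can be shown with a tiny sketch: new data go only to block numbers that are free in both plexes, so writing them to the plex under resynchronization cannot collide with anything the resync still has to copy. The free-block set and dictionaries below are hypothetical simplifications, not the patent's data structures.

    def write_during_resync(free_blocks, surviving_plex, resyncing_plex, data):
        """Allocate a block that is unused in both plexes and write it to both."""
        block_number = free_blocks.pop()        # unused in either plex by definition
        surviving_plex[block_number] = data
        resyncing_plex[block_number] = data     # safe even while resync is in progress
        return block_number

    free_blocks = {100, 101, 102}
    surviving, resyncing = {}, {}               # block number -> data
    n = write_during_resync(free_blocks, surviving, resyncing, b"new data")
    assert surviving[n] == resyncing[n]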
  • [0057] From the resync state, mirror 502 returns to the normal state after the previously down plex is resynchronized with the surviving plex.
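The transitions of FIG. 6, as described in the preceding paragraphs, amount to a three-state machine. The sketch below is illustrative only; the event names are invented for the example.

    NORMAL, DEGRADED, RESYNC = "normal", "degraded", "resync"   # states 601, 602, 603

    TRANSITIONS = {
        (NORMAL,   "plex_goes_down"):   DEGRADED,   # a plex fails or is taken offline
        (DEGRADED, "plex_back_online"): RESYNC,     # the down plex becomes operational
        (RESYNC,   "resync_complete"):  NORMAL,     # the difference has been copied over
    }

    def next_state(state, event):
        # Events that do not apply in the current state leave the mirror where it is.
        return TRANSITIONS.get((state, event), state)

    state = NORMAL
    for event in ("plex_goes_down", "plex_back_online", "resync_complete"):
        state = next_state(state, event)
    assert state == NORMAL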
  • [0058] FIG. 7 shows a flow diagram of a method for resynchronizing a mirrored storage device in accordance with an embodiment of the present invention. In action 702, a snapshot, arbitrarily referred to as a "base snapshot," is created by file system 452 at the request of storage device manager 454. The base snapshot, like a snapshot 300 (see FIG. 3), includes information about files in a file system.
  • [0059] From action 704 to action 702, at the request of storage device manager 454, file system 452 periodically creates a new base snapshot (and deletes the old one) while both plexes remain accessible. When one of the plexes goes down and becomes inaccessible, mirror 502 goes to the degraded state, as indicated in action 706. From action 708 to action 706, mirror 502 remains in the degraded state while that plex remains down.
  • [0060] From action 708 to action 710, mirror 502 goes to the resync state when the down plex becomes operational. In action 712, another snapshot, arbitrarily referred to as a "resync snapshot," is created by file system 452 at the request of storage device manager 454. The resync snapshot is just like a snapshot 300 except that it is created while mirror 502 is in the resync state. Because file system 452, in one embodiment, sees only the most current plex, the resync snapshot is a copy of a root inode in the surviving plex.
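Since a snapshot here is essentially a preserved copy of a root inode, the blocks "in" a snapshot are the blocks reachable from that root. The following sketch walks a deliberately simplified, hypothetical inode structure; the actual block layout of a snapshot 300 is not shown in this excerpt.

    def blocks_in_snapshot(root_inode):
        """Collect every block number reachable from a snapshot's root inode."""
        blocks, stack = set(), [root_inode]
        while stack:
            inode = stack.pop()
            blocks.update(inode.get("data_blocks", ()))
            stack.extend(inode.get("children", ()))    # indirect blocks / child inodes
        return blocks

    resync_root = {"data_blocks": [7], "children": [{"data_blocks": [12, 19]}]}
    assert blocks_in_snapshot(resync_root) == {7, 12, 19}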
  • [0061] In action 714, the difference between the base snapshot and the resync snapshot is determined. In one embodiment, file system 452 determines the difference by:
  • [0062] (a) reading the base snapshot and the resync snapshot;
  • [0063] (b) identifying blocks composing the base snapshot and blocks composing the resync snapshot; and
  • [0064] (c) finding blocks that are in the resync snapshot but not in the base snapshot. Note that the base snapshot is created at an earlier time, when both plexes are up (normal state), whereas the resync snapshot is created at a later time, when a plex that has gone down comes back up (resync state). Thus, the difference between the base and resync snapshots represents data that were written to the surviving plex while mirror 502 was in the degraded state.
  • [0065] FIGS. 8A and 8B further illustrate action 714. Each of FIGS. 8A and 8B represents storage locations of a storage device, with each cell representing one or more blocks. In FIG. 8A, cell A1 holds a base snapshot 801. Base snapshot 801 identifies blocks in cells A2, B3, and C1. In FIG. 8B, cell C4 holds a resync snapshot 802, created while mirror 502 is in the resync state. Like base snapshot 801, resync snapshot 802 identifies blocks in cells A2, B3, and C1; it additionally identifies blocks in cell D2. Thus, the blocks in cell D2 compose the difference between base snapshot 801 and resync snapshot 802.
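Reduced to block sets, action 714 is a set difference. The sketch below reuses the cells of FIGS. 8A and 8B as stand-in block identifiers; the function name is hypothetical.

    def snapshot_difference(base_blocks, resync_blocks):
        """Blocks identified by the resync snapshot but not by the base snapshot,
        i.e., data written to the surviving plex while the mirror was degraded."""
        return resync_blocks - base_blocks

    base_snapshot_801   = {"A2", "B3", "C1"}
    resync_snapshot_802 = {"A2", "B3", "C1", "D2"}
    assert snapshot_difference(base_snapshot_801, resync_snapshot_802) == {"D2"}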
  • [0066] Continuing in action 716 of FIG. 7, the difference between the base and resync snapshots is copied to the formerly down plex. In one embodiment, this is performed by storage device manager 454, which copies to the formerly down plex the blocks that are in the resync snapshot but not in the base snapshot. Using FIG. 8B as an example, the blocks in cell D2 are copied to the formerly down plex. Advantageously, copying only the difference speeds up the resynchronization process and thus shortens the period during which only one plex holds fully current data. Compared with prior techniques in which all blocks of the surviving plex are copied to a formerly down plex, copying only the difference also consumes less processing time and I/O bandwidth.
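Action 716 can then be sketched as copying just those difference blocks from the surviving plex to the formerly down plex, rather than every block. In the illustrative code below, dictionaries stand in for the plexes' block storage.

    def copy_difference(surviving_plex, formerly_down_plex, difference_blocks):
        """Bring the formerly down plex up to date by copying only the difference."""
        for block_number in difference_blocks:
            formerly_down_plex[block_number] = surviving_plex[block_number]

    surviving     = {"A2": b"a", "B3": b"b", "C1": b"c", "D2": b"d"}
    formerly_down = {"A2": b"a", "B3": b"b", "C1": b"c"}   # missed D2 while down
    copy_difference(surviving, formerly_down, {"D2"})
    assert formerly_down == surviving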
  • [0067] In action 718, the resync snapshot is made the new base snapshot. In action 719, the previous base snapshot is deleted. Thereafter, mirror 502 returns to the normal state, as indicated in action 720. The cycle then continues, with file system 452 periodically creating base snapshots while both plexes remain accessible.
  • [0068] It is to be noted that the flow diagram of FIG. 7 may also be used in the event both plexes go down. In that case, the plex with the higher consistency point count is designated the surviving plex, while the other plex is designated the down plex. Thereafter, the down plex is resynchronized with the surviving plex as in FIG. 7. For example, if plexes 503A and 503B both go down and plex 503A has a higher consistency point count than plex 503B, plex 503A is designated the surviving plex while plex 503B is designated the down plex. When both plexes become operational again, plex 503B may then be resynchronized with plex 503A as in actions 710, 712, 714, 716, 718, etc.
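When both plexes have gone down, picking the surviving plex reduces to comparing consistency point counts. A small illustrative sketch follows; the counts and field names are made up.

    def designate_plexes(plex_a, plex_b):
        """Return (surviving, down): the plex with the higher consistency point
        count survives; the other is resynchronized against it as in FIG. 7."""
        if plex_a["cp_count"] >= plex_b["cp_count"]:    # equal counts are already in sync
            return plex_a, plex_b
        return plex_b, plex_a

    plex_503a = {"name": "503A", "cp_count": 1012}
    plex_503b = {"name": "503B", "cp_count": 1007}
    surviving, down = designate_plexes(plex_503a, plex_503b)
    assert (surviving["name"], down["name"]) == ("503A", "503B")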
  • [0069] Improved techniques for resynchronizing mirrored storage devices have been disclosed. While specific embodiments have been provided, it is to be understood that these embodiments are for illustration purposes and are not limiting. Many additional embodiments will be apparent to persons of ordinary skill in the art reading this disclosure. Thus, the present invention is limited only by the following claims.

Claims (36)

What is claimed is:
1. A method of resynchronizing mirrored storage devices, the method comprising:
mirroring a first storage apparatus with a second storage apparatus;
determining a difference between data stored in the second storage apparatus and data stored in the first storage apparatus; and
in the event the first storage apparatus loses synchronization with the second storage apparatus, resynchronizing the first storage apparatus by copying the difference to the first storage apparatus.
2. The method of claim 1 further comprising:
servicing data write requests by writing data to the first storage apparatus while resynchronizing the first storage apparatus.
3. The method of claim 1 further comprising:
servicing data read requests by reading data from the second storage apparatus while resynchronizing the first storage apparatus.
4. The method of claim 1 wherein determining the difference between data stored in the second storage apparatus and data stored in the first storage apparatus further comprises:
reading a first storage usage information and a second storage usage information;
identifying data in the first storage usage information and data in the second storage usage information; and
finding blocks that correspond to data that are in the second storage usage information but not in the first storage usage information.
5. The method of claim 1 wherein the first storage apparatus and the second storage apparatus are configured in accordance with RAID level 4.
6. A system comprising:
a first storage device and a second storage device forming a mirrored pair of storage devices;
a storage device manager configured to manage the first storage device and the second storage device; and
wherein the storage device manager is configured to resynchronize the second storage device with data blocks allocated in the first storage device but not in the second storage device.
7. The system of claim 6 further comprising:
a file system at a logical layer above the storage device manager and configured to send storage-related commands to the storage device manager.
8. The system of claim 7 further comprising:
a network interface in communication with the file system, the network interface being configured to receive storage-related requests over a computer network.
9. The system of claim 6 wherein the first storage device and the second storage device are configured in accordance with RAID level 4.
10. The system of claim 6 wherein the storage device manager is configured to service storage-related requests while resynchronizing the second storage device.
11. A method of resynchronizing mirrored storage devices, the method comprising:
creating a first storage usage information at a first moment and a second storage usage information at a second moment;
determining a difference between the first storage usage information and the second storage usage information; and
based on the difference, resynchronizing a first storage device that forms a mirror with a second storage device.
12. The method of claim 11 further comprising:
servicing data write requests by writing data to the first storage device while resynchronizing the first storage device.
13. The method of claim 11 further comprising:
servicing data read requests by reading data from the second storage device while resynchronizing the first storage device.
14. The method of claim 11 wherein determining the difference between the first storage usage information and the second storage usage information further comprises:
reading the first storage usage information and the second storage usage information;
identifying blocks in the first storage usage information and blocks in the second storage usage information; and
finding blocks that are in the second storage usage information but not in the first storage usage information.
15. The method of claim 11 wherein the mirror is implemented in a logical layer below a file system.
16. The method of claim 11 wherein the first storage device and the second storage device are configured in accordance with RAID level 4.
17. The method of claim 11 further comprising:
going from a normal state to a degraded state when the first storage device becomes inaccessible;
going from the degraded state to a resync state when resynchronizing the first storage device; and
going from the resync state to the normal state after resynchronizing the first storage device.
18. The method of claim 17 further comprising:
writing new data to the first storage device while in the resync state.
19. The method of claim 17 further comprising:
reading data from the second storage device while in the resync state.
20. The method of claim 17 wherein the first storage usage information is created while in the normal state and the second storage usage information is created while in the resync state.
21. A computer-readable storage medium comprising:
computer-readable program code for creating a first storage usage information and a second storage usage information;
computer-readable program code for determining a difference between the first storage usage information and the second storage usage information; and
computer-readable program code for resynchronizing a previously down storage device with another storage device based on the difference.
22. A method of resynchronizing a storage device, the method comprising:
creating a first storage usage information when a first storage device and a second storage device that form a mirror are both accessible;
creating a second storage usage information after the first storage device goes down and comes back up;
determining a difference between the first storage usage information and the second storage usage information;
resynchronizing the first storage device with the second storage device based on the difference; and
servicing data write requests by writing data to the first storage device while resynchronizing the first storage device.
23. The method of claim 22 further comprising:
servicing data read requests by reading data from the second storage device while resynchronizing the first storage device.
24. The method of claim 22 wherein the first storage device and the second storage device are configured in accordance with RAID level 4.
25. A method of resynchronizing mirrored storage devices, the method comprising:
keeping a mirror in a normal state while a first storage device and a second storage device of the mirror are both accessible;
transitioning the mirror from the normal state to a degraded state when the second storage device becomes inaccessible;
transitioning the mirror from the degraded state to a resync state when the second storage device becomes accessible;
determining a difference between data stored in the first storage device and data stored in the second storage device; and
transitioning the mirror from the resync state to the normal state after the difference is copied to the second storage device.
26. The method of claim 25 wherein determining the difference between data stored in the first storage device and data stored in the second storage device comprises:
identifying data blocks in the first storage device that are not in the second storage device.
27. The method of claim 25 wherein determining the difference between data stored in the first storage device and data stored in the second storage device comprises:
identifying data blocks stored in the first storage device and the second storage device while the mirror is in the normal state to create a first storage usage information;
identifying data blocks stored in the first storage device while the mirror is in the resync state to create a second storage usage information; and
determining a difference between the first storage usage information and the second storage usage information.
28. The method of claim 25 further comprising:
in response to a write command, writing data to the second storage device while the mirror is in the resync state.
29. A system for providing data storage services over a computer network, the system comprising:
a file system;
a storage device manager configured to service data access requests from the file system, the storage device manager configured to form a mirror with a first storage device and a second storage device; and
wherein the storage device manager is configured to resynchronize the second storage device with data determined to be in the first storage device but not in the second storage device.
30. The system of claim 29 wherein the first storage device and the second storage device are configured in accordance with RAID level 4.
31. The system of claim 29 wherein the first storage device and the second storage device are not housed in the same facility.
32. A method of resynchronizing mirrored storage devices, the method comprising:
mirroring a first group of storage devices with a second group of storage devices;
determining a difference between data stored in the second group of storage devices and data stored in the first group of storage devices; and
in the event the first group of storage devices loses synchronization with the second group of storage devices, resynchronizing the first group of storage devices by copying the difference to the first group of storage devices.
33. The method of claim 32 further comprising:
servicing data write requests by writing data to the first group of storage devices while resynchronizing the first group of storage devices.
34. The method of claim 32 further comprising:
servicing data read requests by reading data from the second group of storage devices while resynchronizing the first group of storage devices.
35. The method of claim 32 wherein determining the difference between data stored in the second group of storage devices and data stored in the first group of storage devices further comprises:
reading a first storage usage information and a second storage usage information;
identifying data in the first storage usage information and data in the second storage usage information; and
finding blocks that correspond to data that are in the second storage usage information but not in the first storage usage information.
36. The method of claim 32 wherein the first group of storage devices and the second group of storage devices are configured in accordance with RAID level 4.
US10/154,414 2000-10-04 2002-05-23 Resynchronization of mirrored storage devices Abandoned US20020194529A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US10/154,414 US20020194529A1 (en) 2000-10-04 2002-05-23 Resynchronization of mirrored storage devices
US10/225,453 US7143249B2 (en) 2000-10-04 2002-08-19 Resynchronization of mirrored storage devices

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US09/684,487 US6654912B1 (en) 2000-10-04 2000-10-04 Recovery of file system data in file servers mirrored file system volumes
US10/154,414 US20020194529A1 (en) 2000-10-04 2002-05-23 Resynchronization of mirrored storage devices

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US09/684,487 Continuation-In-Part US6654912B1 (en) 2000-10-04 2000-10-04 Recovery of file system data in file servers mirrored file system volumes

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US10/225,453 Continuation-In-Part US7143249B2 (en) 2000-10-04 2002-08-19 Resynchronization of mirrored storage devices

Publications (1)

Publication Number Publication Date
US20020194529A1 true US20020194529A1 (en) 2002-12-19

Family

ID=24748237

Family Applications (3)

Application Number Title Priority Date Filing Date
US09/684,487 Expired - Lifetime US6654912B1 (en) 2000-10-04 2000-10-04 Recovery of file system data in file servers mirrored file system volumes
US10/154,414 Abandoned US20020194529A1 (en) 2000-10-04 2002-05-23 Resynchronization of mirrored storage devices
US10/719,699 Expired - Fee Related US7096379B2 (en) 2000-10-04 2003-11-21 Recovery of file system data in file servers mirrored file system volumes

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US09/684,487 Expired - Lifetime US6654912B1 (en) 2000-10-04 2000-10-04 Recovery of file system data in file servers mirrored file system volumes

Family Applications After (1)

Application Number Title Priority Date Filing Date
US10/719,699 Expired - Fee Related US7096379B2 (en) 2000-10-04 2003-11-21 Recovery of file system data in file servers mirrored file system volumes

Country Status (4)

Country Link
US (3) US6654912B1 (en)
EP (1) EP1325415B1 (en)
DE (1) DE60112462T2 (en)
WO (1) WO2002029572A2 (en)

Families Citing this family (112)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6138126A (en) 1995-05-31 2000-10-24 Network Appliance, Inc. Method for allocating files in a file system integrated with a raid disk sub-system
US6516351B2 (en) 1997-12-05 2003-02-04 Network Appliance, Inc. Enforcing uniform file-locking for diverse file-locking protocols
US6119244A (en) 1998-08-25 2000-09-12 Network Appliance, Inc. Coordinating persistent status information with multiple file servers
US7406484B1 (en) 2000-09-12 2008-07-29 Tbrix, Inc. Storage allocation in a distributed segmented file system
US6782389B1 (en) * 2000-09-12 2004-08-24 Ibrix, Inc. Distributing files across multiple, permissibly heterogeneous, storage devices
US20040236798A1 (en) * 2001-09-11 2004-11-25 Sudhir Srinivasan Migration of control in a distributed segmented file system
US20060288080A1 (en) * 2000-09-12 2006-12-21 Ibrix, Inc. Balanced computer architecture
US8935307B1 (en) 2000-09-12 2015-01-13 Hewlett-Packard Development Company, L.P. Independent data access in a segmented file system
US7836017B1 (en) 2000-09-12 2010-11-16 Hewlett-Packard Development Company, L.P. File replication in a distributed segmented file system
US6654912B1 (en) * 2000-10-04 2003-11-25 Network Appliance, Inc. Recovery of file system data in file servers mirrored file system volumes
US7143249B2 (en) * 2000-10-04 2006-11-28 Network Appliance, Inc. Resynchronization of mirrored storage devices
US6728735B1 (en) * 2001-03-12 2004-04-27 Network Appliance, Inc. Restartable dump that produces a consistent filesystem on tapes
US8010558B2 (en) 2001-06-05 2011-08-30 Silicon Graphics International Relocation of metadata server with outstanding DMAPI requests
US7765329B2 (en) * 2002-06-05 2010-07-27 Silicon Graphics International Messaging between heterogeneous clients of a storage area network
US6950833B2 (en) * 2001-06-05 2005-09-27 Silicon Graphics, Inc. Clustered filesystem
US20040139125A1 (en) 2001-06-05 2004-07-15 Roger Strassburg Snapshot copy of data volume during data access
US7640582B2 (en) 2003-04-16 2009-12-29 Silicon Graphics International Clustered filesystem for mix of trusted and untrusted nodes
US7617292B2 (en) 2001-06-05 2009-11-10 Silicon Graphics International Multi-class heterogeneous clients in a clustered filesystem
KR20040029089A (en) * 2001-09-03 2004-04-03 코닌클리케 필립스 일렉트로닉스 엔.브이. Device for use in a network environment
US6948089B2 (en) * 2002-01-10 2005-09-20 Hitachi, Ltd. Apparatus and method for multiple generation remote backup and fast restore
US7043503B2 (en) * 2002-02-15 2006-05-09 International Business Machines Corporation Ditto address indicating true disk address for actual data blocks stored in one of an inode of the file system and subsequent snapshot
US7216135B2 (en) * 2002-02-15 2007-05-08 International Business Machines Corporation File system for providing access to a snapshot dataset where disk address in the inode is equal to a ditto address for indicating that the disk address is invalid disk address
US6857001B2 (en) 2002-06-07 2005-02-15 Network Appliance, Inc. Multiple concurrent active file systems
US7024586B2 (en) 2002-06-24 2006-04-04 Network Appliance, Inc. Using file system information in raid data reconstruction and migration
US7454529B2 (en) 2002-08-02 2008-11-18 Netapp, Inc. Protectable data storage system and a method of protecting and/or managing a data storage system
US7117386B2 (en) * 2002-08-21 2006-10-03 Emc Corporation SAR restart and going home procedures
US7437387B2 (en) 2002-08-30 2008-10-14 Netapp, Inc. Method and system for providing a file system overlay
US7882081B2 (en) 2002-08-30 2011-02-01 Netapp, Inc. Optimized disk repository for the storage and retrieval of mostly sequential data
US6938184B2 (en) * 2002-10-17 2005-08-30 Spinnaker Networks, Inc. Method and system for providing persistent storage of user data
US8024172B2 (en) 2002-12-09 2011-09-20 Netapp, Inc. Method and system for emulating tape libraries
US7567993B2 (en) 2002-12-09 2009-07-28 Netapp, Inc. Method and system for creating and using removable disk based copies of backup data
US7769722B1 (en) 2006-12-08 2010-08-03 Emc Corporation Replication and restoration of multiple data storage object types in a data network
US20040181707A1 (en) 2003-03-11 2004-09-16 Hitachi, Ltd. Method and apparatus for seamless management for disaster recovery
US6973369B2 (en) 2003-03-12 2005-12-06 Alacritus, Inc. System and method for virtual vaulting
US7437492B2 (en) 2003-05-14 2008-10-14 Netapp, Inc Method and system for data compression and compression estimation in a virtual tape library environment
US20040267823A1 (en) * 2003-06-24 2004-12-30 Microsoft Corporation Reconcilable and undoable file system
US7275177B2 (en) * 2003-06-25 2007-09-25 Emc Corporation Data recovery with internet protocol replication with or without full resync
US7028156B1 (en) 2003-07-01 2006-04-11 Veritas Operating Corporation Use of read data tracking and caching to recover from data corruption
US7188272B2 (en) * 2003-09-29 2007-03-06 International Business Machines Corporation Method, system and article of manufacture for recovery from a failure in a cascading PPRC system
US7278049B2 (en) * 2003-09-29 2007-10-02 International Business Machines Corporation Method, system, and program for recovery from a failure in an asynchronous data copying system
US7490103B2 (en) 2004-02-04 2009-02-10 Netapp, Inc. Method and system for backing up data
US7904679B2 (en) 2004-02-04 2011-03-08 Netapp, Inc. Method and apparatus for managing backup data
US7426617B2 (en) * 2004-02-04 2008-09-16 Network Appliance, Inc. Method and system for synchronizing volumes in a continuous data protection system
US7406488B2 (en) 2004-02-04 2008-07-29 Netapp Method and system for maintaining data in a continuous data protection system
US7315965B2 (en) 2004-02-04 2008-01-01 Network Appliance, Inc. Method and system for storing data using a continuous data protection system
US7559088B2 (en) 2004-02-04 2009-07-07 Netapp, Inc. Method and apparatus for deleting data upon expiration
US7720817B2 (en) 2004-02-04 2010-05-18 Netapp, Inc. Method and system for browsing objects on a protected volume in a continuous data protection system
US7783606B2 (en) 2004-02-04 2010-08-24 Netapp, Inc. Method and system for remote data recovery
US7325159B2 (en) 2004-02-04 2008-01-29 Network Appliance, Inc. Method and system for data recovery in a continuous data protection system
JP2006011581A (en) * 2004-06-23 2006-01-12 Hitachi Ltd Storage system and its control method
US8028135B1 (en) 2004-09-01 2011-09-27 Netapp, Inc. Method and apparatus for maintaining compliant storage
US7680839B1 (en) * 2004-09-30 2010-03-16 Symantec Operating Corporation System and method for resynchronizing mirrored volumes
US7526620B1 (en) 2004-12-14 2009-04-28 Netapp, Inc. Disk sanitization in an active file system
US7581118B2 (en) 2004-12-14 2009-08-25 Netapp, Inc. Disk sanitization using encryption
US7558839B1 (en) 2004-12-14 2009-07-07 Netapp, Inc. Read-after-write verification for improved write-once-read-many data storage
US7774610B2 (en) 2004-12-14 2010-08-10 Netapp, Inc. Method and apparatus for verifiably migrating WORM data
US7437601B1 (en) * 2005-03-08 2008-10-14 Network Appliance, Inc. Method and system for re-synchronizing an asynchronous mirror without data loss
US7401198B2 (en) 2005-10-06 2008-07-15 Netapp Maximizing storage system throughput by measuring system performance metrics
US7765187B2 (en) * 2005-11-29 2010-07-27 Emc Corporation Replication of a consistency group of data storage objects from servers in a data network
US20070168721A1 (en) * 2005-12-22 2007-07-19 Nokia Corporation Method, network entity, system, electronic device and computer program product for backup and restore provisioning
US7752401B2 (en) 2006-01-25 2010-07-06 Netapp, Inc. Method and apparatus to automatically commit files to WORM status
US7788456B1 (en) 2006-02-16 2010-08-31 Network Appliance, Inc. Use of data images to allow release of unneeded data storage
US7650533B1 (en) 2006-04-20 2010-01-19 Netapp, Inc. Method and system for performing a restoration in a continuous data protection system
US7730351B2 (en) * 2006-05-15 2010-06-01 Oracle America, Inc. Per file dirty region logging
US20080077635A1 (en) * 2006-09-22 2008-03-27 Digital Bazaar, Inc. Highly Available Clustered Storage Network
US8706833B1 (en) 2006-12-08 2014-04-22 Emc Corporation Data storage server having common replication architecture for multiple storage object types
US7793148B2 (en) * 2007-01-12 2010-09-07 International Business Machines Corporation Using virtual copies in a failover and failback environment
US7644300B1 (en) * 2007-04-20 2010-01-05 3Par, Inc. Fast resynchronization of data from a remote copy
US8566362B2 (en) 2009-01-23 2013-10-22 Nasuni Corporation Method and system for versioned file system using structured data representations
DE102009029334A1 (en) * 2009-09-10 2011-03-24 Henkel Ag & Co. Kgaa Two-stage process for the corrosion-protective treatment of metal surfaces
US8190574B2 (en) 2010-03-02 2012-05-29 Storagecraft Technology Corporation Systems, methods, and computer-readable media for backup and restoration of computer information
CN102947681B (en) 2010-04-20 2016-05-18 惠普发展公司,有限责任合伙企业 Strengthen luminous automatic layout, luminous enhance device for surface
US8799231B2 (en) 2010-08-30 2014-08-05 Nasuni Corporation Versioned file system with fast restore
WO2012051298A2 (en) 2010-10-12 2012-04-19 Nasuni Corporation Versioned file system with sharing
WO2012054027A1 (en) 2010-10-20 2012-04-26 Hewlett-Packard Development Company, L.P. Chemical-analysis device integrated with metallic-nanofinger device for chemical sensing
US9274058B2 (en) 2010-10-20 2016-03-01 Hewlett-Packard Development Company, L.P. Metallic-nanofinger device for chemical sensing
US8843489B2 (en) 2010-11-16 2014-09-23 Actifio, Inc. System and method for managing deduplicated copies of data using temporal relationships among copies
US8402004B2 (en) 2010-11-16 2013-03-19 Actifio, Inc. System and method for creating deduplicated copies of data by tracking temporal relationships among copies and by ingesting difference data
US8417674B2 (en) 2010-11-16 2013-04-09 Actifio, Inc. System and method for creating deduplicated copies of data by sending difference data between near-neighbor temporal states
US9858155B2 (en) 2010-11-16 2018-01-02 Actifio, Inc. System and method for managing data with service level agreements that may specify non-uniform copying of data
US8904126B2 (en) 2010-11-16 2014-12-02 Actifio, Inc. System and method for performing a plurality of prescribed data management functions in a manner that reduces redundant access operations to primary storage
US8601220B1 (en) 2011-04-29 2013-12-03 Netapp, Inc. Transparent data migration in a storage system environment
US8589724B2 (en) 2011-06-30 2013-11-19 Seagate Technology Llc Rapid rebuild of a data set
WO2013019869A2 (en) 2011-08-01 2013-02-07 Actifio, Inc. Data fingerpringting for copy accuracy assurance
GB2495079A (en) * 2011-09-23 2013-04-03 Hybrid Logic Ltd Live migration of applications and file systems in a distributed system
CA2877284A1 (en) 2012-06-18 2013-12-27 Actifio, Inc. Enhanced data management virtualization system
US8892941B2 (en) 2012-06-27 2014-11-18 International Business Machines Corporation Recovering a volume table and data sets from a corrupted volume
KR102050723B1 (en) 2012-09-28 2019-12-02 삼성전자 주식회사 Computing system and data management method thereof
AU2014265979A1 (en) 2013-05-14 2015-12-10 Actifio, Inc. Efficient data replication and garbage collection predictions
US20150142748A1 (en) 2013-11-18 2015-05-21 Actifio, Inc. Computerized methods and apparatus for data cloning
US9720778B2 (en) 2014-02-14 2017-08-01 Actifio, Inc. Local area network free data movement
US9792187B2 (en) 2014-05-06 2017-10-17 Actifio, Inc. Facilitating test failover using a thin provisioned virtual machine created from a snapshot
US9772916B2 (en) 2014-06-17 2017-09-26 Actifio, Inc. Resiliency director
WO2016044403A1 (en) 2014-09-16 2016-03-24 Mutalik, Madhav Copy data techniques
US10379963B2 (en) 2014-09-16 2019-08-13 Actifio, Inc. Methods and apparatus for managing a large-scale environment of copy data management appliances
US10146788B1 (en) * 2014-10-10 2018-12-04 Google Llc Combined mirroring and caching network file system
WO2016085541A1 (en) 2014-11-28 2016-06-02 Nasuni Corporation Versioned file system with global lock
US10445187B2 (en) 2014-12-12 2019-10-15 Actifio, Inc. Searching and indexing of backup data sets
WO2016115135A1 (en) 2015-01-12 2016-07-21 Xiangdong Zhang Disk group based backup
US9842029B2 (en) * 2015-03-25 2017-12-12 Kabushiki Kaisha Toshiba Electronic device, method and storage medium
US10282201B2 (en) 2015-04-30 2019-05-07 Actifo, Inc. Data provisioning techniques
US9734028B2 (en) * 2015-06-29 2017-08-15 International Business Machines Corporation Reverse resynchronization by a secondary data source when a data destination has more recent data
US10691659B2 (en) 2015-07-01 2020-06-23 Actifio, Inc. Integrating copy data tokens with source code repositories
US10613938B2 (en) 2015-07-01 2020-04-07 Actifio, Inc. Data virtualization using copy data tokens
US10684994B2 (en) * 2015-09-25 2020-06-16 Netapp Inc. Data synchronization
US10445298B2 (en) 2016-05-18 2019-10-15 Actifio, Inc. Vault to object store
US10476955B2 (en) 2016-06-02 2019-11-12 Actifio, Inc. Streaming and sequential data replication
US10855554B2 (en) 2017-04-28 2020-12-01 Actifio, Inc. Systems and methods for determining service level agreement compliance
US11403178B2 (en) 2017-09-29 2022-08-02 Google Llc Incremental vault to object store
US11176001B2 (en) 2018-06-08 2021-11-16 Google Llc Automated backup and restore of a disk group
CN112307013A (en) * 2019-07-30 2021-02-02 伊姆西Ip控股有限责任公司 Method, apparatus and computer program product for managing application systems
CN111291005B (en) * 2020-01-19 2023-05-02 Oppo(重庆)智能科技有限公司 File viewing method, device, terminal equipment, system and storage medium

Family Cites Families (54)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US489781A (en) * 1893-01-10 William w
US4761785B1 (en) 1986-06-12 1996-03-12 Ibm Parity spreading to enhance storage access
US4897781A (en) 1987-02-13 1990-01-30 International Business Machines Corporation System and method for using cached data at a local node after re-opening a file at a remote node in a distributed networking environment
US4875159A (en) 1987-12-22 1989-10-17 Amdahl Corporation Version management system using plural control fields for synchronizing two versions of files in a multiprocessor system
US4937763A (en) 1988-09-06 1990-06-26 E I International, Inc. Method of system state analysis
US5067099A (en) 1988-11-03 1991-11-19 Allied-Signal Inc. Methods and apparatus for monitoring system performance
US5163148A (en) 1989-08-11 1992-11-10 Digital Equipment Corporation File backup system for producing a backup copy of a file which may be updated during backup
US5163131A (en) 1989-09-08 1992-11-10 Auspex Systems, Inc. Parallel i/o network file server architecture
US5276867A (en) 1989-12-19 1994-01-04 Epoch Systems, Inc. Digital data storage system with improved data migration
JPH0731582B2 (en) 1990-06-21 1995-04-10 インターナショナル・ビジネス・マシーンズ・コーポレイション Method and apparatus for recovering parity protected data
US5208813A (en) 1990-10-23 1993-05-04 Array Technology Corporation On-line reconstruction of a failed redundant array system
JP2603757B2 (en) 1990-11-30 1997-04-23 富士通株式会社 Method of controlling array disk device
US5235601A (en) 1990-12-21 1993-08-10 Array Technology Corporation On-line restoration of redundancy information in a redundant array system
US5369757A (en) * 1991-06-18 1994-11-29 Digital Equipment Corporation Recovery logging in the presence of snapshot files by ordering of buffer pool flushing
US5321837A (en) 1991-10-11 1994-06-14 International Business Machines Corporation Event handling mechanism having a process and an action association process
US5313626A (en) 1991-12-17 1994-05-17 Jones Craig S Disk drive array with efficient background rebuilding
US5442752A (en) 1992-01-24 1995-08-15 International Business Machines Corporation Data storage method for DASD arrays using striping based on file length
US5305326A (en) 1992-03-06 1994-04-19 Data General Corporation High availability disk arrays
US5335235A (en) 1992-07-07 1994-08-02 Digital Equipment Corporation FIFO based parity generator
US6604118B2 (en) 1998-07-31 2003-08-05 Network Appliance, Inc. File system image transfer
JP3862274B2 (en) 1993-06-03 2006-12-27 ネットワーク・アプライアンス・インコーポレイテッド File allocation method of file system integrated with RAID disk subsystem
US5963962A (en) 1995-05-31 1999-10-05 Network Appliance, Inc. Write anywhere file-system layout
EP1031928B1 (en) 1993-06-04 2005-05-18 Network Appliance, Inc. A method for providing parity in a raid sub-system using non-volatile memory
AU682523B2 (en) 1993-07-01 1997-10-09 Legent Corporation System and method for distributed storage management on networked computer systems
KR0128271B1 (en) * 1994-02-22 1998-04-15 윌리암 티. 엘리스 Remote data duplexing
US5649152A (en) 1994-10-13 1997-07-15 Vinca Corporation Method and system for providing a static snapshot of data stored on a mass storage system
US5604862A (en) * 1995-03-14 1997-02-18 Network Integrity, Inc. Continuously-snapshotted protection of computer files
US5666353A (en) 1995-03-21 1997-09-09 Cisco Systems, Inc. Frame based traffic policing for a digital switch
US6453325B1 (en) 1995-05-24 2002-09-17 International Business Machines Corporation Method and means for backup and restoration of a database system linked to a system for filing data
US5907672A (en) 1995-10-04 1999-05-25 Stac, Inc. System for backing up computer disk volumes with error remapping of flawed memory addresses
US5819310A (en) 1996-05-24 1998-10-06 Emc Corporation Method and apparatus for reading data from mirrored logical volumes on physical disk drives
US5857208A (en) * 1996-05-31 1999-01-05 Emc Corporation Method and apparatus for performing point in time backup operation in a computer system
US5996106A (en) 1997-02-04 1999-11-30 Micron Technology, Inc. Multi bank test mode for memory devices
US5873101A (en) 1997-02-10 1999-02-16 Oracle Corporation Database backup/restore and bulk data transfer
US5895495A (en) * 1997-03-13 1999-04-20 International Business Machines Corporation Demand-based larx-reserve protocol for SMP system buses
US6490610B1 (en) * 1997-05-30 2002-12-03 Oracle Corporation Automatic failover for clients accessing a resource through a server
US5996086A (en) 1997-10-14 1999-11-30 Lsi Logic Corporation Context-based failover architecture for redundant servers
US6101585A (en) 1997-11-04 2000-08-08 Adaptec, Inc. Mechanism for incremental backup of on-line files
US6212531B1 (en) * 1998-01-13 2001-04-03 International Business Machines Corporation Method for implementing point-in-time copy using a snapshot function
US6360330B1 (en) 1998-03-31 2002-03-19 Emc Corporation System and method for backing up data stored in multiple mirrors on a mass storage subsystem under control of a backup server
US6182198B1 (en) * 1998-06-05 2001-01-30 International Business Machines Corporation Method and apparatus for providing a disc drive snapshot backup while allowing normal drive read, write, and buffering operations
US6279011B1 (en) 1998-06-19 2001-08-21 Network Appliance, Inc. Backup and restore for heterogeneous file server environment
US6574591B1 (en) 1998-07-31 2003-06-03 Network Appliance, Inc. File systems image transfer between dissimilar file systems
US6119244A (en) 1998-08-25 2000-09-12 Network Appliance, Inc. Coordinating persistent status information with multiple file servers
US6397307B2 (en) * 1999-02-23 2002-05-28 Legato Systems, Inc. Method and system for mirroring and archiving mass storage
KR100382851B1 (en) * 1999-03-31 2003-05-09 인터내셔널 비지네스 머신즈 코포레이션 A method and apparatus for managing client computers in a distributed data processing system
US6529921B1 (en) 1999-06-29 2003-03-04 Microsoft Corporation Dynamic synchronization of tables
US6591377B1 (en) * 1999-11-24 2003-07-08 Unisys Corporation Method for comparing system states at different points in time
US6715034B1 (en) 1999-12-13 2004-03-30 Network Appliance, Inc. Switching file system request in a mass storage system
US6341341B1 (en) * 1999-12-16 2002-01-22 Adaptec, Inc. System and method for disk control with snapshot feature including read-write snapshot half
US6708227B1 (en) * 2000-04-24 2004-03-16 Microsoft Corporation Method and system for providing common coordination and administration of multiple snapshot providers
US6978280B1 (en) * 2000-10-12 2005-12-20 Hewlett-Packard Development Company, L.P. Method and system for improving LUN-based backup reliability
US6877016B1 (en) * 2001-09-13 2005-04-05 Unisys Corporation Method of capturing a physically consistent mirrored snapshot of an online database
US6981114B1 (en) * 2002-10-16 2005-12-27 Veritas Operating Corporation Snapshot reconstruction from an existing snapshot and one or more modification logs

Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5519844A (en) * 1990-11-09 1996-05-21 Emc Corporation Logical partitioning of a redundant array storage system
US20040073831A1 (en) * 1993-04-23 2004-04-15 Moshe Yanai Remote data mirroring
US5819292A (en) * 1993-06-03 1998-10-06 Network Appliance, Inc. Method for maintaining consistent states of a file system and for creating user-accessible read-only copies of a file system
US5479653A (en) * 1994-07-14 1995-12-26 Dellusa, L.P. Disk array apparatus and method which supports compound raid configurations and spareless hot sparing
US6085298A (en) * 1994-10-13 2000-07-04 Vinca Corporation Comparing mass storage devices through digests that are representative of stored data in order to minimize data transfer
US6023780A (en) * 1996-05-13 2000-02-08 Fujitsu Limited Disc array apparatus checking and restructuring data read from attached disc drives
US5960169A (en) * 1997-02-27 1999-09-28 International Business Machines Corporation Transformational raid for hierarchical storage management system
US6092215A (en) * 1997-09-29 2000-07-18 International Business Machines Corporation System and method for reconstructing data in a storage array system
US6269381B1 (en) * 1998-06-30 2001-07-31 Emc Corporation Method and apparatus for backing up data before updating the data and for restoring from the backups
US20020059505A1 (en) * 1998-06-30 2002-05-16 St. Pierre Edgar J. Method and apparatus for differential backup in a computer storage system
US20010010070A1 (en) * 1998-08-13 2001-07-26 Crockett Robert Nelson System and method for dynamically resynchronizing backup data
US6463573B1 (en) * 1999-06-03 2002-10-08 International Business Machines Corporation Data processor storage systems with dynamic resynchronization of mirrored logical data volumes subsequent to a storage system failure
US6543004B1 (en) * 1999-07-29 2003-04-01 Hewlett-Packard Development Company, L.P. Method and apparatus for archiving and restoring data
US6671705B1 (en) * 1999-08-17 2003-12-30 Emc Corporation Remote mirroring system, device, and method
US6662268B1 (en) * 1999-09-02 2003-12-09 International Business Machines Corporation System and method for striped mirror re-synchronization by logical partition rather than stripe units
US6654912B1 (en) * 2000-10-04 2003-11-25 Network Appliance, Inc. Recovery of file system data in file servers mirrored file system volumes

Cited By (41)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070180305A1 (en) * 2003-01-31 2007-08-02 Hitachi, Ltd. Methods for Controlling Storage Devices Controlling Apparatuses
US7596672B1 (en) 2003-10-24 2009-09-29 Network Appliance, Inc. Synchronous mirroring including writing image updates to a file
US7200726B1 (en) 2003-10-24 2007-04-03 Network Appliance, Inc. Method and apparatus for reducing network traffic during mass storage synchronization phase of synchronous data mirroring
US7203796B1 (en) 2003-10-24 2007-04-10 Network Appliance, Inc. Method and apparatus for synchronous data mirroring
US7325109B1 (en) * 2003-10-24 2008-01-29 Network Appliance, Inc. Method and apparatus to mirror data at two separate sites without comparing the data at the two sites
US7162662B1 (en) * 2003-12-23 2007-01-09 Network Appliance, Inc. System and method for fault-tolerant synchronization of replica updates for fixed persistent consistency point image consumption
US7363537B1 (en) * 2003-12-23 2008-04-22 Network Appliance, Inc. System and method for fault-tolerant synchronization of replica updates for fixed persistent consistency point image consumption
US20060168624A1 (en) * 2004-11-22 2006-07-27 John Carney Method and system for delivering enhanced TV content
US20060136771A1 (en) * 2004-12-06 2006-06-22 Hitachi, Ltd. Storage system and snapshot data preparation method in storage system
US8095822B2 (en) 2004-12-06 2012-01-10 Hitachi, Ltd. Storage system and snapshot data preparation method in storage system
US7536592B2 (en) * 2004-12-06 2009-05-19 Hitachi, Ltd. Storage system and snapshot data preparation method in storage system
US20090216977A1 (en) * 2004-12-06 2009-08-27 Hitachi, Ltd. Storage System and Snapshot Data Preparation Method in Storage System
US7707165B1 (en) * 2004-12-09 2010-04-27 Netapp, Inc. System and method for managing data versions in a file system
US9344112B2 (en) 2006-04-28 2016-05-17 Ling Zheng Sampling based elimination of duplicate data
US20070255758A1 (en) * 2006-04-28 2007-11-01 Ling Zheng System and method for sampling based elimination of duplicate data
US8165221B2 (en) 2006-04-28 2012-04-24 Netapp, Inc. System and method for sampling based elimination of duplicate data
US8296260B2 (en) 2006-06-29 2012-10-23 Netapp, Inc. System and method for managing data deduplication of storage systems utilizing persistent consistency point images
US20080005201A1 (en) * 2006-06-29 2008-01-03 Daniel Ting System and method for managing data deduplication of storage systems utilizing persistent consistency point images
US8412682B2 (en) * 2006-06-29 2013-04-02 Netapp, Inc. System and method for retrieving and using block fingerprints for data deduplication
US20080005141A1 (en) * 2006-06-29 2008-01-03 Ling Zheng System and method for retrieving and using block fingerprints for data deduplication
US20110035357A1 (en) * 2006-06-29 2011-02-10 Daniel Ting System and method for managing data deduplication of storage systems utilizing persistent consistency point images
US7921077B2 (en) 2006-06-29 2011-04-05 Netapp, Inc. System and method for managing data deduplication of storage systems utilizing persistent consistency point images
US7747584B1 (en) 2006-08-22 2010-06-29 Netapp, Inc. System and method for enabling de-duplication in a storage system architecture
US7853750B2 (en) 2007-01-30 2010-12-14 Netapp, Inc. Method and an apparatus to store data patterns
US20080184001A1 (en) * 2007-01-30 2008-07-31 Network Appliance, Inc. Method and an apparatus to store data patterns
US8001307B1 (en) 2007-04-27 2011-08-16 Network Appliance, Inc. Apparatus and a method to eliminate deadlock in a bi-directionally mirrored data storage system
US9069787B2 (en) 2007-05-31 2015-06-30 Netapp, Inc. System and method for accelerating anchor point detection
US20080301134A1 (en) * 2007-05-31 2008-12-04 Miller Steven C System and method for accelerating anchor point detection
US8762345B2 (en) 2007-05-31 2014-06-24 Netapp, Inc. System and method for accelerating anchor point detection
US20080313496A1 (en) * 2007-06-12 2008-12-18 Microsoft Corporation Gracefully degradable versioned storage systems
US7849354B2 (en) * 2007-06-12 2010-12-07 Microsoft Corporation Gracefully degradable versioned storage systems
US8793226B1 (en) 2007-08-28 2014-07-29 Netapp, Inc. System and method for estimating duplicate data
US7941691B2 (en) * 2008-05-28 2011-05-10 Fujitsu Limited Control of connecting apparatuses in information processing system
US20090299492A1 (en) * 2008-05-28 2009-12-03 Fujitsu Limited Control of connecting apparatuses in information processing system
US8250043B2 (en) 2008-08-19 2012-08-21 Netapp, Inc. System and method for compression of partially ordered data sets
US20100049726A1 (en) * 2008-08-19 2010-02-25 Netapp, Inc. System and method for compression of partially ordered data sets
US10142121B2 (en) 2011-12-07 2018-11-27 Comcast Cable Communications, Llc Providing synchronous content and supplemental experiences
US10848333B2 (en) 2011-12-07 2020-11-24 Comcast Cable Communications, Llc Providing synchronous content and supplemental experiences
US11711231B2 (en) 2011-12-07 2023-07-25 Comcast Cable Communications, Llc Providing synchronous content and supplemental experiences
US20150039572A1 (en) * 2012-03-01 2015-02-05 Netapp, Inc. System and method for removing overlapping ranges from a flat sorted data structure
US9720928B2 (en) * 2012-03-01 2017-08-01 Netapp, Inc. Removing overlapping ranges from a flat sorted data structure

Also Published As

Publication number Publication date
US20040153736A1 (en) 2004-08-05
EP1325415A2 (en) 2003-07-09
WO2002029572B1 (en) 2003-04-24
EP1325415B1 (en) 2005-08-03
WO2002029572A9 (en) 2003-11-13
WO2002029572A8 (en) 2002-09-12
US6654912B1 (en) 2003-11-25
DE60112462T2 (en) 2006-04-20
WO2002029572A3 (en) 2003-01-09
DE60112462D1 (en) 2005-09-08
US7096379B2 (en) 2006-08-22
WO2002029572A2 (en) 2002-04-11

Similar Documents

Publication Publication Date Title
US7143249B2 (en) Resynchronization of mirrored storage devices
US20020194529A1 (en) Resynchronization of mirrored storage devices
US5682513A (en) Cache queue entry linking for DASD record updates
US7634594B1 (en) System and method for identifying block-level write operations to be transferred to a secondary site during replication
US7415488B1 (en) System and method for redundant storage consistency recovery
US7337288B2 (en) Instant refresh of a data volume copy
JP4454342B2 (en) Storage system and storage system control method
US7904684B2 (en) System and article of manufacture for consistent copying of storage volumes
US7478263B1 (en) System and method for establishing bi-directional failover in a two node cluster
US6035412A (en) RDF-based and MMF-based backups
US6678809B1 (en) Write-ahead log in directory management for concurrent I/O access for block storage
US7383407B1 (en) Synchronous replication for system and data security
US6366986B1 (en) Method and apparatus for differential backup in a computer storage system
US7194487B1 (en) System and method for recording the order of a change caused by restoring a primary volume during ongoing replication of the primary volume
US6981114B1 (en) Snapshot reconstruction from an existing snapshot and one or more modification logs
US6269381B1 (en) Method and apparatus for backing up data before updating the data and for restoring from the backups
US7089385B1 (en) Tracking in-progress writes through use of multi-column bitmaps
US6553389B1 (en) Resource availability determination mechanism for distributed data storage system
US6832330B1 (en) Reversible mirrored restore of an enterprise level primary disk
US8200631B2 (en) Snapshot reset method and apparatus
US20040254964A1 (en) Data replication with rollback
US20030065780A1 (en) Data storage system having data restore by swapping logical units
US7424497B1 (en) Technique for accelerating the creation of a point in time prepresentation of a virtual file system
US20070277012A1 (en) Method and apparatus for managing backup data and journal
US7617259B1 (en) System and method for managing redundant storage consistency at a file system level

Legal Events

Date Code Title Description
AS Assignment

Owner name: NETWORK APPLIANCE, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:DOUCETTE, DOUGLAS P.;STRANGE, STEPHEN H.;VISWANATHAN, SRINIVASAN;AND OTHERS;REEL/FRAME:013198/0401;SIGNING DATES FROM 20020730 TO 20020812

STCB Information on status: application discontinuation

Free format text: ABANDONED -- AFTER EXAMINER'S ANSWER OR BOARD OF APPEALS DECISION