US9430343B1 - Using affinity to mediate bias in a distributed storage system - Google Patents

Using affinity to mediate bias in a distributed storage system

Info

Publication number
US9430343B1
US9430343B1
Authority
US
United States
Prior art keywords
site
data
affinity
copy
communication
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US13/465,275
Inventor
Michael Trachtman
Bradford B. Glade
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
EMC Corp
Original Assignee
EMC Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by EMC Corp filed Critical EMC Corp
Priority to US13/465,275 priority Critical patent/US9430343B1/en
Assigned to EMC CORPORATION reassignment EMC CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: TRACHTMAN, MICHAEL, GLADE, BRADFORD B.
Application granted granted Critical
Publication of US9430343B1 publication Critical patent/US9430343B1/en
Assigned to CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT reassignment CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT SECURITY AGREEMENT Assignors: ASAP SOFTWARE EXPRESS, INC., AVENTAIL LLC, CREDANT TECHNOLOGIES, INC., DELL INTERNATIONAL L.L.C., DELL MARKETING L.P., DELL PRODUCTS L.P., DELL SOFTWARE INC., DELL SYSTEMS CORPORATION, DELL USA L.P., EMC CORPORATION, EMC IP Holding Company LLC, FORCE10 NETWORKS, INC., MAGINATICS LLC, MOZY, INC., SCALEIO LLC, SPANNING CLOUD APPS LLC, WYSE TECHNOLOGY L.L.C.
Assigned to THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT reassignment THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT SECURITY AGREEMENT Assignors: ASAP SOFTWARE EXPRESS, INC., AVENTAIL LLC, CREDANT TECHNOLOGIES, INC., DELL INTERNATIONAL L.L.C., DELL MARKETING L.P., DELL PRODUCTS L.P., DELL SOFTWARE INC., DELL SYSTEMS CORPORATION, DELL USA L.P., EMC CORPORATION, EMC IP Holding Company LLC, FORCE10 NETWORKS, INC., MAGINATICS LLC, MOZY, INC., SCALEIO LLC, SPANNING CLOUD APPS LLC, WYSE TECHNOLOGY L.L.C.
Assigned to EMC IP Holding Company LLC reassignment EMC IP Holding Company LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: EMC CORPORATION
Assigned to THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A. reassignment THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A. SECURITY AGREEMENT Assignors: CREDANT TECHNOLOGIES, INC., DELL INTERNATIONAL L.L.C., DELL MARKETING L.P., DELL PRODUCTS L.P., DELL USA L.P., EMC CORPORATION, EMC IP Holding Company LLC, FORCE10 NETWORKS, INC., WYSE TECHNOLOGY L.L.C.
Assigned to THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A. reassignment THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A. SECURITY AGREEMENT Assignors: CREDANT TECHNOLOGIES INC., DELL INTERNATIONAL L.L.C., DELL MARKETING L.P., DELL PRODUCTS L.P., DELL USA L.P., EMC CORPORATION, EMC IP Holding Company LLC, FORCE10 NETWORKS, INC., WYSE TECHNOLOGY L.L.C.
Assigned to FORCE10 NETWORKS, INC., EMC IP Holding Company LLC, EMC CORPORATION, DELL PRODUCTS L.P., DELL SYSTEMS CORPORATION, ASAP SOFTWARE EXPRESS, INC., DELL SOFTWARE INC., DELL MARKETING L.P., MOZY, INC., MAGINATICS LLC, CREDANT TECHNOLOGIES, INC., WYSE TECHNOLOGY L.L.C., SCALEIO LLC, DELL INTERNATIONAL, L.L.C., AVENTAIL LLC, DELL USA L.P. reassignment FORCE10 NETWORKS, INC. RELEASE BY SECURED PARTY (SEE DOCUMENT FOR DETAILS). Assignors: CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH
Assigned to SCALEIO LLC, EMC CORPORATION (ON BEHALF OF ITSELF AND AS SUCCESSOR-IN-INTEREST TO MAGINATICS LLC), DELL MARKETING L.P. (ON BEHALF OF ITSELF AND AS SUCCESSOR-IN-INTEREST TO CREDANT TECHNOLOGIES, INC.), DELL PRODUCTS L.P., DELL USA L.P., DELL INTERNATIONAL L.L.C., DELL MARKETING CORPORATION (SUCCESSOR-IN-INTEREST TO FORCE10 NETWORKS, INC. AND WYSE TECHNOLOGY L.L.C.), DELL MARKETING CORPORATION (SUCCESSOR-IN-INTEREST TO ASAP SOFTWARE EXPRESS, INC.), EMC IP HOLDING COMPANY LLC (ON BEHALF OF ITSELF AND AS SUCCESSOR-IN-INTEREST TO MOZY, INC.) reassignment SCALEIO LLC RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (040136/0001) Assignors: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT
Assigned to DELL MARKETING CORPORATION (SUCCESSOR-IN-INTEREST TO FORCE10 NETWORKS, INC. AND WYSE TECHNOLOGY L.L.C.), EMC IP HOLDING COMPANY LLC (ON BEHALF OF ITSELF AND AS SUCCESSOR-IN-INTEREST TO MOZY, INC.), DELL PRODUCTS L.P., SCALEIO LLC, DELL USA L.P., DELL INTERNATIONAL L.L.C., DELL MARKETING CORPORATION (SUCCESSOR-IN-INTEREST TO ASAP SOFTWARE EXPRESS, INC.), EMC CORPORATION (ON BEHALF OF ITSELF AND AS SUCCESSOR-IN-INTEREST TO MAGINATICS LLC), DELL MARKETING L.P. (ON BEHALF OF ITSELF AND AS SUCCESSOR-IN-INTEREST TO CREDANT TECHNOLOGIES, INC.) reassignment DELL MARKETING CORPORATION (SUCCESSOR-IN-INTEREST TO FORCE10 NETWORKS, INC. AND WYSE TECHNOLOGY L.L.C.) RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (045455/0001) Assignors: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT
Active legal-status Critical Current
Adjusted expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/14 Error detection or correction of the data by redundancy in operation
    • G06F 11/1402 Saving, restoring, recovering or retrying
    • G06F 11/1415 Saving, restoring, recovering or retrying at system level
    • G06F 11/142 Reconfiguring to eliminate the error
    • G06F 11/1425 Reconfiguring to eliminate the error by reconfiguration of node membership
    • G06F 11/1479 Generic software techniques for error detection or fault masking
    • G06F 11/16 Error detection or correction of the data by redundancy in hardware
    • G06F 11/1658 Data re-synchronization of a redundant component, or initial sync of replacement, additional or spare unit
    • G06F 11/1662 Data re-synchronization of a redundant component, or initial sync of replacement, additional or spare unit, the resynchronized component or unit being a persistent storage device
    • G06F 11/20 Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F 11/2053 Error detection or correction of the data by redundancy in hardware using active fault-masking, where persistent mass storage functionality or persistent mass storage control functionality is redundant
    • G06F 11/2056 Error detection or correction of the data by redundancy in hardware using active fault-masking, where persistent mass storage functionality or persistent mass storage control functionality is redundant, by mirroring
    • G06F 11/2058 Error detection or correction of the data by redundancy in hardware using active fault-masking, where persistent mass storage functionality or persistent mass storage control functionality is redundant, by mirroring using more than 2 mirrored copies
    • G06F 11/2094 Redundant storage or storage space
    • G06F 2201/00 Indexing scheme relating to error detection, to error correction, and to monitoring
    • G06F 2201/82 Solving problems relating to consistency

Definitions

  • affinity can also, or alternatively, be applied at a finer granularity such as one or more byte ranges in a file.
  • in “big-data” scenarios such as film editing, the same file is processed by multiple sites; the file will be relatively large, and clients at different sites process different parts of the file.
  • site 100 might be working on colorizing the 3rd-7th scenes of a movie file 603 at range 600 while site 200 is working on adding musical scoring to the 10th-12th scenes of the movie at range 601.
  • a mutual locking mechanism, and/or change control system is used to avoid conflicting IOs when inter-site communication is available.
  • a client associated with site 100 would “checkout” the scenes to be colorized.
  • progress during communication loss may be facilitated by linking affinity to the “checkout” process.
  • site 100 would have affinity to the byte range(s) 600 associated with the 3rd-7th scenes of the movie.
  • site 200 would have affinity to the byte range(s) 601 associated with the 10th-12th scenes of the movie because the client on site 200 would “checkout” those scenes for musical scoring. Consequently, progress on at least the checked-out data is possible during communication loss.
  • Other levels of granularity which are unambiguously understood can alternatively be used.
  • one site may be allowed to modify the first ten paragraphs of a document while the other site may be allowed to modify all subsequent paragraphs.
  • the system may also allow any site to check out scenes that were not checked out when the communication loss occurred. The premise is that the likelihood of more than one site performing irreconcilable edits on the same scene at the same time is negligible, that the two sites will coordinate their work through other mechanisms such as telephone-based coordination, or that in the worst case the work of all but one of the sites will have to be redone; at least some work will be salvageable, and not all of the employee time during the communication loss will be wasted.
  • one or more files or directories are assigned “Read Only” affinity 700 .
  • Files and directories with Read Only affinity are made available for Reads by all sites.
  • Read Only affinity can be applied, for example, to files and directories that are not being actively modified. Alternatively, it may be applied as the default mode for the entire file-system by providing it on the root-directory of the file-system.
  • Read Only affinity can mean that any site may read the data at any time, e.g., during loss of communication and also when inter-site communication is available.
  • the system is configured such that Read Only affinity directly applied to a file or directory, or inherited downward, is overridden by direct and inherited affinities to specific sites.
  • affinities may be applied contemporaneously, i.e., blended.
  • blended affinity When blended affinity is applied to a data set, different sites have different access privileges to that same data set. In other words, multiple affinities are applied to the same data set. Blended affinity is advantageous in situations where conflicts or the possibility of reading stale data is either negligible, tolerable or easily reconciled.
  • one type of blended affinity is Laissez Faire affinity 800.
  • Laissez Faire affinity allows all sites to perform IOs to a data set, e.g., read, create, append and modify of a file or directory. This advantageously facilitates progress at multiple sites, but it carries a risk to both the freshness and the semantic consistency of the data.
  • create operations performed in parallel can lead to two different files being created with the same name or purpose.
  • append operations may allocate the same logical region of a disk to two separate files. Consequently, Laissez Faire affinity will typically be applied only in certain situations.
  • Laissez Faire affinity 800 is applied to a temporary directory (e.g. “/tmp”) where each site creates files using unique identifiers (UIDs such as their IP address) as part of their file names.
  • This application-convention can help guarantee that the two sites will never create conflicting content.
  • the two or more versions of the temporary directory can be resolved by adopting the convention that all files with UIDs in them that exist in any site are placed in the repaired version of the directory.
  • a special procedure may be used where stale data is still being used when communication is restored. For example, if site 100 made changes to the file while site 200 had read-affinity which it used to read the file, site 200 would be using stale data. If the application on site 200 is still reading that file when communication is restored then a snapshot of the stale data may continue to be used by the process on site 200 . The snapshot would be deleted after the process on site 200 completes. This ensures that site 200 will work with a consistent-to-itself view of the data.
  • FIG. 9 illustrates a Read-Laissez-Faire variation of blended affinity.
  • Read-Laissez-Faire affinity 900 allows one site to have Write affinity to a data set such as a file or directory, and at least one other site to have Read affinity to the data set.
  • site 100 could have Read/Write permission to directory 400 while site 200 has Read permission to directory 400 (based on Read Laissez Faire affinity 900 ).
  • site 200 will Read stale data if a Read of directory 400 is performed after a Write to directory 400 by site 100 , where both IOs occur during loss of communication.
  • such a risk may be appropriate in certain situations.
  • a risk of reading stale data may be acceptable when the stale data is still valid for an extended period of time, when the data is rarely subject to Writes, or when the Writes are unlikely to be associated with critical changes.
  • Examples of such data include social-networking elements of some web-sites. For example, it may be acceptable if an Internet retailer shows potentially stale versions of product-descriptions or product-reviews to customers, even if those product descriptions or product reviews have been updated on some other site during a loss of communication, because product-descriptions or product-reviews may be unlikely to be associated with critical changes. Sites would have to determine what a “critical change” is. For example, some Internet retailers may consider price changes to be a “critical change”, while others may be willing to honor stale pricing during the duration of a communication loss.
  • a variation of Read-Laissez-Faire affinity is timed-expiration for the Read-Laissez-Faire affinity.
  • Timed-expiration for the Read-Laissez-Faire affinity causes the Read portion of the affinity to expire after a predetermined period of time. For example, Reads by sites other than the site with Write affinity might be configured to expire 24 hours after a communication loss is detected. This allows sites with Read affinity to progress using potentially stale data for the predetermined period of time. If communication is not restored by the end of this period of time then the Read affinities lapse and those sites cannot perform Reads of the data set. However, the site with Read/Write affinity may continue to perform Reads and Writes.
  • the affinity determination based on previous IOs described above can be applied when the data structure is known.
  • a record may be maintained of which site has most recently performed either or both a Read and a Write.
  • the record might include a map or table in which new entries overwrite old entries for a given data set such as a file or directory. Affinity is determined based on the record.
  • the site which most recently performed a Write to the file or directory may be the only site permitted to Read/Write to that file or directory during communication loss.
  • only the site which most recently performed either a Read or a Write of the file or directory is permitted to Read/Write to that file or directory during communication loss.
  • stale entries may be timed out.
  • the operations associated with affinity to directories are not necessarily the same as the operations associated with affinity to files.
  • Primary operations associated with directories include read-directory (and associated information), create file, delete file and possibly rename file, access file meta data or modify file meta data, whereas the primary operations associated with files include Read and Write, or in a more sophisticated environment, sequential reads, random reads, append, modify or erase/delete. It will be appreciated that separate treatments of Read and Write affinities described herein could be applied to the operations associated with directories. For example, create, delete, change file attribute and rename operation affinity could reside with one site while read-directory could be shared.
  • affinity restrictions are applied to affinity relationships.
  • Affinity restrictions can limit affinities in terms of client ID, filename, directory name, time, and any other expression that can be processed.
  • Laissez Faire affinity on the /tmp directory 800 ( FIG. 8 ) could be subject to a scope restriction that site 100 is given the Laissez Faire affinity for any file that matches the pattern /tmp/site1?* while site 200 is given the Laissez Faire affinity for all other files.
  • affinity or affinities for opening a file or directory can be restricted to specific times.
  • the resolution step 214 is performed when communication is restored in order to provide inter-site data consistency.
  • the data that exists at the sites is reconciled to recreate a single view of that data, e.g., the virtual LUN.
  • Resolver software may utilize file-system semantics (merge of discrete updates) and affinities to perform resolution.
  • if affinities limit the number of sites that can possibly modify any particular data set to at most one site, then any changes that the site made are copied to other sites. For example, one simple resolution strategy is that if site 100 had affinity for a particular byte range within a file, then any changes that site 100 made during the communication loss would be the resolved value for that byte range.
  • Resolution is performed at the directory entry level in the case of changes to a directory. For example, last file accessed time can be set to the latest of all the file accessed times across all sites.
  • even where affinities permit only one site to modify any particular data set, there may be a possibility of inconsistency of free block allocation.
  • free blocks are used from a free data-segment list 1100 .
  • the sites coordinate using file system software to use the free data-segment list in a non-conflicting manner.
  • during communication loss, however, it is possible that the two sites will use the same data-segment.
  • both sites may choose to use LUN block number 435620 to add a block to their respective files.
  • the conflict over LUN block number 435620 can be resolved by moving the block for one of the sites to a different available free block.
  • the data can be moved to a different block number and the file-block-list pointers for the file, e.g., /app/site 200 /data, can be updated to point to the new block number.
  • This change of block number can be physical, e.g., move to another physical block, or logical, e.g., add an extra level of indirection to just change the logical number of the block while maintaining its physical location.
  • each site is given its own list of free blocks, different from the lists of free blocks that are given to all other sites. Each site would then only use blocks from its free block list when allocating new blocks during the course of a communication failure. When the sites are in communication with each other, they would continuously update these per-site free block lists to make sure that each site has an adequate supply of free blocks. This strategy would simplify reconciliation since no two sites would allocate the same blocks. When a site uses up all the blocks on its own per-site free block list, it might make use of other free blocks that are not on that list, and then the reconciliation strategies described earlier could be used. A minimal sketch of this per-site free list approach appears after this list.
  • File changes such as file name changes, new links to files, file deletions and file creation may require more complex resolution procedures. For example, if two or more sites created (or deleted) files in the /tmp directory using some file naming convention (e.g. as described earlier), then those sites would not have created the same file. In that case, the resolved directory would be based on the sum of all the changes made by both sites.
  • Custom resolvers can be utilized if there is the potential that multiple sites made directory changes to the same file.
  • One example of a custom resolver is a resolver for Laissez-Faire affinity. Custom resolvers might require human intervention to decide the best way to handle both sites making changes to the same file or directory.
  • an automated algorithm may be sufficiently aware of the underlying work-flow, file structure and file use conventions to automatically resolve the differences.
  • Automated custom resolvers can work based on the semantics of the data set, e.g., at a file level (and not at a block level) so as to guarantee that the data set (such as a file system) remains consistent, e.g., would pass programs like the Linux command fsck. Similar considerations would apply to other structured data sets such as databases, email systems, CRM systems, key-value pair storage and the like.
  • either or both affinities 108 , 208 and resolutions 175 , 177 are implemented using plug-ins 179 .
  • Each LUN could be assigned a plug-in, instances of which could be integrated into the block servers and invoke callbacks when various events occur. Such events could include file-reads, file-writes and other block level IO commands.
  • the server would use the plug-in associated with the LUN to determine whether to allow the IO or not.
  • a default plug-in could use block level semantics that use bias or quorum voting with the witness to decide whether to allow the IO.
  • Plug-ins would be used, for example, to implement file-oriented affinity, the database-oriented affinity scheme that is outlined below, or some other custom decider for which IOs should be allowed on each site during a split-brain. If the IO is not allowed, then an IO error (data not available) would be sent to the host that issued the IO command. The file system could then react as it always does for failed disks. A plug-in could also be installed and called to resolve changes when communication is resumed; a minimal plug-in sketch appears after this list.
  • affinities and resolutions described above would not require file system modification.
  • at least some of the features are integrated into the file system. For example, it may be advantageous to integrate Free Block resolution into the file system, and also a modification to journal all changes made to the file-system during the loss of communication.
  • affinity rules can be created that are database centric.
  • Database affinities could be created for particular tables, particular rows or columns of tables, and particular meta-data operations such as creating or deleting tables, changing the structure of tables, etc.
  • Particular rows to which an affinity applies can be based on SQL using clauses or more comprehensive regular expressions. For example, one site may be assigned the affinity for all customers with one set of zip codes, while the other site could be assigned the affinity for all customers with a different set of zip codes.
  • a more complex implementation could comprise: a first data storage device associated with a first site, a first copy of a set of data being maintained on the first data storage device; a second data storage device associated with a second site, a second copy of a set of data being maintained on the second data storage device; a third data storage device associated with a third site, a third copy of a set of data being maintained on the third data storage device, and potentially a fourth or more similarly configured storage devices, storage sites and sets of data; an interface for communicating between the sites to maintain consistency of the data; and control logic operative in response to loss of communication between some subset of the sites that results in a partition of the sites into two (or more) groups in which the sites of each group have communication with each other but do not have communication with other sites, and in response to restoration of communication between the groups of sites to resolve differences between the various copies.
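As referenced in the free-block bullet above, the following is a minimal sketch of per-site free block lists, assuming a simple in-memory pool; the class name, replenishment target and block numbering are illustrative assumptions, not the patent's implementation.

    class FreeBlockPool:
        def __init__(self, all_free_blocks):
            self.global_free = list(all_free_blocks)
            self.per_site = {}                    # site id -> private free list

        def replenish(self, site, target=4):
            # While the sites are in communication, keep each private list topped up.
            lst = self.per_site.setdefault(site, [])
            while len(lst) < target and self.global_free:
                lst.append(self.global_free.pop())

        def allocate(self, site):
            # During communication loss a site draws only from its own list.
            lst = self.per_site.get(site, [])
            if lst:
                return lst.pop()
            raise RuntimeError("private list exhausted; defer to reconciliation")

    pool = FreeBlockPool(range(100))
    pool.replenish("site100")
    pool.replenish("site200")
    assert pool.allocate("site100") != pool.allocate("site200")   # never collide

Because each site allocates only from its private list during a communication loss, reconciliation never has to arbitrate between two files claiming the same block.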
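As referenced in the plug-in bullets above, the sketch below assumes a plug-in interface with an allow_io callback consulted on each IO during a split-brain and a resolve callback invoked when communication resumes; the class and method names are assumptions, since the patent does not specify an API.

    class AffinityPlugin:
        def allow_io(self, site, data_id, op):    # consulted on each IO event
            raise NotImplementedError
        def resolve(self, site_changes):          # invoked when links are restored
            raise NotImplementedError

    class FileAffinityPlugin(AffinityPlugin):
        def __init__(self, owners):
            self.owners = owners                  # data id -> site holding affinity

        def allow_io(self, site, data_id, op):
            # A disallowed IO would surface as an IO error (data not available).
            return self.owners.get(data_id) == site

        def resolve(self, site_changes):
            # With at most one writer per data id, the owner's changes win outright.
            merged = {}
            for site, changes in site_changes.items():
                for data_id, value in changes.items():
                    if self.owners.get(data_id) == site:
                        merged[data_id] = value
            return merged

    plugin = FileAffinityPlugin({"/dirA": "site100", "/dirB": "site200"})
    assert not plugin.allow_io("site200", "/dirA", "write")
    print(plugin.resolve({"site100": {"/dirA": "v2"}, "site200": {"/dirB": "v7"}}))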

Abstract

In a distributed storage system in which a first copy of a set of data such as a virtualized LUN is maintained by a first site, and a second copy is maintained by a second site, access is provided to both the first site and the second site during loss of communication between the first site and the second site. Affinity determines access privileges for each site, where affinity is an indication that a particular site has or should be given permission to perform an operation to a particular part of the data set. Affinities can be specified such that permissions are non-overlapping, or overlap in a manner which is acceptable and resolvable.

Description

FIELD OF THE INVENTION
This invention is generally related to distributed data storage systems, and more particularly to responding to loss of communication in such systems.
BACKGROUND OF THE INVENTION
A distributed storage system includes multiple sites which are in communication via a network. A separate copy of the same set of data may be maintained at each one of the sites. In one implementation of such a system, the data set represents a logical unit of storage (LUN) which can be presented to clients at the different sites. A client associated with a particular site may access the virtualized LUN by performing IOs on data maintained at the associated site. Changes made to data at the associated site are communicated to other sites to prompt corresponding changes which help maintain consistency of the copies of the data set across sites. Procedures may be used to prevent contemporaneous changes by different clients. For example, a portion of data such as a range of blocks can be locked down across sites so that two clients are prevented from contemporaneously modifying that data in a manner which creates inconsistency.
A problem can arise when there is a loss of communication between sites. Loss of communication can occur for various reasons, including but not limited to physical breakage of communication media (e.g., accidentally cutting a fiber), node failure, extreme weather, routine maintenance, and/or intentional scheduling of communication availability. Although loss of communication is usually temporary, there is a possibility that two or more clients at different sites could modify the same portion of data while procedures for maintaining consistency are rendered inoperative by the communication loss. For example, if two different sites were to write to the virtualized LUN then the two sites might write incompatible data. Similarly if one site were to write to the virtualized LUN and a different site were to read that data it could read stale data. Consequently, criteria such as a “bias” setting or a “vote” provided by a third “witness” site may be used to limit access to the LUN while inter-site communication is not possible. In particular, during communication breakage, only one site is permitted to access the LUN. Consequently, clients at other sites cannot read or write to the LUN. This helps ensure that data consistency is maintained, but may inhibit client progress at sites which are not provided access to the LUN.
Similarly, when there are three or more sites that are sharing the same set of data, and communication failures occur resulting in a partition of the sites, then various bias, voting, cluster witness or other quorum rules can be used to give write or read access to a subset of nodes that are in communication with each other and to prohibit write or read access from other nodes and thereby maintain data consistency until such time as communication is restored and all copies of the sets of data can be made consistent with each other.
SUMMARY OF THE INVENTION
In accordance with an aspect of the invention an apparatus comprises: a first data storage device associated with a first site, a first copy of a set of data being maintained on the first data storage device; a second data storage device associated with a second site, a second copy of a set of data being maintained on the second data storage device; an interface for communicating between the first site and the second site to maintain consistency of the data; and control logic operative in response to loss of communication between the first site and the second site to provide access to at least a first part of the first copy of the data set at the first site, and provide access to at least a second part of the second copy of the data set at the second site, and in response to restoration of communication between the first site and the second site to resolve differences between the first copy and the second copy.
In accordance with an aspect of the invention an apparatus comprises: a first data storage device associated with a first site, a first copy of a set of data being maintained on the first data storage device; a second data storage device associated with a second site, a second copy of a set of data being maintained on the second data storage device; a third data storage device associated with a third site, a third copy of a set of data being maintained on the third data storage device, and potentially a fourth or more similarly configured storage devices, storage sites and sets of data; an interface for communicating between the sites to maintain consistency of the data; and control logic operative in response to loss of communication between some subset of the sites that results in either a logical or physical partition of the sites into two (or more) groups in which the sites of each group have communication with each other but do not have communication with other sites, and in response to restoration of communication between the groups of sites to resolve differences between the various copies.
In accordance with another aspect of the invention a method of operating a distributed storage system comprises: maintaining a first copy of a set of data by a first site; maintaining a second copy of the set of data by a second site; communicating between the first site and the second site to maintain consistency of the data; in response to loss of communication between the first site and the second site, providing access to at least a first part of the first copy of the data set at the first site, and providing access to at least a second part of the second copy of the data set at the second site; and in response to restoration of communication between the first site and the second site, resolving differences between the first copy and the second copy.
One advantage of some aspects of the invention is that access to a data set can be provided to more than one site during loss of communication. This advantage is realized by using affinity to determine access privileges for each site, where affinity is an indication that a particular site has or should be given permission to perform an IO (Input/Output) operation to a particular part of the data set. Affinities can be specified such that permissions are non-overlapping, or overlap in a manner which is acceptable and resolvable when communication is restored.
Another advantage of some aspects of the invention is that differences between copies of data sets can be automatically resolved. In other words, a data storage device can be “repaired” upon inspection of read and write activity that occurred at each site following restoration of communications. This may be accomplished using plug-in resolvers.
Other features and advantages of the invention will be more apparent from the detailed description and the drawings.
BRIEF DESCRIPTION OF THE FIGURES
FIG. 1 illustrates a simplified distributed data storage system in which a virtualized LUN is maintained across multiple sites.
FIG. 2 illustrates steps for using affinity in a distributed storage system.
FIG. 3 illustrates affinity in an unstructured data environment.
FIG. 4 illustrates affinity in a structured data environment.
FIG. 5 illustrates real-time affinity changes.
FIG. 6 illustrates assignment of affinities to different parts of a file.
FIG. 7 illustrates Read Only affinity.
FIG. 8 illustrates Laissez Faire affinity.
FIG. 9 illustrates Read Laissez Faire affinity.
FIG. 10 illustrates data resolution.
FIG. 11 illustrates resolution of free block allocation inconsistency.
DETAILED DESCRIPTION
Aspects of this description are simplified to facilitate understanding of the principles and conceptual features of the invention by those of ordinary skill in the art. No attempt is made to show structural aspects in more detail than is necessary for a fundamental understanding. Furthermore, a simplified example with two sites is shown, but the principles and conceptual features described herein are not limited to two sites. Furthermore, multiple sites may be partitioned into two or more sets or groups of sites, where each set or group of sites retains communication with each other, but every two sets or groups of sites have no (or insufficient) communication with each other. Hence, the terms “group,” “set” and “site” should be broadly interpreted to include sites and partitions of sites. Furthermore, although files are used in the description, the principles and conceptual features described herein are applicable to any type of “set of data.”
Aspects of the invention may be implemented wholly or partly using computer program code. The program code may or may not operate in a distributed manner. However it is implemented, the program code will be stored in computer-readable non-transitory memory and utilized by processing hardware.
FIG. 1 illustrates a simplified distributed data storage system in which a virtualized LUN 99 is maintained across at least two sites (site 100 and site 200) which are in communication via a network 97. Physical data storage resources 102, 202 at each site maintain separate copies of the data associated with the LUN. Clients 104a-104n and 204a-204n associated with the sites are able to access the data storage resources, and thus the LUN, via server nodes 106, 206. The server nodes may include block servers, and one server node might support multiple sites. Moreover, the server nodes may be integrated with other devices. The servers maintain communication via the network in order to maintain data consistency. For example, the servers lock down portions of data that have been “checked out,” signal an indication of local data modifications to other sites, and mirror local changes signaled from other sites.
Referring to FIGS. 1 and 2, in contrast with problematic prior art systems which only allow one site to access the virtualized LUN during loss of inter-site communication, access to the illustrated LUN 99 is shared between sites during loss of inter-site communication. In particular, upon detecting a loss of communication at step 210, “affinity” 108, 208 is used to determine access privileges at step 212. Affinity is an indication that a particular site has or should be given permission to perform an IO to a particular set of data, including parts thereof, e.g., device-blocks; files or directories in a file system (or byte ranges in a file, or subsets of entries in a directory), tables, rows or columns in a database; particular keys in a key-value store; specific mailboxes in an email system, specific customer accounts in a customer relationship management (CRM) system; specific objects in any object based store; and the like or equivalent during a period of communication loss. In what follows, we will use the word data-segment to mean an identifiable piece of data such as those listed in the prior sentence, and range-of-data-segments to mean a contiguous set of data under some metric of contiguity. Affinity is orthogonal to traditional access permissions (ACLs) because in at least one embodiment it is only used during communication loss scenarios. The servers 106, 206 are adapted to utilize affinity to enforce access privileges for clients during periods of loss of inter-site communication. Consistency between records of affinity is maintained using any of the various techniques for maintaining data consistency when inter-site communication is available. A resolution step 214 is performed when communication is restored in order to restore inter-site data consistency, if necessary. Techniques for detecting loss and restoration of communication are well understood in the art and will not be described further. Using affinity to determine access privileges and performing resolution are described in greater detail below.
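The gating role of affinity can be sketched in a few lines of code. The following Python fragment is only an illustration under assumed names (AffinityRecord, is_io_allowed, the Access flags); the patent does not prescribe any particular data structure. The point it shows is that affinity is consulted only while inter-site communication is lost, and ordinary consistency mechanisms govern otherwise.

    from dataclasses import dataclass, field
    from enum import Flag, auto

    class Access(Flag):
        NONE = 0
        READ = auto()
        WRITE = auto()

    @dataclass
    class AffinityRecord:
        # (data identifier, site id) -> access allowed during communication loss
        grants: dict = field(default_factory=dict)

        def allowed(self, data_id, site):
            return self.grants.get((data_id, site), Access.NONE)

    def is_io_allowed(affinity, link_up, site, data_id, op):
        # Affinity is orthogonal to ordinary ACLs: it is consulted only while
        # inter-site communication is lost; otherwise the normal distributed
        # consistency protocol (locking, mirroring) governs the IO.
        if link_up:
            return True
        return op in affinity.allowed(data_id, site)

    # Example: site 100 holds Read/Write affinity to a block range, site 200 Read only.
    aff = AffinityRecord({("blocks:0-1023", "site100"): Access.READ | Access.WRITE,
                          ("blocks:0-1023", "site200"): Access.READ})
    assert not is_io_allowed(aff, link_up=False, site="site200",
                             data_id="blocks:0-1023", op=Access.WRITE)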
In general, affinity may be based on either or both of manual input and automated calculations. In a simple example, IT administrators can explicitly set the affinity of a particular node(s) to particular sets of data. It is also contemplated that affinity may be determined based on rules associated with a workflow management system 95. For example, and without limitation, workflow management system rules can be translated into an affinity map or a set of affinity rules. For example, if it is known that one site works with the data during daylight hours and the other site works with the data during night time hours, then affinity can automatically transfer based on these workflow management rules. Another example could be using the customer's country code in a CRM to place affinity with the primary server that is serving customers in that country.
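As an illustration of translating workflow-management rules into an affinity rule, the following hedged sketch encodes the two examples just given (country-code routing in a CRM and a working-hours hand-off). The function name, site identifiers and rule table are assumptions, not part of the patent.

    from datetime import time

    # Hypothetical affinity rules derived from workflow-management knowledge.
    def affinity_from_rules(record, now):
        # Rule 1: a CRM customer's country code decides the primary site.
        by_country = {"US": "site100", "DE": "site200"}
        if record.get("country") in by_country:
            return by_country[record["country"]]
        # Rule 2: otherwise affinity follows the site that is in working hours.
        return "site100" if time(8) <= now < time(20) else "site200"

    print(affinity_from_rules({"country": "DE"}, time(10)))   # site200
    print(affinity_from_rules({"id": 42}, time(23)))          # site200 (night site)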
Referring to FIGS. 1 and 3, affinity may be determined based on previous IOs. For each data-segment or range of data-segments of storage, a record may be maintained of which site has most recently performed either or both of a Read and a Write, e.g., W100 indicates a Write by site 100 in FIG. 3. The record might include a map in which new entries overwrite old entries for a given data-segment. Consequently, if a range of data segments 300 that was marked as having been written by site 100 is subsequently written by site 200, the map will replace the indication (W100) of the Write by site 100 with the indication (W200) of the Write by site 200. Affinity can be determined based on the mapping. For example, the site which most recently performed a Write to a data segment or range of data segments may be granted affinity, e.g., such that it is the only site permitted to Read/Write to that data segment during communication loss. Alternatively, only the site which most recently performed either a Read or a Write is granted affinity, e.g., permitted to Read/Write to that segment or range of segments during communication loss. Alternatively, Read and Write permissions can be kept separate: the site which most recently performed a Write would be granted Write affinity, and the site which most recently performed a Read would be granted Read affinity. The site that was granted Write affinity may also be granted Read affinity, especially if its Write patterns are Read-Modify-Write cycles, as opposed to writes-at-the-end-of-the-data-set type writes, or “obliviously-replace-content-without-looking-at-the-prior-content type writes.” However, affinity does not necessarily imply exclusive access. For example, other sites may or may not be permitted to perform certain IOs for that data segment or range of data segments, e.g., depending on whether the possibility of conflicting Writes or Reads of stale data presents a greater risk than temporarily disabling Writes or Reads.
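A minimal sketch of such a last-IO record follows, assuming the policy in which the most recent writer receives Read/Write affinity and untouched segments fall back to a default owner. The dictionary layout and function names are illustrative only.

    last_io = {}                        # segment id -> ("R" or "W", site)

    def record_io(segment, op, site):
        last_io[segment] = (op, site)   # new entries overwrite old entries

    def write_affinity(segment, default_site):
        op, site = last_io.get(segment, (None, default_site))
        # The most recent writer gets Read/Write affinity; segments with no
        # recorded Write fall back to a configured default owner.
        return site if op == "W" else default_site

    record_io(300, "W", "site100")
    record_io(300, "W", "site200")      # W200 replaces W100 for data segments 300
    assert write_affinity(300, "site100") == "site200"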
In a variation, ranges of data-segments that share the same affinity can be identified using a clustering algorithm that identifies ranges of data-segments likely to have the same IO usage profile.
In a variation of the embodiment described above, stale entries are timed out. For example, a Read or a Write entry becomes a “not accessed” entry after a predetermined period of time without further IO activity. A data-segment or range of data-segments 302 marked “not accessed” may default during communication loss to exclusive Write or Read/Write by a particular site, may be available for Reads by all sites but Writes by only one site, or may be available for Reads and Writes by all sites. For example, Reads and Writes by all sites might be permitted if there is a relatively low likelihood of conflicting Writes to a data-segment which is rarely accessed. Further, there may be a possibility of resolving the conflicting Writes if they occur. Those skilled in the art will readily appreciate which variations are more suitable for particular scenarios based on factors such as what the data represents and how it is used.
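A hedged sketch of the time-out behavior described above; the 24-hour threshold and the fallback policy are assumptions, not values taken from the patent:

```python
# Minimal sketch (illustrative): timing out stale last-IO entries so that a
# rarely accessed segment falls back to a default policy during a partition.
import time

STALE_AFTER = 24 * 3600  # assumed 24-hour timeout

def effective_affinity(entry, default_policy="read-write-all"):
    """entry is (site_id, timestamp) or None; returns a site id or a default policy."""
    if entry is None:
        return default_policy
    site_id, ts = entry
    if time.time() - ts > STALE_AFTER:
        return default_policy      # entry has become "not accessed"
    return site_id

print(effective_affinity(("site-100", time.time())))          # site-100
print(effective_affinity(("site-100", time.time() - 90000)))  # read-write-all
```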
In another variation of the embodiment described above, data-segments are grouped and affinity is applied to groupings of data-segments. Groupings can be calculated based on clustering algorithms which are known for predicting associations between IOs. For example, it may be calculated that multiple Writes by a particular site are associated, e.g., with a particular file or directory. Furthermore, it may be calculated that other data-segments which have not been subject to Writes are also associated with the group, e.g., based on position. Consequently, exclusive affinity may be given to a particular site for a group of data-segments 304 that includes one or more data-segments marked with most recent Reads or Writes and also one or more data-segments which are marked in some other manner but calculated to be associated with the grouping.
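One plausible, simplified way to compute such groupings is by positional adjacency, as sketched below; a real system might use a richer clustering model, and the gap parameter here is purely illustrative:

```python
# Minimal sketch (illustrative): grouping written data-segments with nearby,
# not-yet-written segments by simple positional adjacency, so affinity can be
# granted to a whole group such as group 304.
def group_segments(written_blocks, gap=8):
    """written_blocks: block numbers recently written by one site.
    Returns (start, end) ranges covering the writes plus nearby blocks."""
    groups = []
    for b in sorted(written_blocks):
        if groups and b - groups[-1][1] <= gap:
            groups[-1][1] = b + gap          # extend the current group
        else:
            groups.append([max(0, b - gap), b + gap])
    return [tuple(g) for g in groups]

print(group_segments([100, 103, 150]))   # [(92, 111), (142, 158)]
```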
Referring now to FIGS. 1 and 4, information about the structure of the data may be used to determine affinity in a manner which further enhances data processing/workflow progress during communication loss. For example, a filesystem may be mounted on the LUN and each site may be granted affinity to particular files or directories. Consequently, clients 104a, 204a at different sites 100, 200 working in different directories 400, 401, or in different files in the same directory, are able to continue to do so even after loss of communication by having affinity to those directories or files. However, the two clients working in different directories might be prevented from transferring files between those directories or taking other actions which would create inconsistencies during a period of communication loss.
In one embodiment affinity to subdirectories or files may be inherited. In particular, in a hierarchical data structure (such as a file system hierarchy), affinity is inherited by data below the level at which affinity is directly granted. Further, inherited affinity may be overridden by a different affinity, e.g., directly granted affinity. For example, site 100 has direct affinity to directory 400 and inherited affinity to directories 403, 405, and the inherited affinity of site 100 to directory 405 is overridden by a direct assignment of affinity to site 200.
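A small sketch of inherited affinity with override, resolved by walking up the directory path until a direct grant is found; the directory names and grants are assumptions mirroring the 400/405 example:

```python
# Minimal sketch (illustrative): a direct grant overrides anything inherited
# from a directory higher in the hierarchy.
import posixpath

direct_grants = {                 # assumed direct assignments
    "/dir400": "site-100",
    "/dir400/dir405": "site-200",
}

def resolve_affinity(path):
    while True:
        if path in direct_grants:
            return direct_grants[path]
        parent = posixpath.dirname(path.rstrip("/"))
        if parent == path or parent == "":
            return None           # no affinity anywhere on the path
        path = parent

print(resolve_affinity("/dir400/dir403/file.txt"))   # site-100 (inherited)
print(resolve_affinity("/dir400/dir405/file.txt"))   # site-200 (override)
```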
Referring to FIGS. 1 and 5, affinity may be changed in real time based on criteria other than the most recent Read/Write. For example, if some particular type of file is routinely created by site 100 and subsequently modified by site 200, then site 200 could be provided with affinity to a file of that type before site 200 performs any Reads or Writes to the file. Similarly, if some file or directory 400 is used by site 100 during a first daily time period and by site 200 during a second daily time period, affinity can be changed to match those usage patterns. An indication that a particular set of data will be handled in a manner suitable for such a change of affinity may be gained, for example and without limitation, from the work-flow management system 95, or by tracking and granting access dynamically during connected operation.
Referring to FIGS. 1 and 6, affinity can also, or alternatively, be applied at a granularity such as one or more byte ranges in a file. In certain situations, e.g., “big-data” scenarios such as film editing, the same file is processed by multiple sites. Often, the file is relatively large and clients at different sites process different parts of the file. For example, site 100 might be colorizing the 3rd-7th scenes of a movie file 603 at range 600 while site 200 is adding musical scoring to the 10th-12th scenes of the movie at range 601. A mutual locking mechanism, and/or change control system, is used to avoid conflicting IOs when inter-site communication is available. For example, a client associated with site 100 would “checkout” the scenes to be colorized. In such situations, progress during communication loss may be facilitated by linking affinity to the “checkout” process. For example, site 100 would have affinity to the byte range(s) 600 associated with the 3rd-7th scenes of the movie. Similarly, site 200 would have affinity to the byte range(s) 601 associated with the 10th-12th scenes because the client at site 200 would “checkout” those scenes for musical scoring. Consequently, progress on at least the checked-out data is possible during communication loss. Other levels of granularity which are unambiguously understood can alternatively be used. For example, one site may be allowed to modify the first ten paragraphs of a document while the other site may be allowed to modify all subsequent paragraphs. Similarly, the system may allow any site to check out scenes that were not checked out when the communication loss occurred, on the premise that the likelihood of more than one site performing unreconcilable edits on the same scene at the same time is negligible, that the two sites may coordinate their work using other mechanisms such as telephone-based coordination, or that, in the worst case, the work of all but one of the sites will have to be redone but at least some work will be salvageable and not all of the employee time during the communication loss will be wasted.
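The link between checkout and byte-range affinity could look roughly like the following sketch; the byte offsets, site names, and half-open-interval convention are assumptions for illustration only:

```python
# Minimal sketch (illustrative): tying byte-range affinity to a checkout step,
# so only the site that checked out a range may write to it during a partition.
class ByteRangeCheckout:
    def __init__(self):
        self._checkouts = []   # list of (start, end, site_id), end exclusive

    def checkout(self, start, end, site_id):
        for s, e, _ in self._checkouts:
            if start < e and s < end:
                raise ValueError("range already checked out")
        self._checkouts.append((start, end, site_id))

    def may_write(self, offset, site_id):
        for s, e, owner in self._checkouts:
            if s <= offset < e:
                return owner == site_id
        return False           # not checked out: disallow (assumed policy)

movie = ByteRangeCheckout()
movie.checkout(3_000_000, 7_000_000, "site-100")    # scenes being colorized
movie.checkout(10_000_000, 12_000_000, "site-200")  # scenes being scored
print(movie.may_write(5_000_000, "site-100"))       # True
print(movie.may_write(5_000_000, "site-200"))       # False
```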
Referring to FIG. 7, in another embodiment one or more files or directories are assigned “Read Only” affinity 700. Files and directories with Read Only affinity are made available for Reads by all sites. Read Only affinity can be applied, for example, to files and directories that are not being actively modified. Alternatively, it may be applied as the default mode for the entire file system by assigning it to the root directory of the file system. Further, Read Only affinity can mean that any site may read the data at any time, e.g., during loss of communication and also when inter-site communication is available. Typically, the system is configured such that Read Only affinity directly applied to a file or directory, or inherited downward, is overridden by direct or inherited affinities to specific sites.
In various other embodiments described below, different affinities may be applied contemporaneously, i.e., blended. When blended affinity is applied to a data set, different sites have different access privileges to that same data set. In other words, multiple affinities are applied to the same data set. Blended affinity is advantageous in situations where conflicts, or the possibility of reading stale data, are either negligible, tolerable or easily reconciled.
Referring to FIG. 8, one type of blended affinity is Laissez Faire affinity 800. Laissez Faire affinity allows all sites to perform IOs to a data set, e.g., read, create, append and modify of a file or directory. This advantageously facilitates progress at multiple sites, but there is a risk both of stale data and of loss of semantic consistency of the data. Further, create operations performed in parallel can lead to two different files being created with the same name or purpose. Still further, append operations may allocate the same logical region of a disk to two separate files. Consequently, Laissez Faire affinity will typically be applied only in certain situations. For example, it is suitable for situations where data corruption is unlikely to occur, or where all data consistency conflicts are likely to be resolvable, e.g., where work-flow guarantees or other mechanisms ensure that the data can be resolved once communication is restored. In the illustrated example, Laissez Faire affinity 800 is applied to a temporary directory (e.g., “/tmp”) where each site creates files using unique identifiers (UIDs, such as their IP address) as part of their file names. This application convention can help guarantee that the two sites will never create conflicting content. During the reconciliation step following restoration of communication, the two or more versions of the temporary directory can be resolved by adopting the convention that all files with UIDs in their names that exist at any site are placed in the repaired version of the directory.
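A hedged sketch of the UID-based reconciliation convention for such a temporary directory; the file-name format and the use of IP addresses as UIDs are assumptions drawn from the example above:

```python
# Minimal sketch (illustrative): merging per-site views of a Laissez Faire
# /tmp directory by keeping every file whose name embeds a site-unique ID.
def merge_tmp(views, site_uids):
    """views: {site_id: set of filenames seen at that site after the partition}
    site_uids: substrings (e.g., IP addresses) that mark a file as site-created."""
    merged = set()
    for filenames in views.values():
        for name in filenames:
            if any(uid in name for uid in site_uids):
                merged.add(name)
    return merged

views = {
    "site-100": {"scratch-10.0.0.1-a.dat", "scratch-10.0.0.1-b.dat"},
    "site-200": {"scratch-10.0.0.2-a.dat"},
}
print(sorted(merge_tmp(views, ["10.0.0.1", "10.0.0.2"])))
```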
A special procedure may be used where stale data is still in use when communication is restored. For example, if site 100 made changes to a file while site 200 had Read affinity which it used to read the file, site 200 would be using stale data. If the application at site 200 is still reading that file when communication is restored, then a snapshot of the stale data may continue to be used by the process at site 200. The snapshot would be deleted after the process at site 200 completes. This ensures that site 200 works with a view of the data that is consistent with itself.
FIG. 9 illustrates a Read-Laissez-Faire variation of blended affinity. Read-Laissez-Faire affinity 900 allows one site to have Write affinity to a data set such as a file or directory, and at least one other site to have Read affinity to the data set. For example, site 100 could have Read/Write permission to directory 400 while site 200 has Read permission to directory 400 (based on Read Laissez Faire affinity 900). It will be appreciated that site 200 will Read stale data if a Read of directory 400 is performed after a Write to directory 400 by site 100, where both IOs occur during loss of communication. However, it will also be appreciated that such a risk may be appropriate in certain situations. For example, a risk of reading stale data may be acceptable when the stale data is still valid for an extended period of time, when the data is rarely subject to Writes, or when the Writes are unlikely to be associated with critical changes. Examples of such data include social-networking elements of some web-sites. For example, it may be acceptable if an Internet retailer shows potentially stale versions of product-descriptions or product-reviews to customers, even if those product descriptions or product reviews have been updated on some other site during a loss of communication, because product-descriptions or product-reviews may be unlikely to be associated with critical changes. Sites would have to determine what a “critical change” is. For example, some Internet retailers may consider price changes to be a “critical change”, while others may be willing to honor stale pricing during the duration of a communication loss.
A variation of Read-Laissez-Faire affinity is timed expiration of the Read-Laissez-Faire affinity, which causes the Read portion of the affinity to expire after a predetermined period of time. For example, Reads by sites other than the site with Write affinity might be configured to expire 24 hours after a communication loss is detected. This allows sites with Read affinity to progress using potentially stale data for the predetermined period of time. If communication is not restored by the end of this period, the Read affinities lapse and those sites can no longer perform Reads of the data set. However, the site with Read/Write affinity may continue to perform Reads and Writes.
In some embodiments the affinity determination based on previous IOs described above can be applied when the data structure is known. For each file or directory, for example, a record may be maintained of which site has most recently performed either or both a Read and a Write. The record might include a map or table in which new entries overwrite old entries for a given data set such as a file or directory. Affinity is determined based on the record. For example, the site which most recently performed a Write to the file or directory may be the only site permitted to Read/Write to that file or directory during communication loss. Alternatively, only the site which most recently performed either a Read or a Write of the file or directory is permitted to Read/Write to that file or directory during communication loss. Furthermore, stale entries may be timed out.
It should be noted that the operations associated with affinity to directories are not necessarily the same as the operations associated with affinity to files. Primary operations associated with directories include read-directory (and associated information), create file, delete file, possibly rename file, and access or modify file metadata, whereas the primary operations associated with files include Read and Write or, in a more sophisticated environment, sequential reads, random reads, append, modify and erase/delete. It will be appreciated that the separate treatments of Read and Write affinities described herein could be applied to the operations associated with directories. For example, affinity for the create, delete, change-file-attribute and rename operations could reside with one site while read-directory could be shared.
In another variation, affinity restrictions are applied to affinity relationships. Affinity restrictions can limit affinities in terms of client ID, filename, directory name, time, and any other expression that can be evaluated. For example, Laissez Faire affinity on the /tmp directory 800 (FIG. 8) could be subject to a scope restriction under which site 100 is given Laissez Faire affinity for any file that matches the pattern /tmp/site1?* while site 200 is given Laissez Faire affinity for all other files. Further, the affinity or affinities for opening a file or directory can be restricted to specific times.
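As an illustration of such a scope restriction, the sketch below uses Python's fnmatch shell-style wildcards as a stand-in for the pattern syntax in the /tmp/site1?* example; the mapping of non-matching files to the other site is an assumption:

```python
# Minimal sketch (illustrative): scoping a Laissez Faire grant with a filename pattern.
from fnmatch import fnmatch

def laissez_faire_owner(path):
    """Hypothetical restriction: files matching /tmp/site1?* fall under
    site-100's grant; everything else under /tmp falls under site-200's."""
    if fnmatch(path, "/tmp/site1?*"):
        return "site-100"
    return "site-200"

print(laissez_faire_owner("/tmp/site1a.log"))   # site-100
print(laissez_faire_owner("/tmp/report.csv"))   # site-200
```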
Referring to FIGS. 2 and 10, the resolution step 214 is performed when communication is restored in order to provide inter-site data consistency. In particular, the data that exists at the sites is reconciled to recreate a single view of that data, e.g., the virtual LUN. Resolver software may utilize file-system semantics (merge of discrete updates) and affinities to perform resolution. In a case where affinities limit the number of sites that can possibly modify any particular data set to at most one site, any changes that that site made are copied to the other sites. For example, one simple resolution strategy is that if site 100 had affinity for a particular byte range within a file, then any changes that site 100 made during the communication loss become the resolved value for that byte range.
Resolution is performed at the directory entry level in the case of changes to a directory. For example, the last-accessed time of a file can be set to the latest of that file's accessed times across all sites.
Other resolution strategies are also possible. For example, in some cases, reading a time-order-merged journal of the two or more sites and then processing that journal would work. The merged journal may end up reporting some steps that could not be implemented, for example if one side deleted some information and then the other side tried to modify it. Still, such an approach would give a concise list of what could be merged and what could not. Automated or manual intervention could be used to resolve those last outstanding items. Journaled approaches could work for many kinds of data sets including files, databases and others. Items that were repeated in both journals (for example, deletions) could be merged; it is likely acceptable if both sides deleted the same item. The journal could either be explicitly created during the communication outage or implicitly constructed during the resolution phase after communication has been restored. Similarly, in many database-like situations where there is a 1-to-N or N-to-N relationship between entities in the database, and items were added or deleted on the N side of the relationship, the resolution can simply encompass all the changes. An example is email, which is a 1-to-N relationship between a mailbox and the emails within it: any changes made by either side would be accepted. An additional possible feature of the resolving software is that, through a plug-in or an eventing mechanism, software that had used the data could be informed of changes that may have been “retroactive” and be given the opportunity to handle those situations. This allows software to fix problems that were caused by its use of stale data.
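A minimal sketch of a time-order-merged journal replay that flags steps which cannot be implemented (such as a modify after a delete); the record format and operation names are assumptions:

```python
# Minimal sketch (illustrative): merging per-site journals by timestamp and
# reporting entries that cannot be replayed.
import heapq

def merge_and_replay(journals):
    """journals: list of per-site lists of (timestamp, op, item) records,
    each list already in time order. Returns (applied, conflicts)."""
    merged = heapq.merge(*journals)          # global time order
    deleted, applied, conflicts = set(), [], []
    for ts, op, item in merged:
        if op == "delete":
            deleted.add(item)                # duplicate deletes merge harmlessly
            applied.append((ts, op, item))
        elif op == "modify" and item in deleted:
            conflicts.append((ts, op, item)) # cannot modify a deleted item
        else:
            applied.append((ts, op, item))
    return applied, conflicts

site100 = [(1, "modify", "fileA"), (5, "delete", "fileB")]
site200 = [(3, "delete", "fileB"), (7, "modify", "fileB")]
applied, conflicts = merge_and_replay([site100, site200])
print(conflicts)   # [(7, 'modify', 'fileB')] -> needs manual or custom resolution
```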
Referring to FIG. 11, even in the case where affinities permit only one site to modify any particular data set, there may be a possibility of inconsistency in free block allocation. When two sites append to existing files or create new files, free blocks are allocated from a free data-segment list 1100. When communication is available the sites coordinate, using file system software, to consume the free data-segment list in a non-conflicting manner. However, during loss of communication it is possible that the two sites will use the same data-segment. For example, if site 100 appends to file /app/site100/data and site 200 appends to file /app/site200/data, both sites may choose to use LUN block number 435620 to add a block to their respective files. This conflict can be resolved by moving the block for one of the sites to a different available free block. In particular, the data can be moved to a different block number and the file-block-list pointers for the file, e.g., /app/site200/data, can be updated to point to the new block number. This change of block number can be physical, e.g., a move to another physical block, or logical, e.g., adding an extra level of indirection that changes only the logical number of the block while maintaining its physical location.
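The block-relocation fix could be sketched as follows; the dictionaries standing in for the LUN and the file-block list, and all block numbers other than 435620, are assumptions:

```python
# Minimal sketch (illustrative): resolving a doubly allocated block found during
# reconciliation by moving one site's copy of that block to a fresh free block
# and repointing the affected file's block list.
def resolve_block_conflict(block_list, conflicted_block, site_copy_of_block, free_blocks, lun):
    """block_list: mutable list of block numbers for e.g. /app/site200/data;
    site_copy_of_block: the bytes that site wrote into the conflicted block;
    free_blocks: set of block numbers known to be free at every site;
    lun: dict simulating block storage {block_number: bytes}."""
    new_block = free_blocks.pop()
    lun[new_block] = site_copy_of_block                          # relocate the data
    block_list[block_list.index(conflicted_block)] = new_block   # update the file-block-list pointer
    return new_block

lun = {435620: b"site-100 data"}                 # block kept by the other file
free = {900001, 900002}
site200_blocks = [10, 11, 435620]
moved_to = resolve_block_conflict(site200_blocks, 435620, b"site-200 data", free, lun)
print(moved_to in lun, site200_blocks)           # True, [10, 11, <new block>]
```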
Another strategy is to give each site its own list of free blocks, different from the lists of free blocks given to all other sites. Each site would then only use blocks from its own free block list when allocating new blocks during the course of a communication failure. When the sites are in communication with each other, they would continuously update these per-site free block lists to make sure that each site has an adequate supply of free blocks. This strategy simplifies reconciliation since no two sites allocate the same blocks. If a site uses up all the blocks on its own per-site free block list, it might make use of other free blocks that are not on its list, and then the reconciliation strategies described earlier could be used.
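A simple sketch of pre-partitioning the free list into disjoint per-site pools; the round-robin split is an assumed policy, not one prescribed by the patent:

```python
# Minimal sketch (illustrative): pre-partitioning the free list so each site
# allocates only from its own pool during a partition, avoiding block conflicts.
def split_free_list(free_blocks, site_ids):
    """Round-robin the global free list into disjoint per-site pools."""
    pools = {s: [] for s in site_ids}
    for i, block in enumerate(sorted(free_blocks)):
        pools[site_ids[i % len(site_ids)]].append(block)
    return pools

pools = split_free_list(range(435620, 435628), ["site-100", "site-200"])
print(pools["site-100"])   # [435620, 435622, 435624, 435626]
print(pools["site-200"])   # [435621, 435623, 435625, 435627]
```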
File changes such as file name changes, new links to files, file deletions and file creations may require more complex resolution procedures. For example, if two or more sites created (or deleted) files in the /tmp directory using a file naming convention such as the one described earlier, then those sites would not have created the same file. In that case, the resolved directory would be based on the sum of all the changes made by the sites. Custom resolvers can be utilized if there is the potential that multiple sites made directory changes to the same file. One example of a custom resolver is a resolver for Laissez-Faire affinity. Custom resolvers might require human intervention to decide the best way to handle both sites making changes to the same file or directory. Alternatively, an automated algorithm may be sufficiently aware of the underlying work-flow, file structure and file use conventions to automatically resolve the differences. Automated custom resolvers can work based on the semantics of the data set, e.g., at a file level (and not at a block level), so as to guarantee that the data set (such as a file system) remains consistent, e.g., would pass programs like the Linux command fsck. Similar considerations apply to other structured data sets such as databases, email systems, CRM systems, key-value stores and the like.
It will be appreciated by those skilled in the art that certain modifications may be desirable depending on the networked storage environment. For example, it is desirable that the file system is in, or can be placed in, a consistent state when communication loss occurs. OCFS2 (Oracle Cluster File System 2), NFS (Network File System), CIFS (Common Internet File System) and others support this functionality. However, modifications might be desirable if the functionality were not supported. Further, some of the variations described herein assume that the file system has a tree-based naming hierarchy. However, POSIX-compliant file systems generally do not strictly obey this rule, e.g., a file may exist in multiple locations within the file system due to “hard links” to its inode. Consequently, variations such as affinity inheritance by sub-directories and files can lead to ambiguous situations. Modifications, or use of different variations, might therefore be selected in such situations.
Referring again to FIG. 1, as mentioned, in another embodiment either or both of the affinities 108, 208 and the resolutions 175, 177 are implemented using plug-ins 179. Each LUN could be assigned a plug-in, instances of which could be integrated into the block servers and invoke callbacks when various events occur. Such events could include file-reads, file-writes and other block level IO commands. When a loss of communication occurs, the server would use the plug-in associated with the LUN to determine whether to allow the IO. A default plug-in could use block level semantics that rely on bias or quorum voting with the witness to decide whether to allow the IO. Alternative plug-ins could be used, for example, to implement file-oriented affinity, a database-oriented affinity scheme such as the one outlined below, or some other custom decider for which IOs should be allowed at each site during a split-brain. If the IO is not allowed, then an IO error (data not available) would be sent to the host that issued the IO command. The file system could then react as it always does for failed disks. A plug-in could also be installed and called to resolve changes when communication is resumed.
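A hedged sketch of what such a per-LUN plug-in interface might look like, including a default bias-style decider; the class and method names are invented for illustration and are not an actual product API:

```python
# Minimal sketch (illustrative): a per-LUN plug-in with callbacks invoked on IO
# and on resolution after communication is restored.
class AffinityPlugin:
    def allow_io(self, site_id, op, segment_id, link_up):
        raise NotImplementedError
    def resolve(self, site_views):
        raise NotImplementedError

class DefaultBiasPlugin(AffinityPlugin):
    """Block-level semantics: a single preferred (bias) site wins during a partition."""
    def __init__(self, bias_site):
        self.bias_site = bias_site
    def allow_io(self, site_id, op, segment_id, link_up):
        return link_up or site_id == self.bias_site
    def resolve(self, site_views):
        return site_views[self.bias_site]

plugin = DefaultBiasPlugin("site-100")
print(plugin.allow_io("site-200", "W", "block-42", link_up=False))  # False -> IO error to host
```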
Some of the affinities and resolutions described above would not require file system modification. However, in an alternative embodiment at least some of the features are integrated into the file system. For example, it may be advantageous to integrate free block resolution into the file system, as well as a modification that journals all changes made to the file system during the loss of communication.
It will be appreciated that the principles described above can be generalized to data formats other than file systems. For example, if a LUN hosts a database then affinity rules can be created that are database-centric. Database affinities could be created for particular tables, particular rows or columns of tables, and particular meta-data operations such as creating or deleting tables, changing the structure of tables, etc. The particular rows to which an affinity applies can be selected using SQL clauses or more comprehensive regular expressions. For example, one site may be assigned affinity for all customers with one set of zip codes, while the other site could be assigned affinity for all customers with a different set of zip codes. Other plug-ins could be created for Microsoft Exchange LUNs or other software that deals directly with a LUN and creates its own structure on that LUN, for example to split affinity based on the last known geographic location of a mailbox owner. In the case of a database, and in other scenarios, “file-system like semantics” would be used to preserve the underlying structural integrity of the data as well as the lower level data-semantic integrity.
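As an illustration of database-centric affinity, the sketch below routes row updates by a zip-code column; the particular zip-code split and the site names are assumptions used only to show the shape of such a rule:

```python
# Minimal sketch (illustrative): database-centric affinity that assigns rows to
# a site based on a column value (here, customer zip code ranges).
def row_affinity(zip_code):
    """Hypothetical rule: zip codes 00000-49999 to site-100, the rest to site-200."""
    return "site-100" if int(zip_code) < 50000 else "site-200"

def may_update_row(site_id, row, link_up):
    return link_up or row_affinity(row["zip"]) == site_id

print(may_update_row("site-100", {"id": 7, "zip": "02139"}, link_up=False))  # True
print(may_update_row("site-100", {"id": 8, "zip": "94105"}, link_up=False))  # False
```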
In view of the teaching above it will be appreciated that a more complex implementation could comprise: a first data storage device associated with a first site, a first copy of a set of data being maintained on the first data storage device; a second data storage device associated with a second site, a second copy of the set of data being maintained on the second data storage device; a third data storage device associated with a third site, a third copy of the set of data being maintained on the third data storage device, and potentially a fourth or more similarly configured storage devices, storage sites and copies of the set of data; an interface for communicating between the sites to maintain consistency of the data; and control logic operative, in response to loss of communication between some subset of the sites that results in a partition of the sites into two or more groups in which the sites of each group have communication with each other but not with the sites of other groups, to use affinity to control access to the copies, and, in response to restoration of communication between the groups of sites, to resolve differences between the various copies.
While the invention is described through the above exemplary embodiments, a wide variety of modifications to and variations of the illustrated embodiments may be made without departing from the inventive concepts herein disclosed. Moreover, while the exemplary embodiments are described in connection with various illustrative structures, the concepts and features may be embodied using a wide variety of specific structures. Accordingly, the invention should not be viewed as limited except by the scope and spirit of the appended claims.

Claims (27)

What is claimed is:
1. Apparatus comprising:
a first non-transitory physical data storage device associated with a first site, a first copy of a set of data being maintained on the first data storage device;
a second non-transitory physical data storage device associated with a second site, a second copy of the set of data being maintained on the second data storage device;
an interface for communicating between the first site and the second site to maintain consistency of the first and second copies of the set of data; and
control logic stored in computer-readable non-transitory memory and utilized by processing hardware, the control logic responsive to loss of communication between the first site and the second site to provide access to at least a first part of the first copy of the data set at the first site during the loss of communication between the first site and the second site, and provide access to at least a second part of the second copy of the data set at the second site during the loss of communication between the first site and the second site, the second part comprising different data than the first part, and in response to restoration of communication between the first site and the second site to resolve differences between the first copy and the second copy, wherein the control logic uses affinity to control access to the first copy and the second copy, where affinity is an indication that a particular site has or should be given permission to perform an operation to a particular part of the set of data.
2. The apparatus of claim 1 wherein the control logic determines affinity based on access history.
3. The apparatus of claim 2 wherein affinity to the part of the set of data is assigned to either the first site or second site based on whether the first or second site most recently modified the part of the set of data.
4. The apparatus of claim 2 wherein the control logic times-out records of most recent modification.
5. The apparatus of claim 1 wherein the control logic uses file system semantics to assign affinity to at least one of: a particular set of data, a part of a set of data, a directory in a file system, a set of directories in a file system, a file in a file system, a set of files in a file system, a part of a file, device-blocks, a data segment, a group of data segments, a range of data segments, a byte range or set of ranges in a file, subsets of entries in a directory, tables in a database, rows in a database, columns in a database, particular keys in a key-value store, specific mailboxes in an email system, specific customer accounts in a CRM system, and specific objects in an object store of some kind.
6. The apparatus of claim 5 wherein the set of data is hierarchical or predominately hierarchical, and wherein the control logic assigns inherited affinity based on hierarchy.
7. The apparatus of claim 5 wherein the control logic reassigns affinity based on a recognized justification during loss of communication.
8. The apparatus of claim 1 wherein the control logic assigns Read Only affinity which causes an associated part of the set of data to be readable by all sites.
9. The apparatus of claim 1 wherein the control logic assigns Laissez Faire affinity which allows all sites to make changes to an associated part of the set of data.
10. The apparatus of claim 1 wherein the control logic assigns Read-Laissez-Faire affinity which allows the first site to have Write affinity to an associated part of the set of data and the second site to have Read affinity to the associated part of the set of data.
11. The apparatus of claim 1 wherein the control logic applies restrictions to affinity based on client ID, filename, directory name, time, or any other expression that can be processed.
12. The apparatus of claim 1 wherein the control logic automatically resolves at least some differences between the first copy and the second copy.
13. The apparatus of claim 12 wherein the control logic includes a plug-in resolver.
14. A method of operating a distributed storage system comprising:
maintaining a first copy of a set of data by a first site;
maintaining a second copy of the set of data by a second site;
communicating between the first site and the second site to maintain consistency of the first and second copies of the set of data;
in response to loss of communication between the first site and the second site, providing access to at least a first part of the first copy of the data set at the first site during the loss of communication between the first site and the second site, and providing access to at least a second part of the second copy of the data set at the second site during the loss of communication between the first site and the second site, the second part comprising different data than the first part;
using affinity to control access to the first copy and the second copy, where affinity is an indication that a particular site has or should be given permission to perform an operation to a particular part of the set of data; and
in response to restoration of communication between the first site and the second site, resolving differences between the first copy and the second copy.
15. The method of claim 14 including determining affinity based on access history.
16. The method of claim 15 including assigning affinity for a third part of the set of data to either the first site or second site based on whether the first or second site most recently modified the third part of the set of data.
17. The method of claim 15 including timing-out records of most recent modification.
18. The method of claim 14 including using file system semantics to assign affinity to at least one of: a particular set of data, a part of a set of data, a directory in a file system, a set of directories in a file system, a file in a file system, a set of files in a file system, a part of a file, device-blocks, a data segment, a group of data segments, a range of data segments, a byte range or set of ranges in a file, subsets of entries in a directory, tables in a database, rows in a database, columns in a database, particular keys in a key-value store, specific mailboxes in an email system, specific customer accounts in a CRM system, and specific objects in an object store of some kind.
19. The method of claim 18 wherein the set of data is hierarchical or predominately hierarchical, and including assigning inherited affinity based on hierarchy.
20. The method of claim 18 including reassigning affinity based on a recognized justification during loss of communication.
21. The method of claim 14 including assigning Read Only affinity which causes an associated part of the set of data to be readable by all sites.
22. The method of claim 14 including assigning Laissez Faire affinity which allows all sites to make changes to an associated part of the set of data.
23. The method of claim 14 including assigning Read-Laissez-Faire affinity which allows the first site to have Write affinity to an associated part of the set of data and the second site to have Read affinity to the associated part of the set of data.
24. The method of claim 14 including applying restrictions to affinity based on client ID, filename, directory name, time, or any other expression that can be processed.
25. The method of claim 14 including automatically resolving at least some differences between the first copy and the second copy.
26. The method of claim 25 including automatically resolving at least some differences between the first copy and the second copy by utilizing a plug-in resolver.
27. An apparatus comprising:
a first non-transitory physical data storage device associated with a first site, a first copy of a set of data being maintained on the first data storage device;
a second non-transitory physical data storage device associated with a second site, a second copy of the set of data being maintained on the second data storage device;
a third data storage device associated with a third site, a third copy of the set of data being maintained on the third data storage device;
an interface for communicating between the first site, the second site and the third site to maintain consistency of the copies of the set of data; and
control logic stored in computer-readable non-transitory memory and utilized by processing hardware, the control logic responsive to loss of communication between a first subset and a second subset of a set of sites, the first subset comprising the first site and the second site, and the second subset comprising the third site, to use affinity to control access to the first copy and the second copy and the third copy such that the first subset has access to a first part of the set of data and the second subset has access to a second part of the set of data, the second part comprising different data than the first part, where affinity is an indication that a particular subset of sites has or should be given permission to perform an operation to a particular part of the set of data and in response to restoration of communication between the first subset and the second subset to resolve differences between the various copies.
US13/465,275 2012-05-07 2012-05-07 Using affinity to mediate bias in a distributed storage system Active 2033-01-30 US9430343B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US13/465,275 US9430343B1 (en) 2012-05-07 2012-05-07 Using affinity to mediate bias in a distributed storage system

Publications (1)

Publication Number Publication Date
US9430343B1 true US9430343B1 (en) 2016-08-30

Family

ID=56739518

Family Applications (1)

Application Number Title Priority Date Filing Date
US13/465,275 Active 2033-01-30 US9430343B1 (en) 2012-05-07 2012-05-07 Using affinity to mediate bias in a distributed storage system

Country Status (1)

Country Link
US (1) US9430343B1 (en)

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6493796B1 (en) * 1999-09-01 2002-12-10 Emc Corporation Method and apparatus for maintaining consistency of data stored in a group of mirroring devices
EP1130511A2 (en) * 2000-01-25 2001-09-05 FusionOne, Inc. Data transfer and synchronization system
US20040030837A1 (en) * 2002-08-07 2004-02-12 Geiner Robert Vaughn Adjusting timestamps to preserve update timing information for cached data objects
US20040039888A1 (en) * 2002-08-21 2004-02-26 Lecrone Douglas E. Storage automated replication processing
US20060114917A1 (en) * 2002-12-20 2006-06-01 Christoph Raisch Secure system and method for san management in a non-trusted server environment
US20080256299A1 (en) * 2003-11-17 2008-10-16 Arun Kwangil Iyengar System and Method for Achieving Different Levels of Data Consistency
US20050154845A1 (en) * 2004-01-09 2005-07-14 International Business Machines Corporation Maintaining consistency for remote copy using virtualization
US20090055610A1 (en) * 2004-01-09 2009-02-26 International Business Machines Corporation Maintaining consistency for remote copy using virtualization
US20060015507A1 (en) * 2004-07-17 2006-01-19 Butterworth Henry E Controlling data consistency guarantees in storage apparatus
US20090150627A1 (en) * 2007-12-06 2009-06-11 International Business Machines Corporation Determining whether to use a repository to store data updated during a resynchronization
US20100251010A1 (en) * 2009-03-30 2010-09-30 The Boeing Company Computer architectures using shared storage
US20110060722A1 (en) * 2009-09-07 2011-03-10 Icon Business Systems Limited Centralized management mode backup disaster recovery system
US20120124640A1 (en) * 2010-11-15 2012-05-17 Research In Motion Limited Data source based application sandboxing

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140344236A1 (en) * 2013-05-20 2014-11-20 Amazon Technologies, Inc. Index Update Pipeline
US11841844B2 (en) * 2013-05-20 2023-12-12 Amazon Technologies, Inc. Index update pipeline
US20200142977A1 (en) * 2018-11-06 2020-05-07 Red Hat, Inc. Distributed file system with thin arbiter node
US11288237B2 (en) * 2018-11-06 2022-03-29 Red Hat, Inc. Distributed file system with thin arbiter node
US11144252B2 (en) 2020-01-09 2021-10-12 EMC IP Holding Company LLC Optimizing write IO bandwidth and latency in an active-active clustered system based on a single storage node having ownership of a storage object
US20220179976A1 (en) * 2020-12-03 2022-06-09 Capital One Services, Llc Systems and methods for processing requests for access

Legal Events

Date Code Title Description
AS Assignment

Owner name: EMC CORPORATION, MASSACHUSETTS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:TRACHTMAN, MICHAEL;GLADE, BRADFORD B.;SIGNING DATES FROM 20120502 TO 20120503;REEL/FRAME:028239/0250

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLATERAL AGENT, NORTH CAROLINA

Free format text: SECURITY AGREEMENT;ASSIGNORS:ASAP SOFTWARE EXPRESS, INC.;AVENTAIL LLC;CREDANT TECHNOLOGIES, INC.;AND OTHERS;REEL/FRAME:040134/0001

Effective date: 20160907

Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT, TEXAS

Free format text: SECURITY AGREEMENT;ASSIGNORS:ASAP SOFTWARE EXPRESS, INC.;AVENTAIL LLC;CREDANT TECHNOLOGIES, INC.;AND OTHERS;REEL/FRAME:040136/0001

Effective date: 20160907

Owner name: CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH, AS COLLAT

Free format text: SECURITY AGREEMENT;ASSIGNORS:ASAP SOFTWARE EXPRESS, INC.;AVENTAIL LLC;CREDANT TECHNOLOGIES, INC.;AND OTHERS;REEL/FRAME:040134/0001

Effective date: 20160907

Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., A

Free format text: SECURITY AGREEMENT;ASSIGNORS:ASAP SOFTWARE EXPRESS, INC.;AVENTAIL LLC;CREDANT TECHNOLOGIES, INC.;AND OTHERS;REEL/FRAME:040136/0001

Effective date: 20160907

AS Assignment

Owner name: EMC IP HOLDING COMPANY LLC, MASSACHUSETTS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:EMC CORPORATION;REEL/FRAME:040203/0001

Effective date: 20160906

AS Assignment

Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., T

Free format text: SECURITY AGREEMENT;ASSIGNORS:CREDANT TECHNOLOGIES, INC.;DELL INTERNATIONAL L.L.C.;DELL MARKETING L.P.;AND OTHERS;REEL/FRAME:049452/0223

Effective date: 20190320

Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., TEXAS

Free format text: SECURITY AGREEMENT;ASSIGNORS:CREDANT TECHNOLOGIES, INC.;DELL INTERNATIONAL L.L.C.;DELL MARKETING L.P.;AND OTHERS;REEL/FRAME:049452/0223

Effective date: 20190320

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4

AS Assignment

Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., TEXAS

Free format text: SECURITY AGREEMENT;ASSIGNORS:CREDANT TECHNOLOGIES INC.;DELL INTERNATIONAL L.L.C.;DELL MARKETING L.P.;AND OTHERS;REEL/FRAME:053546/0001

Effective date: 20200409

AS Assignment

Owner name: WYSE TECHNOLOGY L.L.C., CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001

Effective date: 20211101

Owner name: SCALEIO LLC, MASSACHUSETTS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001

Effective date: 20211101

Owner name: MOZY, INC., WASHINGTON

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001

Effective date: 20211101

Owner name: MAGINATICS LLC, CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001

Effective date: 20211101

Owner name: FORCE10 NETWORKS, INC., CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001

Effective date: 20211101

Owner name: EMC IP HOLDING COMPANY LLC, TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001

Effective date: 20211101

Owner name: EMC CORPORATION, MASSACHUSETTS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001

Effective date: 20211101

Owner name: DELL SYSTEMS CORPORATION, TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001

Effective date: 20211101

Owner name: DELL SOFTWARE INC., CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001

Effective date: 20211101

Owner name: DELL PRODUCTS L.P., TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001

Effective date: 20211101

Owner name: DELL MARKETING L.P., TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001

Effective date: 20211101

Owner name: DELL INTERNATIONAL, L.L.C., TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001

Effective date: 20211101

Owner name: DELL USA L.P., TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001

Effective date: 20211101

Owner name: CREDANT TECHNOLOGIES, INC., TEXAS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001

Effective date: 20211101

Owner name: AVENTAIL LLC, CALIFORNIA

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001

Effective date: 20211101

Owner name: ASAP SOFTWARE EXPRESS, INC., ILLINOIS

Free format text: RELEASE BY SECURED PARTY;ASSIGNOR:CREDIT SUISSE AG, CAYMAN ISLANDS BRANCH;REEL/FRAME:058216/0001

Effective date: 20211101

AS Assignment

Owner name: SCALEIO LLC, MASSACHUSETTS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (040136/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061324/0001

Effective date: 20220329

Owner name: EMC IP HOLDING COMPANY LLC (ON BEHALF OF ITSELF AND AS SUCCESSOR-IN-INTEREST TO MOZY, INC.), TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (040136/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061324/0001

Effective date: 20220329

Owner name: EMC CORPORATION (ON BEHALF OF ITSELF AND AS SUCCESSOR-IN-INTEREST TO MAGINATICS LLC), MASSACHUSETTS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (040136/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061324/0001

Effective date: 20220329

Owner name: DELL MARKETING CORPORATION (SUCCESSOR-IN-INTEREST TO FORCE10 NETWORKS, INC. AND WYSE TECHNOLOGY L.L.C.), TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (040136/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061324/0001

Effective date: 20220329

Owner name: DELL PRODUCTS L.P., TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (040136/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061324/0001

Effective date: 20220329

Owner name: DELL INTERNATIONAL L.L.C., TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (040136/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061324/0001

Effective date: 20220329

Owner name: DELL USA L.P., TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (040136/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061324/0001

Effective date: 20220329

Owner name: DELL MARKETING L.P. (ON BEHALF OF ITSELF AND AS SUCCESSOR-IN-INTEREST TO CREDANT TECHNOLOGIES, INC.), TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (040136/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061324/0001

Effective date: 20220329

Owner name: DELL MARKETING CORPORATION (SUCCESSOR-IN-INTEREST TO ASAP SOFTWARE EXPRESS, INC.), TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (040136/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061324/0001

Effective date: 20220329

AS Assignment

Owner name: SCALEIO LLC, MASSACHUSETTS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (045455/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061753/0001

Effective date: 20220329

Owner name: EMC IP HOLDING COMPANY LLC (ON BEHALF OF ITSELF AND AS SUCCESSOR-IN-INTEREST TO MOZY, INC.), TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (045455/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061753/0001

Effective date: 20220329

Owner name: EMC CORPORATION (ON BEHALF OF ITSELF AND AS SUCCESSOR-IN-INTEREST TO MAGINATICS LLC), MASSACHUSETTS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (045455/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061753/0001

Effective date: 20220329

Owner name: DELL MARKETING CORPORATION (SUCCESSOR-IN-INTEREST TO FORCE10 NETWORKS, INC. AND WYSE TECHNOLOGY L.L.C.), TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (045455/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061753/0001

Effective date: 20220329

Owner name: DELL PRODUCTS L.P., TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (045455/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061753/0001

Effective date: 20220329

Owner name: DELL INTERNATIONAL L.L.C., TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (045455/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061753/0001

Effective date: 20220329

Owner name: DELL USA L.P., TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (045455/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061753/0001

Effective date: 20220329

Owner name: DELL MARKETING L.P. (ON BEHALF OF ITSELF AND AS SUCCESSOR-IN-INTEREST TO CREDANT TECHNOLOGIES, INC.), TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (045455/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061753/0001

Effective date: 20220329

Owner name: DELL MARKETING CORPORATION (SUCCESSOR-IN-INTEREST TO ASAP SOFTWARE EXPRESS, INC.), TEXAS

Free format text: RELEASE OF SECURITY INTEREST IN PATENTS PREVIOUSLY RECORDED AT REEL/FRAME (045455/0001);ASSIGNOR:THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., AS NOTES COLLATERAL AGENT;REEL/FRAME:061753/0001

Effective date: 20220329

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8