WO2016040393A1 - Application transparent continuous availability using synchronous replication across data stores in a failover cluster - Google Patents

Application transparent continuous availability using synchronous replication across data stores in a failover cluster

Info

Publication number
WO2016040393A1
Authority
WO
WIPO (PCT)
Prior art keywords: replication, group, primary, replication group, cluster
Application number
PCT/US2015/049042
Other languages
French (fr)
Inventor
Ganesh PRASAD
Roopesh BATTEPATI
Vyacheslav Kuznetsov
Original Assignee
Microsoft Technology Licensing, LLC
Application filed by Microsoft Technology Licensing, LLC
Priority to EP15771806.5A (published as EP3191958A1)
Priority to CN201580048056.7A (published as CN106605217B)
Publication of WO2016040393A1


Classifications

    • G06F 3/0683: Plurality of storage devices (interfaces specially adapted for storage systems; in-line storage system)
    • G06F 11/203: Failover techniques using migration
    • G06F 11/2033: Failover techniques switching over of hardware resources
    • G06F 11/2097: Active fault-masking by redundancy in hardware, maintaining the standby controller/processing unit updated
    • G06F 3/0619: Improving the reliability of storage systems in relation to data integrity, e.g. data losses, bit errors
    • G06F 3/065: Replication mechanisms (horizontal data movement between storage devices or systems)
    • G06F 11/2041: Redundant processing functionality with more than one idle spare processing component
    • G06F 11/2048: Redundant processing functionality where the redundant components share neither address space nor persistent storage

Definitions

  • The replication service then tries to bring the secondary replication groups (e.g. groups 111 and 161) online.
  • This process includes ensuring that the flow of data 145 from the primary site is able to reach the replication service 190 of the secondary site 160.
  • The system brings online the log and data associated with the primary resource group, e.g. elements 120 and 125.
  • The replication service 140 maintains a cluster-wide view of the replication status of all replication groups within a cluster.
  • The replication status indicates which replication groups are in sync and which are not. If a synchronous secondary replication group loses its replication connection to its primary, or if there is a failure replicating certain data to a secondary, the replication status of that secondary replication group is changed to NOT IN SYNC before the primary deviates from, or allows any new input/output to proceed ahead of, the secondary replication group.
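The status bookkeeping described in these items can be pictured with a short sketch. The Python below is purely illustrative; the class and method names are invented for the example and are not part of any actual cluster API. The key point is that a secondary is marked NOT IN SYNC before the primary allows any further I/O to proceed.

```python
from enum import Enum


class ReplicationStatus(Enum):
    IN_SYNC = "IN SYNC"
    NOT_IN_SYNC = "NOT IN SYNC"


class ClusterReplicationState:
    """Cluster-wide view of the replication status of every replication group."""

    def __init__(self):
        self._status = {}  # replication group name -> ReplicationStatus

    def mark(self, group_name, status):
        self._status[group_name] = status

    def status(self, group_name):
        return self._status.get(group_name, ReplicationStatus.NOT_IN_SYNC)


class PrimaryReplica:
    def __init__(self, state, secondaries):
        self.state = state              # shared ClusterReplicationState
        self.secondaries = secondaries  # names of synchronous secondaries

    def write(self, data, replicate):
        """Synchronously replicate a write, demoting failed secondaries first."""
        for secondary in self.secondaries:
            try:
                replicate(secondary, data)
            except ConnectionError:
                # The secondary is marked NOT IN SYNC *before* the primary
                # allows any new input/output to proceed, so it can never be
                # mistaken for an eligible failover target later on.
                self.state.mark(secondary, ReplicationStatus.NOT_IN_SYNC)
        # ... the local write proceeds here ...
```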
  • A replication group can fail over within a cluster. When that happens, replication to the secondary should resume after the failover completes.
  • A failover means a change in the replication service endpoint, because the node hosting the replication service changes for the source or target of replication. The older node to which a secondary was connected is no longer valid. The secondary should rediscover the source replication service endpoint and resume replication.
  • When the primary cluster resource group fails over to a different node, it restarts the secondary replication group during its online sequence.
  • The secondary then, during its own online sequence, queries the cluster service to determine the owner node of the primary resource group and uses that node name as the primary replication service endpoint.
  • The primary also sends a cluster resource notification to the secondary replication groups to indicate the new replication endpoint.
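A minimal sketch of that endpoint rediscovery follows, assuming a hypothetical `cluster` helper that can report the current owner node of a resource group (the real cluster service interface is not specified here, and the port number is an assumption made only for the example):

```python
class SecondaryReplica:
    """Target replica that re-resolves its partner's endpoint after a failover."""

    def __init__(self, cluster, primary_group_name, port=445):
        self.cluster = cluster                  # hypothetical cluster service proxy
        self.primary_group_name = primary_group_name
        self.port = port                        # assumed replication port

    def resume_replication(self):
        # No fixed, well-known endpoint: ask the cluster service which node
        # currently owns the primary resource group and use that node name
        # as the primary replication service endpoint.
        owner_node = self.cluster.owner_node(self.primary_group_name)
        self._connect((owner_node, self.port))

    def on_endpoint_notification(self, new_owner_node):
        # Alternatively, the primary pushes a cluster resource notification
        # naming the new endpoint and the secondary reconnects directly.
        self._connect((new_owner_node, self.port))

    def _connect(self, endpoint):
        print(f"resuming replication from {endpoint}")
```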
  • FIG. 2 is a flow diagram illustrating a process for selecting a secondary replication group and automatically implementing a role switch according to one illustrative embodiment.
  • The process begins by determining whether the primary storage 120 is connected to the node. This is illustrated at step 210. This determination can be made by sending a command to the primary storage 120 and awaiting a response, or by listening for a heartbeat from the primary storage 120. Other methods of determining whether the primary storage 120 is connected to the node can be used as well. During normal operations this check occurs when the associated resource group is first coming online, but it can also occur while the resource group is operating, either at periodic intervals or continuously.
  • If the primary storage 120 is determined to be online and connected to the resource group, the system continues to operate as normal. This normal operation is illustrated at step 212.
  • Otherwise, at step 220 the process determines whether there are any replication groups 178 that can take over as the primary replication group 128. Each candidate is added to a candidate list. This is illustrated at step 225. Steps 220 and 225 are discussed together herein.
  • The replication service 140, 190 maintains a cluster-wide view of the replication status of all replication groups within a cluster. The replication status indicates which replication groups are in sync and which are not.
  • If a synchronous secondary replication group 178 loses its replication connection to its primary, or if there is a failure replicating certain data to a secondary, the replication status of the secondary replication group is changed to NOT IN SYNC before the primary deviates from, or allows any new input or output to proceed ahead of, that group. If the replication status of a candidate replication group 161 is determined to be in sync with the old primary site, that candidate is considered a valid candidate for selection as the new primary replication group. If it is not in sync with the old primary replication group 128, that candidate is removed from the list of potential candidates.
  • At step 230 the process determines which of the candidate replication groups are connected to the cluster node where the resource group is coming online. If a replication group is connected to the cluster node, it remains in the candidate list. At this time the system may gather information about that replication group and its connection to the node or resource group, such as the size or capacity of the replication group, its location, the connection speed, and the quality of the connection. This information is gathered at step 240.
  • If a replication group is not connected, at step 235 the system can remove it from the candidate list. However, in some embodiments the system first tries to establish a connection between the resource group and the node. This is illustrated by optional step 233, which would typically occur before step 235. If a connection can be created, the system issues a command that causes the resource group to connect to the node. If the connection is successful, the analysis moves to step 240; otherwise the replication group is removed from the list of candidate replication groups.
  • Once the candidates are known, the process selects one of the candidate replication groups as the new primary replication group. This is illustrated at step 250.
  • The process may select the new primary replication group based upon the gathered characteristics of each candidate in the list. In some approaches the selection is based on an existing set of rules for the resource group. For example, the location of the replication group may be constrained; this can occur for certain applications where the data cannot leave a particular country. In that case, candidate replication groups not meeting the location requirement are removed or not considered further.
  • The system can also look at performance or other quality characteristics in choosing which candidate replication group to select. The system may select the best-performing replication group from the candidates, or it may simply select a random candidate from the list.
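Putting steps 210 through 250 together, a simplified selection routine might look like the sketch below. The data structures, helper methods, and scoring rule are invented for illustration; the disclosure does not prescribe a particular scoring scheme.

```python
def select_new_primary(node, old_primary, replication_groups, status, rules=None):
    """Steps 210-250: pick a secondary replication group to promote.

    `status` maps a replication group name to "IN SYNC" / "NOT IN SYNC"
    (the cluster-wide view kept by the replication service); `rules` is an
    optional predicate encoding placement constraints such as data residency.
    """
    # Step 210: nothing to do if the primary storage is still reachable.
    if old_primary.storage_connected_to(node):
        return old_primary

    # Steps 220/225: build the candidate list from groups that were in sync
    # with the old primary.
    candidates = []
    for group in replication_groups:
        if status.get(group.name) != "IN SYNC":
            continue
        # Steps 230/233/235: the group must be (or become) connected to the
        # node where the resource group is coming online.
        if not group.connected_to(node) and not group.try_connect(node):
            continue
        # Step 240: gather characteristics used for the final choice
        # (capacity, location, connection speed and quality, ...).
        candidates.append((group, group.gather_characteristics(node)))

    # Apply any existing rules for the resource group (e.g. data residency).
    if rules is not None:
        candidates = [(g, c) for g, c in candidates if rules(g, c)]

    if not candidates:
        raise RuntimeError("no eligible secondary replication group")

    # Step 250: select the best-performing candidate (a random choice is
    # equally valid if no quality information is available).
    group, _ = max(candidates, key=lambda pair: pair[1].get("score", 0))
    return group
```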
  • The process continues by swapping or changing the roles of the two replication groups. This is illustrated at step 260.
  • The current primary replication group 128 is changed to become a target of replication; that is, the old primary replication group 128 is now a secondary replication group. This is illustrated at step 262.
  • The selected replication group is changed to become the new primary replication group. This is illustrated at step 264.
  • The processes of steps 262 and 264 are essentially the same except for the changes made to the corresponding replication group.
  • The process that occurs on each of the replication groups to initiate the role switch is described with respect to FIG. 3.
  • The process of FIG. 3 corresponds closely to step 260 and includes steps 262 and 264 of FIG. 2.
  • The first step of the process is to swap the physical disk cluster resource to physical data storage binding of the primary physical disk cluster resources and the secondary physical disk cluster resources. This is illustrated at step 310.
  • The characteristics of the primary physical disk cluster resources are changed to mimic those of the secondary physical disk, and the secondary physical disk's characteristics are changed to reflect those of the primary physical disk cluster resource.
  • Next, the replication group private property associated with each of the replication cluster resources is swapped. This is illustrated at step 320.
  • Every physical disk cluster resource has a private property that indicates the physical data store it manages.
  • The physical data store in a cluster is connected to multiple nodes (e.g. nodes 130, 135 and nodes 180, 185), which allows the data to be available on multiple nodes so that the application and the physical disk cluster resource can fail over to other nodes.
  • The physical disk cluster resource takes a persistent reservation on the physical storage so that it is accessible on only one node of the cluster, to avoid simultaneous edits to data from multiple nodes.
  • The private property of the cluster resource is changed to accept the edits from this node of the cluster.
  • The secondary cluster resource group is then moved to the primary site. This is illustrated at step 330.
  • The possible owners of the primary and secondary resource groups are updated. This is illustrated at step 340.
  • The possible owners are updated to include only those cluster nodes which are within those sites.
  • Possible owners of a cluster resource are a set of nodes where the cluster can try to bring the resource online.
  • A replication group can be hosted only on nodes where the replication service 140, 190 is available and the physical data store is available. Additionally, when a primary replication group 128 has synchronous partners, the primary cluster resource group (e.g. 111) can also be failed over to those nodes where the current synchronous secondary data store is available.
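The four role-switch steps (310 through 340) can be summarized in a short sketch. The helper calls below stand in for the corresponding cluster operations and are not real cluster APIs.

```python
def switch_roles(primary_group, secondary_group, cluster):
    """Steps 310-340: reverse the direction of replication."""
    # Step 310: swap the physical-disk-resource-to-physical-data-store
    # binding of the primary and secondary physical disk cluster resources.
    primary_group.pdr.data_store, secondary_group.pdr.data_store = (
        secondary_group.pdr.data_store,
        primary_group.pdr.data_store,
    )

    # Step 320: swap the replication-group private property on the two
    # replication cluster resources so each manages the store it is bound to.
    primary_group.replica_resource.group_property, \
        secondary_group.replica_resource.group_property = (
            secondary_group.replica_resource.group_property,
            primary_group.replica_resource.group_property,
        )

    # Step 330: move the secondary cluster resource group to the primary site.
    cluster.move_group(secondary_group, to_site=primary_group.site)

    # Step 340: restrict possible owners of both groups to nodes within their
    # sites that have the replication service and the data store available.
    for group in (primary_group, secondary_group):
        cluster.set_possible_owners(group, cluster.eligible_nodes(group))
```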
  • The secondary replication group(s) then attempt to come online. This is illustrated at step 350.
  • The secondary may have to discover the replication endpoint. If the replication endpoint, such as a cluster network name resource, is already known, the resource group is able to come online quickly or with minimal delay.
  • A change in the replication service endpoint can occur when the node that was hosting the replication service 140, 190 changes for the source and/or target of the replication. This occurs because the older node to which one of the secondary resource groups was connected may no longer be valid.
  • If the new endpoint is known, the secondary can reconnect to it directly. If the replication endpoint is unknown, the secondary resource group needs to discover the replication endpoint. This is illustrated at optional step 360.
  • The replication group 178 rediscovers the source replication endpoint and resumes the replication process.
  • The replication group 178 then, as part of the online sequence, queries the cluster service to determine the owner node of the primary resource group and uses that node's name as the primary replication service endpoint.
  • The primary replication group 128 can also send a cluster resource notification to the secondary replication groups to indicate the new replication endpoint.
  • FIG. 4 illustrates a component diagram of a computing device according to one embodiment.
  • the computing device 400 can be utilized to implement one or more computing devices, computer processes, or software modules described herein.
  • The computing device 400 can be utilized to process calculations, execute instructions, receive and transmit digital signals, receive and transmit search queries and hypertext, and compile computer code, as required by the system of the present embodiments.
  • Computing device 400 can be a distributed computing device where components of computing device 400 are located on different computing devices that are connected to each other through a network or other forms of connections.
  • Computing device 400 can be a cloud-based computing device.
  • The computing device 400 can be any general or special purpose computer now known or to become known capable of performing the steps and/or performing the functions described herein, either in software, hardware, firmware, or a combination thereof.
  • In its most basic configuration, computing device 400 typically includes at least one central processing unit (CPU) or processor 402 and memory 404. Depending on the exact configuration and type of computing device, memory 404 may be volatile (such as RAM), non-volatile (such as ROM, flash memory, etc.) or some combination of the two. Additionally, computing device 400 may have additional features/functionality; for example, computing device 400 may include multiple CPUs. The described methods may be executed in any manner by any processing unit in computing device 400. For example, the described process may be executed by multiple CPUs in parallel. Computing device 400 may also include additional storage (removable and/or non-removable) including, but not limited to, magnetic or optical disks or tape.
  • Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data.
  • Memory 404 and storage 406 are all examples of computer storage media.
  • Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computing device 400. Any such computer storage media may be part of computing device 400.
  • Computing device 400 may also contain communications device(s) 412 that allow the device to communicate with other devices.
  • Communications device(s) 412 are an example of communication media.
  • Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media.
  • The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
  • communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media.
  • The term computer-readable media as used herein includes both computer storage media and communication media. The described methods may be encoded in any computer-readable media in any form, such as data, computer-executable instructions, and the like.
  • Computing device 400 may also have input device(s) 410 such as a keyboard, mouse, pen, voice input device, touch input device, etc.
  • Output device(s) 408 such as a display, speakers, printer, etc. may also be included. All these devices are well known in the art and need not be discussed at length.
  • A remote computer may store an example of the process described as software.
  • A local or terminal computer may access the remote computer and download a part or all of the software to run the program.
  • The local computer may download pieces of the software as needed, or process the software in a distributed manner by executing some software instructions at the local terminal and some at the remote computer (or computer network).
  • Alternatively, all or a portion of the software instructions may be carried out by a dedicated circuit, such as a DSP, programmable logic array, or the like.

Abstract

Disclosed herein is a system and method for automatically moving an application from one site to another site in the event of a disaster. Prior to coming back online, the application is configured with information that allows it to run on the new site without having to perform the configuration actions after the application has come online. This provides a seamless experience for the user of the application while also reducing the associated downtime for the application.

Description

APPLICATION TRANSPARENT CONTINUOUS AVAILABILITY USING SYNCHRONOUS REPLICATION ACROSS DATA STORES IN A FAILOVER CLUSTER
BACKGROUND
[0001] Applications and sites fail for a variety of reasons. When they fail, it becomes necessary to move the application to a new location to maintain application availability. Synchronous block replication in a failover cluster environment requires application downtime and manual storage resource dependency changes as part of a disaster recovery workflow. This is because the application is moved from the location that has failed to another location that is capable of supporting the application. In order to achieve this, the physical disk resource to physical data store mapping needs to be changed to permit the associated application to operate at the new location. These changes are made after the associated application has been brought back up at the new location. This results in an extended period of application downtime for the user.
SUMMARY
[0002] The following presents a simplified summary of the disclosure in order to provide a basic understanding to the reader. This summary is not an extensive overview of the disclosure and it does not identify key/critical elements of the invention or delineate the scope of the invention. Its sole purpose is to present some concepts disclosed herein in a simplified form as a prelude to the more detailed description that is presented later.
[0003] The present disclosure provides a system and method for automatically moving an application from one site to another site in the event of a disaster. Prior to coming back online, the application is configured with information that allows it to run on the new site without having to perform those actions after the application has come online. This provides a seamless experience to the user of the application while also reducing the associated downtime for the application.
[0004] When a primary site for an application goes down due to a disaster or other reason, and the application resource group (which also contains any replicated disks) moves to a secondary site which holds a synchronous target replica, the cluster physical disk resource to physical data store mapping is modified to use the target replica before the application resource comes online, resulting in an automatic role switch. This mechanism virtualizes the application-dependent cluster physical disk resource from multiple synchronous copies of data in various sites, allowing seamless failover and fallback capabilities.
[0005] The present disclosure also allows a cluster replication resource to maintain a cluster-wide replication state of all target replicas, which allows it to decide whether a target is eligible to be the source of replication in the event of a disaster. The target replica connects to the source replica without using a well-known endpoint. When the source replica fails over to a different node within the primary site, the target replica in the secondary site discovers the new endpoint to connect to and resumes replication.
[0006] The cluster replication resource automatically adjusts the possible owner nodes of the source and target replica based on the replication state, replication service availability, storage connectivity, and the arrival or departure of nodes in the cluster due to membership changes. This allows the application resource group to fail over to only those nodes where there is a high chance of success due to the availability of all required resources.
[0007] Many of the attendant features will be more readily appreciated as the same becomes better understood by reference to the following detailed description considered in connection with the accompanying drawings.
DESCRIPTION OF THE DRAWINGS
[0008] The present description will be better understood from the following detailed description read in light of the accompanying drawings, wherein:
[0009] FIG. 1 is a block diagram illustrating a system 100 for providing application transparent continuous availability using synchronous replication across data stores in a failover cluster according to an illustrative embodiment.
[0010] FIG. 2 is a flow diagram illustrating a process for selecting a secondary replication group and automatically performing role switching according to one illustrative embodiment.
[0011] FIG. 3 is a flow diagram illustrating a process for switching roles according to one embodiment.
[0012] FIG. 4 is a block diagram illustrating a computing device which can implement the enhanced indexing system according to one embodiment.
[0013] Like reference numerals are used to designate like parts in the accompanying drawings.
DETAILED DESCRIPTION
[0014] The detailed description provided below in connection with the appended drawings is intended as a description of the present examples and is not intended to represent the only forms in which the present example may be constructed or utilized. The description sets forth the functions of the example and the sequence of steps for constructing and operating the example. However, the same or equivalent functions and sequences may be accomplished by different examples.
[0015] When elements are referred to as being "connected" or "coupled," the elements can be directly connected or coupled together or one or more intervening elements may also be present. In contrast, when elements are referred to as being "directly connected" or "directly coupled," there are no intervening elements present.
[0016] The subject matter may be embodied as devices, systems, methods, and/or computer program products. Accordingly, some or all of the subject matter may be embodied in hardware and/or in software (including firmware, resident software, microcode, state machines, gate arrays, etc.). Furthermore, the subject matter may take the form of a computer program product on a computer-usable or computer-readable storage medium having computer-usable or computer-readable program code embodied in the medium for use by or in connection with an instruction execution system. In the context of this document, a computer-usable or computer-readable medium may be any medium that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
[0017] The computer-usable or computer-readable medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. By way of example, and not limitation, computer-readable media may comprise computer storage media and communication media.
[0018] Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and may be accessed by an instruction execution system. Note that the computer-usable or computer-readable medium can be paper or other suitable medium upon which the program is printed, as the program can be electronically captured via, for instance, optical scanning of the paper or other suitable medium, then compiled, interpreted, or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory.
[0019] Communication media typically embodies computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term "modulated data signal" can be defined as a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above-mentioned should also be included within the scope of computer-readable media.
[0020] When the subject matter is embodied in the general context of computer- executable instructions, the embodiment may comprise program modules, executed by one or more systems, computers, or other devices. Generally, program modules include routines, programs, objects, components, data structures, and the like, that perform particular tasks or implement particular abstract data types. Typically, the functionality of the program modules may be combined or distributed as desired in various embodiments.
[0021] FIG. 1 is a block diagram illustrating a system 100 for providing application transparent continuous availability using synchronous replication across data stores in a failover cluster according to an illustrative embodiment. System 100 includes a first site 110 and a second site 160. While only two sites are illustrated in FIG. 1, any number of sites may be present in system 100.
[0022] The first or primary site 110 includes an application cluster resource group 111. The application resource cluster group 111 includes an application resource 115, a data disk 120, a log disk 125, and a storage replication unit 128. These components are associated with the underlying application that uses the cluster group, and the data generated by this application are stored in these components. The first site may be a data center that hosts the application associated with the application resource 115, or it may be a server (physical or virtual) that is hosting the associated application within a data center or other location.
[0023] Site 110 further includes a plurality of nodes 130 and 135. Two nodes are illustrated in FIG. 1 for purposes of simplicity only; any number of nodes may be present on site 110. Each of the nodes is associated with an application resource cluster group 111 and, more specifically, an application resource 115. Each node 130 and 135 of site 110 can host a different application resource. However, in some embodiments the same application resource group 111 can be hosted on both nodes 130 and 135. In other embodiments node 135 acts as a failover node for the application resource group 111 on site 110. When a failure occurs to an application, the application may fail over to node 135. This can occur in situations where the failure is node related as opposed to site related. Site 110 further includes a replication service 140 and a physical data store 150.
[0024] The second or secondary site 160 includes a replication cluster resource group 161. The replication cluster resource group also includes a second data disk 170, a second log disk 175 and a second storage replication unit 178. Second site 160 also includes a plurality of nodes 180 and 185. As discussed above with respect to site 110, only two nodes are illustrated in FIG. 1 for purposes of simplicity; any number of nodes may be associated with site 160. Further, site 160 includes a second replication service 190 and a second physical data store 195. The components of the second site 160 are functionally similar to those of the first site 110 and will not be described separately.
[0025] The present disclosure provides that when the primary site 110 that is hosting an application goes down due to a disaster or otherwise fails, and the application resource group (which also contains replicated disks for the application) moves to the secondary site 160, which holds a synchronous target replica, the cluster physical disk resource to physical data store mapping is modified to use the target replica, i.e. the replica on the secondary site 160 in data store 195, before the application resource comes online on the secondary site 160, resulting in an automatic role switch. This process is managed by the replication services 140 and 190. This process virtualizes the application-dependent cluster physical disk resource from multiple synchronous copies of data in various sites, allowing seamless failover and fallback capabilities. Embodiments allow a cluster replication resource to maintain a cluster-wide replication state of all target replicas, which allows it to decide whether a target is eligible to be the source of replication in the event of a disaster.
[0026] A target replica connects to the source replica without using a well-known endpoint. When the source replica fails over to a different node within the primary site, the target replica in the secondary site discovers the new endpoint to connect to and resumes replication. The cluster replication resource automatically adjusts the possible owner nodes of the source and target replica based on the replication state, replication service availability, storage connectivity, and the arrival or departure of nodes in the cluster due to membership changes. This allows the application resource group to fail over to only those nodes where there is a high chance of success due to the availability of all required resources.
[0027] For the purposes of this discussion the following terms will be used to describe the functions of the components illustrated in FIG. 1.
[0028] Each of the nodes associated with site 110 can form a replication group such as group 111 and group 161. It should be noted that any number of nodes may form a replication group. A replication group is, in one embodiment, a collection of replica instances on a system that are collectively depended on by an application using the data partitions of physical data storage 150. The replication service 140 tracks the inter-device write ordering dependencies when replicating multiple devices. A replication group is the unit of replication.
[0029] Cluster resource group: a collection of cluster resources that are grouped together in a cluster and are a unit of failover in a failover cluster. These are illustrated by example cluster groups 111 and 161.
[0030] Cluster Physical Disk Resource (PDR): a cluster resource that manages a physical disk so that it can be accessible by applications, such as applications associated with application resource 115. Applications typically depend on cluster physical disk resources so that the data is brought online before it can be accessed by the applications.
[0031] Storage Replica Resource: a cluster resource that manages the replication of all replicas in a replication group. The storage replication resource is represented by elements 128 and 178 in FIG. 1. It should be noted that in FIG. 1 a "p" represents features on the current primary site and an "s" represents features on a currently secondary site.
[0032] Asymmetric Storage Cluster: a failover cluster deployment where a data store, such as physical disks, is not connected to every node of the cluster. Such deployments are typically found when the cluster spans multiple geographical sites where physical storage can be accessed by only the nodes in a given site.
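The relationships among these terms can be pictured with a small data model. The Python below is only an illustrative sketch of one possible representation; the class names and fields are invented for this example and are not actual failover cluster objects or APIs.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class PhysicalDiskResource:
    """Cluster PDR: manages one physical data store (its private property)."""
    name: str
    data_store: str  # private property: the physical data store it manages


@dataclass
class StorageReplicaResource:
    """Manages replication of all replicas in one replication group."""
    name: str
    role: str  # "primary" (the 'p' elements in FIG. 1) or "secondary" ('s')


@dataclass
class ReplicationGroup:
    """Unit of replication: replica instances an application collectively depends on."""
    name: str
    disks: List[PhysicalDiskResource] = field(default_factory=list)
    replica_resource: Optional[StorageReplicaResource] = None


@dataclass
class ClusterResourceGroup:
    """Unit of failover: cluster resources grouped together, e.g. groups 111 and 161."""
    name: str
    site: str
    replication_group: Optional[ReplicationGroup] = None
    possible_owner_nodes: List[str] = field(default_factory=list)
```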
[0033] In order to effectively implement the structure illustrated in FIG. 1, the replication groups must first be created. The replicas that belong to a group are grouped together into a cluster replication group (e.g. groups 111 and 161). The cluster physical disk resources that represent the replicas are part of the replication group. The replication groups depend from a cluster physical disk resource, which in turn depends on the storage replication cluster resource.
[0034] In one illustrative embodiment the application cluster resources 115 that consume data from physical disks depend on the physical disk cluster resource. This dependency chain ensures that the resources are started in an order that guarantees dependent resources are available before the application can start consuming the data on the disks.
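Such a dependency chain is naturally brought online in dependency order. The sketch below uses Python's standard graphlib module (Python 3.9+) with made-up resource names to show the ordering; it is an illustration of the idea, not the cluster's actual start-up code.

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Map each resource to the resources it depends on: the application depends
# on the physical disk resource, which depends on the storage replica resource.
dependencies = {
    "application_resource_115": {"physical_disk_resource"},
    "physical_disk_resource": {"storage_replica_resource_128"},
    "storage_replica_resource_128": set(),
}

# static_order() yields dependencies before dependents, so the storage replica
# resource comes online first and the application comes online last, only
# after the data it consumes is available.
for resource in TopologicalSorter(dependencies).static_order():
    print("bring online:", resource)
```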
[0035] The physical data store 150 is, in one embodiment, a physical disk cluster resource that implements shared storage for the various nodes (e.g. nodes 130 and 135) of the cluster resource group 111. A physical disk cluster resource typically has a private property that indicates the physical data store 150 it manages. A physical data store 150 in a cluster is connected to multiple nodes (e.g. nodes 130 and 135), which allows the data to remain available when the application and the physical disk cluster resource fail over to other nodes. The physical disk cluster resource takes a persistent reservation on the physical storage 150 so that it is accessible on only one node of the cluster, avoiding simultaneous edits to the data from multiple nodes. The physical disk cluster resource also mounts the volumes/file systems when it comes online on a node. Collectively these are illustrated by block 150.
[0036] The replication service 140 is, in one embodiment, a replication cluster resource that is configured to determine whether the primary storage (the physical disks 150 that are part of the source replica) is connected to the node where the resource group is coming online (e.g. resource group 111 coming online on site 110). If the storage is not connected, the replication service 140 starts an automatic role switch. Role switching is a process in which the source and target of replication are switched, reversing the direction of data replication. Role switching is typically performed, for example, as part of disaster recovery when the current primary site (e.g. site 110) goes down, or when the current primary needs to be taken down for maintenance. Automatic role switching reverses the direction of replication automatically when the replication service detects that the current primary physical storage or nodes are no longer available, thereby providing continuous availability of data to the application without requiring an administrator to interact directly with the system during the failover.
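A minimal sketch of this decision, assuming hypothetical helpers `is_storage_connected` and `start_role_switch` that stand in for the connectivity check and the role-switch procedure described below:

```python
def on_group_online(group, is_storage_connected, start_role_switch):
    """Illustrative online handler for the replication cluster resource.

    is_storage_connected(group) -> bool : probes the primary data store from this node
    start_role_switch(group)            : reverses the direction of replication
    """
    if is_storage_connected(group):
        # The primary storage is reachable: come online without a role change.
        return group
    # The primary storage is unreachable here: automatically role switch so an
    # in-sync target replica becomes the new source of replication.
    return start_role_switch(group)
```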
[0037] In one embodiment the replication service 140 determines whether the storage 150 is currently connected by implementing an associated process. If the storage 150 is determined not to be connected, the replication service 140 begins a process to role switch to one of the secondary replication groups. The process begins by determining whether there are other replication groups (e.g. group 161) that can take over as the new primary based on their replication status. If a replication group is in a sync state with the old primary, it is a candidate to be selected as the new primary. Next the process determines whether the replication group is connected to the cluster node where the resource group is coming online.
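The two eligibility tests in this paragraph (in sync with the old primary, and reachable from the node coming online) can be sketched as a simple filter; the `sync_status` attribute and the `is_connected` probe are assumptions for illustration, not part of any actual cluster API.

```python
def eligible_new_primaries(replication_groups, old_primary, node, is_connected):
    """Filter replication groups that can take over as the new primary.

    A candidate qualifies only if it was IN SYNC with the old primary and its
    data store is reachable from `node`, the cluster node where the resource
    group is coming online. `is_connected(group, node)` is an assumed probe.
    """
    return [
        group for group in replication_groups
        if group is not old_primary
        and group.sync_status == "IN SYNC"
        and is_connected(group, node)
    ]
```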
[0038] Once a candidate replication group is selected, a role switch is performed. As a result, the current primary replication group, e.g. group 111, is changed to the target of replication (secondary), and the selected secondary replication group, e.g. group 161, is changed to the source of replication (primary).

[0039] To implement the role switch the process begins by swapping the physical disk cluster resource to physical data storage binding of the primary and secondary physical disk cluster resources. Next the process swaps the replication group private property associated with the replication cluster resource. The secondary cluster resource group is then moved to the primary site.
[0040] The process continues by updating the possible owners of the primary and secondary resource groups to include only those cluster nodes which are within the respective sites. Possible owners of a cluster resource are the set of nodes on which the cluster can try to bring the resource online. A replication group may be hosted only on nodes where the replication service is available and the physical data store is accessible. Again, nodes may be located at different sites. Additionally, when a primary replication group has synchronous partners, the primary cluster resource group can also be failed over to those nodes where the current synchronous secondary data store is available. Again, in FIG. 1 a "p" indicates a primary and an "s" indicates a secondary.
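A sketch of the possible-owners computation described here, under the assumption that two helper predicates report replication-service availability and data-store reachability per node (both are illustrative stand-ins):

```python
def compute_possible_owners(all_nodes, has_replication_service, can_reach_store, data_stores):
    """Return the nodes on which the cluster may try to bring the group online.

    A node qualifies only if the replication service runs there and it can
    reach at least one of the group's data stores (the primary store, or a
    synchronous secondary partner's store).
    """
    return [
        node for node in all_nodes
        if has_replication_service(node)
        and any(can_reach_store(node, store) for store in data_stores)
    ]
```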
[0041] Continuing with the role switch, the replication service tries to bring the secondary replication groups online (e.g. groups 111 and 161). This includes ensuring that the flow of data 145 from the primary site is able to reach the replication service 190 of the secondary site 160. Once the primary replication resource comes online, the system brings online the log and data associated with the primary resource group, e.g. elements 120 and 125.
[0042] The replication service 140 maintains a cluster-wide view of the replication status of all replication groups within a cluster. The replication status indicates which replication groups are in sync and which are not. If a synchronous secondary replication group loses its replication connection to its primary, or if there is a failure replicating certain data to a secondary, the replication status of the secondary replication group is changed to NOT IN SYNC before the primary diverges from the secondary replication group or allows any new input/output to proceed at the primary replication group.
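The ordering rule in this paragraph, demote the secondary before the primary accepts writes the secondary has not seen, can be sketched as below; the class and method names are illustrative only.

```python
class ReplicationStatusView:
    """Illustrative cluster-wide view of secondary replication-group status."""

    IN_SYNC, NOT_IN_SYNC = "IN SYNC", "NOT IN SYNC"

    def __init__(self):
        self._status = {}                 # group name -> status string

    def mark_in_sync(self, group_name):
        self._status[group_name] = self.IN_SYNC

    def on_replication_failure(self, group_name, allow_new_io):
        # Demote the secondary first, so it can never be chosen as a
        # role-switch target while it is missing writes ...
        self._status[group_name] = self.NOT_IN_SYNC
        # ... and only then let the primary proceed with new input/output.
        allow_new_io()

    def is_in_sync(self, group_name):
        return self._status.get(group_name) == self.IN_SYNC
```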
[0043] A replication group can fail over within a cluster. When that happens, replication to the secondary should resume after the failover completes. In the absence of a well-known replication service endpoint (a cluster network name resource is an example of a well-known endpoint), a failover means a change in the replication service endpoint, since the node hosting the replication service changes for the source or target of replication. The older node to which a secondary was connected is no longer valid. The secondary should rediscover the source replication service endpoint and resume replication. In one illustrative implementation, when the primary cluster resource group fails over to a different node, during the online sequence it restarts the secondary replication group. The secondary then, during its own online sequence, queries the cluster service to determine the owner node of the primary resource group and uses that node name as the primary replication service endpoint. The primary also sends a cluster resource notification to the secondary replication groups to indicate the new replication endpoint.
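A sketch of the endpoint rediscovery described here. It assumes a `cluster` object exposing an `owner_node(group_name)` query (a stand-in for the cluster service, not an actual API) and prefers an endpoint pushed by the primary's notification when one has been received.

```python
def resolve_primary_endpoint(cluster, primary_group_name, notified_endpoint=None):
    """Return the replication service endpoint a secondary should connect to."""
    if notified_endpoint:
        # The primary pushed the new endpoint via a cluster resource notification.
        return notified_endpoint
    # Otherwise ask the cluster service which node currently owns the primary
    # resource group and use that node's name as the endpoint.
    return cluster.owner_node(primary_group_name)
```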
[0044] FIG. 2 is a flow diagram illustrating a process for selecting a secondary replication group and automatically implementing a role switch according to one illustrative embodiment. As the replication cluster resource is at the bottom of a resource dependency chain, the process begins by determining whether the primary storage 120 is connected to the node. This is illustrated at step 210. This determination can be made by sending a command to the primary storage 120 and awaiting a response, or by looking for a heartbeat from the primary storage 120. Other methods of determining whether the primary storage 120 is connected to the node can be used as well. During normal operation this check occurs when the associated resource group first comes online. However, the check can also occur while the resource group is operating, either at periodic intervals or continuously.
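The connectivity check at step 210 might look like the sketch below, where `probe()` is a placeholder for issuing a command to the primary storage (or reading its heartbeat); the timeout and interval values are arbitrary.

```python
import time

def storage_connected(probe, timeout_s=5.0):
    """Return True if the primary storage answers within the timeout.

    `probe()` stands in for sending a command to the primary storage (or
    reading its heartbeat) and returns True on a successful response.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            if probe():
                return True
        except OSError:
            pass                       # transient failure: keep retrying until the deadline
        time.sleep(0.5)
    return False

def periodic_connectivity_check(probe, on_disconnect, interval_s=30.0, rounds=3):
    """Periodic variant used while the resource group is already operating."""
    for _ in range(rounds):
        if not storage_connected(probe):
            on_disconnect()            # e.g. begin the automatic role switch
            return
        time.sleep(interval_s)
```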
[0045] If the primary storage 120 is determined to be online and connected to the resource group, the system continues to operate as normal. This normal operation is illustrated at step 212.
[0046] However, if it is determined that the storage is not connected, the process moves to step 220 to begin role switching the storage. At step 220 the process determines whether there are any replication groups 178 that can take over as the primary replication group 128. Each candidate is added to a candidate list, as illustrated at step 225. Steps 220 and 225 are discussed together here. To qualify as a candidate to take over as the primary replication group 128, the process analyzes the replication status of each candidate replication group. The replication services 140, 190 maintain a cluster-wide view of the replication status of all replication groups within a cluster. The replication status indicates which replication groups are in sync and which are not. If a synchronous secondary replication group 178 loses its replication connection to its primary, or if there is a failure replicating certain data to a secondary, the replication status of the secondary replication group is changed to NOT IN SYNC before the primary diverges or allows any new input or output to proceed at the primary replication group. If the replication status of a candidate replication group 161 is determined to be in sync with the old primary site, that candidate replication group is considered a valid candidate for selection as the new primary replication group. If it is not in sync with the old primary replication group 128, that candidate is removed from the list of potential candidates.
[0047] Once a list of potential candidate replication groups has been determined, the process continues by determining which of the candidate replication groups are connected to the cluster node where the resource group is coming online. This is illustrated at step 230. If a replication group is connected to the cluster node, it remains in the candidate list. At this time the system may gather information about that replication group and the associated connection between the replication group and the node or resource group. This information can include features such as the size or capacity of the replication group, its location, the connection speed, the quality of the connection, etc. This information is gathered at step 240.
[0048] If the replication group is not connected to the node, the process moves to step 235, where the system can remove the replication group from the candidate list. However, in some embodiments the system can try to have a connection generated so that the resource group can connect to the node. This is illustrated by optional step 233, which would typically occur before step 235. If a connection can be created, the system generates a command that causes the resource group to connect to the node. If the connection is successful, the analysis moves to step 240. Otherwise the replication group is removed from the list of candidate replication groups.
[0049] Once a final list of replication groups is generated, the process proceeds to select one of the candidate replication groups as the new primary replication group. This is illustrated at step 250. The process may select the new primary replication group 128 based upon the characteristics gathered for each of the candidates in the list. In some approaches the selection is based on an existing set of rules for the resource group. For example, the location of the replication group may be constrained; this can occur for certain applications where the data cannot leave a particular country. In this example the candidate replication groups not meeting the location requirement are removed or not considered further. The system can also consider performance or other quality characteristics when choosing which candidate replication group to select, and may select the best performing replication group from the candidates. Alternatively, the system may simply select a random candidate from the list.
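One way to express the selection rules at step 250, assuming each candidate carries the characteristics gathered at step 240 (the dictionary keys are illustrative, not prescribed by the description):

```python
import random

def select_new_primary(candidates, allowed_locations=None, rank_key=None):
    """Pick the new primary replication group from the eligible candidates.

    candidates        : list of dicts with illustrative keys such as
                        "name", "location" and "connection_speed"
    allowed_locations : optional data-residency constraint
    rank_key          : optional scoring function; the highest score wins.
                        Without a ranking, a random candidate is chosen.
    """
    pool = [c for c in candidates
            if allowed_locations is None or c["location"] in allowed_locations]
    if not pool:
        raise RuntimeError("no candidate replication group satisfies the constraints")
    if rank_key is not None:
        return max(pool, key=rank_key)
    return random.choice(pool)

# For example, prefer the fastest candidate whose data stays in-country:
# select_new_primary(candidates, allowed_locations={"US"},
#                    rank_key=lambda c: c["connection_speed"])
```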
[0050] Once the candidate is selected from the list, the process continues by swapping or changing the roles of the two replication groups. This is illustrated by step 260. At this step the current primary replication group 128 is changed to become a target of replication; that is, the old primary replication group 128 is now a secondary replication group. This is illustrated at step 262. The selected replication group is changed to become the new primary replication group, as illustrated at step 264. The processes of steps 262 and 264 are essentially the same except for the changes made to the corresponding replication group.
[0051] The process that occurs on each of the replication groups to initiate the role switch is described with respect to FIG. 3. The process of FIG. 3 corresponds closely to step 260, including steps 262 and 264, of FIG. 2. The first step of the process is to swap the physical disk cluster resource to physical data storage binding of the primary physical disk cluster resources and the secondary physical disk cluster resources. This is illustrated at step 310. At this step the characteristics of the primary physical disk cluster resources are changed to mimic those of the secondary physical disk cluster resources, and the secondary physical disk cluster resources' characteristics are changed to reflect those of the primary physical disk cluster resource.
[0052] Next, the replication group private property of each of the replication cluster resources is swapped. This is illustrated at step 320. As discussed above, every physical disk cluster resource has a private property that indicates the physical data store it manages. The physical data store in a cluster is connected to multiple nodes (e.g. nodes 130, 135 and nodes 180, 185), which allows the data to be available on multiple nodes so that the application and the physical disk cluster resource can fail over to other nodes. The physical disk cluster resource takes a persistent reservation on the physical storage so that it is accessible on only one node of the cluster, avoiding simultaneous edits to the data from multiple nodes. Thus, the private property of the cluster resource is changed to accept the edits from this node of the cluster.
[0053] Following the swapping of the private properties and the binding of the resources, the secondary cluster resource group is moved to the primary site. This is illustrated at step 330. Next the possible owners of the primary and secondary resource groups are updated, as illustrated at step 340. The possible owners are updated to include only those cluster nodes which are within the respective sites. Possible owners of a cluster resource are the set of nodes on which the cluster can try to bring the resource online. A replication group can be hosted only on nodes where the replication service 140, 190 is available and the physical data store is available. Additionally, when a primary replication group 128 has synchronous partners, the primary cluster resource group (e.g. 111) can also be failed over to those nodes where the current synchronous secondary data store is available.
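Putting steps 310-330 together on the illustrative data model sketched after paragraph [0030] (the attribute names are assumptions, not the cluster API), a minimal role-switch sketch might look as follows. The SimpleNamespace stand-ins in the usage example are purely for illustration.

```python
from types import SimpleNamespace

def role_switch(primary_group, secondary_group, primary_site):
    """Swap roles between the current primary and the selected secondary.

    Works on any objects with `disks` (each having `data_store`), `replica`
    (having `role`), and `site` attributes, such as the data model sketched
    earlier or the SimpleNamespace stand-ins below.
    """
    # Step 310: swap the physical disk resource -> physical data store binding.
    for p_disk, s_disk in zip(primary_group.disks, secondary_group.disks):
        p_disk.data_store, s_disk.data_store = s_disk.data_store, p_disk.data_store

    # Step 320: swap the replication-group private property (modelled here as
    # the replica resource's role) so the former target becomes the source.
    primary_group.replica.role, secondary_group.replica.role = (
        secondary_group.replica.role, primary_group.replica.role)

    # Step 330: move the (former) secondary cluster resource group to the primary site.
    secondary_group.site = primary_site

    # The selected secondary is now the primary; the old primary is a secondary.
    return secondary_group, primary_group

# Tiny stand-in example mirroring groups 111 and 161:
g111 = SimpleNamespace(site="site110",
                       disks=[SimpleNamespace(data_store="store150")],
                       replica=SimpleNamespace(role="primary"))
g161 = SimpleNamespace(site="site160",
                       disks=[SimpleNamespace(data_store="store195")],
                       replica=SimpleNamespace(role="secondary"))
new_primary, new_secondary = role_switch(g111, g161, "site110")
```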
[0054] Next the secondary replication group(s) attempt to come online. This is illustrated at step 350. At this point the secondary may have to discover the replication endpoint. If the secondary resource group knows the replication endpoint, such as a cluster network name resource, the resource group is able to come online quickly or with minimal delay. However, during a failover a change in the replication service endpoint can occur as the node that was hosting the replication service 140, 190 changes for the source and/or target of the replication. This occurs because the older node to which one of the secondary resource groups was connected may no longer be valid. In some cases the new endpoint is known. However, if the replication endpoint is unknown, the secondary resource group needs to discover the replication endpoint. This is illustrated at optional step 360. In this case the secondary replication group 178 rediscovers the source replication endpoint and resumes the replication process. The secondary replication group 178, as part of its online sequence, queries the cluster service to determine the owner node of the primary resource group and uses that node's name as the primary replication service endpoint. At this time, the primary replication group 128 can also send a cluster resource notification to the secondary replication groups to indicate the new replication endpoint. Once the endpoint has been discovered and the replication group 178 has come online, the failover process is completed. Operation then returns to normal for the application.
[0055] FIG. 4 illustrates a component diagram of a computing device according to one embodiment. The computing device 400 can be utilized to implement one or more computing devices, computer processes, or software modules described herein. In one example, the computing device 400 can be utilized to process calculations, execute instructions, and receive and transmit digital signals. In another example, the computing device 400 can be utilized to process calculations, execute instructions, receive and transmit digital signals, receive and transmit search queries and hypertext, and compile computer code, as required by the system of the present embodiments. Further, computing device 400 can be a distributed computing device where components of computing device 400 are located on different computing devices that are connected to each other through a network or other forms of connections. Additionally, computing device 400 can be a cloud-based computing device.
[0056] The computing device 400 can be any general or special purpose computer now known or to become known capable of performing the steps and/or performing the functions described herein, either in software, hardware, firmware, or a combination thereof.
[0057] In its most basic configuration, computing device 400 typically includes at least one central processing unit (CPU) or processor 402 and memory 404. Depending on the exact configuration and type of computing device, memory 404 may be volatile (such as RAM), non-volatile (such as ROM, flash memory, etc.) or some combination of the two. Additionally, computing device 400 may also have additional features/functionality. For example, computing device 400 may include multiple CPUs. The described methods may be executed in any manner by any processing unit in computing device 400. For example, the described methods may be executed by multiple CPUs in parallel.

[0058] Computing device 400 may also include additional storage (removable and/or non-removable) including, but not limited to, magnetic or optical disks or tape. Such additional storage is illustrated in Figure 4 by storage 406. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Memory 404 and storage 406 are both examples of computer storage media. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computing device 400. Any such computer storage media may be part of computing device 400.
[0059] Computing device 400 may also contain communications device(s) 412 that allow the device to communicate with other devices. Communications device(s) 412 is an example of communication media. Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. The term computer-readable media as used herein includes both computer storage media and communication media. The described methods may be encoded in any computer-readable media in any form, such as data, computer-executable instructions, and the like.
[0060] Computing device 400 may also have input device(s) 410 such as keyboard, mouse, pen, voice input device, touch input device, etc. Output device(s) 408 such as a display, speakers, printer, etc. may also be included. All these devices are well known in the art and need not be discussed at length.
[0061] Those skilled in the art will realize that storage devices utilized to store program instructions can be distributed across a network. For example, a remote computer may store an example of the process described as software. A local or terminal computer may access the remote computer and download a part or all of the software to run the program. Alternatively, the local computer may download pieces of the software as needed, or process distributively by executing some software instructions at the local terminal and some at the remote computer (or computer network). Those skilled in the art will also realize that, by utilizing conventional techniques, all or a portion of the software instructions may be carried out by a dedicated circuit, such as a DSP, programmable logic array, or the like.

Claims

1. A method for switching a primary replication group with a secondary replication group, comprising:
swapping resources between the primary replication group and the secondary replication group;
swapping private properties of the primary replication group and the secondary replication group;
moving the secondary replication group to a primary site; and
onlining the secondary replication group as a new primary replication group.
2. The method of claim 1 wherein onlining further comprises: determining if the secondary replication group is aware of a replication endpoint.
3. The method of claim 2 wherein when the secondary replication group is not aware of the replication endpoint, further comprising: determining the replication endpoint.
4. The method of claim 3 wherein determining the replication endpoint further comprises:
querying a cluster service to determine an owner node of the primary cluster resource; and
applying a name associated with the node as the replication endpoint.
5. The method of claim 3 wherein determining further comprises: sending a notification to the secondary replication group indicating a new replication endpoint.
6. The method of claim 1 further comprising: updating possible owners of the primary replication group and the secondary replication group.
7. The method of claim 1 wherein swapping the private properties of the secondary replication group allows the secondary replication group to accept edits from a specific node.
8. The method of claim 1 wherein swapping resources further comprises:
mimicking physical characteristics of the primary replication group on the secondary replication group; and
mimicking physical characteristics of the secondary replication group on the primary replication group.
9. A system for automatically switching replication roles, comprising:
a first site hosting at least one application cluster resource group; and
a replication service disposed on the first site, the replication service configured to monitor a connection between the application cluster resource group and a physical data store through a node, and further configured to switch a primary replication group to a secondary replication group when the physical data store is determined not to be connected to the application cluster resource.
10. The system of claim 9 wherein the replication service is further configured to:
swap resources between the primary replication group and the secondary replication group;
swap private properties of the primary replication group and the secondary replication group;
move the secondary replication group to a primary site; and
online the secondary replication group.
11. The system of claim 9 wherein the replication service is further configured to determine a replication endpoint for the secondary replication group and to notify the secondary replication group of a name of a node associated with the replication endpoint.
12. The system of claim 9 wherein the replication service is further configured to:
select a replication group that is connected to the node to become a new primary replication group;
convert the selected replication group to become the new primary replication group; and
convert the primary replication group to the secondary replication group.
13. The system of claim 9 wherein the secondary replication group is located on a second site different from the first site.