CN102780769A - Cloud computing platform-based disaster recovery storage method - Google Patents

Cloud computing platform-based disaster recovery storage method

Info

Publication number
CN102780769A
Authority
CN
China
Prior art keywords
data
node
storage
nodes
block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2012102292274A
Other languages
Chinese (zh)
Other versions
CN102780769B (en)
Inventor
付雄
王义波
王汝传
孙力娟
韩志杰
季一木
戴华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Dunhua Traffic Technology Co., Ltd.
Original Assignee
Nanjing Post and Telecommunication University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Post and Telecommunication University
Priority to CN201210229227.4A
Publication of CN102780769A
Application granted
Publication of CN102780769B
Expired - Fee Related
Anticipated expiration

Abstract

The invention discloses a cloud computing platform-based disaster recovery storage model. With this model, the requirements on network bandwidth and node storage capacity during deployment are reduced; at the same time, even when P of the data nodes fail simultaneously, the data can still be recovered quickly and completely, so data integrity is well guaranteed. The principle of the model is that when the data uploaded by a user is large, the data is first split to a certain degree and the resulting data blocks are then stored on the data nodes in an interleaved manner. Storage realized in this way allows P nodes of the node cluster to fail at the same time; that is, even when P nodes in the cluster fail simultaneously, data integrity is still guaranteed, and the recovery speed is much higher than that of the currently mainstream full-replica placement model, where P is a constant smaller than the number of nodes.

Description

A disaster recovery storage method based on a cloud computing platform
Technical field
The present invention is a disaster recovery storage method based on a cloud computing platform, mainly used to guarantee the reliability and security of data in a cluster. It belongs to the fields of distributed computing and cloud computing.
Background technology
With the continuous development of Internet and computer technology, the amount of data transmitted over the network and the demand for processing power keep growing. Users want a direct and convenient computing mode that requires no locally installed application software: as long as they are connected to the Internet, they can use the idle computing resources in the network to process their tasks.
Against this background, cloud computing has emerged. In cloud computing, a remote client connects through the network to a cloud computing platform composed of large clusters of servers and storage devices and obtains the services it needs. A cloud computing service provider divides a complex computing task into several parts, processes them cooperatively on computers distributed in the network, and finally returns the result to the client, so that the user's data is processed on a remote cluster of computing resources.
The concept of cloud storage is similar to that of cloud computing. It refers to a system that, through functions such as cluster applications, grids or distributed file systems, aggregates a large number of different types of storage devices in the network into collaborative operation by means of application software, and jointly provides data storage and access services to users on demand.
To the user, cloud storage does not mean a specific device, but an aggregate formed by many storage devices and servers. Using cloud storage does not mean using some particular storage device, but using a data access service provided by the whole cloud storage system. Strictly speaking, cloud storage is therefore not storage but a service. In short, the core of cloud storage is the combination of application software with storage devices, turning storage devices into storage services through application software.
The available network bandwidth and the reliability and security of the stored data are currently the key factors limiting the popularity of cloud storage technology. Which storage model can upload data to the cloud storage servers quickly under the existing network bandwidth while still guaranteeing data reliability and security is a popular topic that has attracted many engineers and researchers.
At present, the mainstream way to guarantee the reliability and security of stored data is to make full backups of the data and deploy each copy on a different node, so that the data is not lost as a consequence of server failures or natural disasters. However, while this approach guarantees data reliability and security, it also makes replication very time-consuming, and nodes may fail during the replication itself. Studying these problems, the present invention proposes a disaster recovery storage model that addresses them while still guaranteeing high data reliability and security.
Summary of the invention
Technical problem: To guarantee high reliability of stored data, cloud storage tends to adopt a replica-based disaster recovery mechanism, which allows efficient recovery when a node fails. However, because of the limited network bandwidth, deployment is often slow; and because each node stores a large amount of data, recovering data from a single node after a disaster is slow, which increases the probability that a backup node fails during the recovery period. To address these problems, the present invention proposes a disaster recovery storage model based on cloud computing.
Technical scheme: The present invention is a disaster recovery storage model that reduces the requirements on network bandwidth and node storage capacity during deployment; at the same time, even when P data nodes fail simultaneously, the data can still be recovered efficiently and completely, guaranteeing data reliability. Its principle is that when the data uploaded by a user is large, the data is first split to a certain degree and the resulting blocks are then stored on the data nodes in an interleaved manner. Storage realized in this way allows P nodes of the node cluster to fail at the same time, that is, data integrity is still guaranteed when P data nodes fail simultaneously, and recovery is fast. Here P is a constant smaller than the number of data nodes.
The main steps of the method are as follows:
Step 1. According to the design requirements and the system performance, determine the number of replica nodes N; at the same time, determine the threshold T for the size S of the user data file, where the threshold T is used to decide which replication scheme will be applied.
Step 2. Compute the user data file size S; if S is smaller than the threshold T, go to step 3, otherwise go to step 4.
Step 3. Replicate the data N times, and store on each node one complete copy of the data together with its checksum.
Step 4. Divide the whole data file into N parts of equal size according to the number of data nodes N.
Step 5. Divide each part again into m blocks of equal size, where m is a parameter set by the user; the size s of each block after the splitting is calculated as s = S / (N × m).
Step 6. Allocate the split blocks evenly to the N nodes, so that each node stores m blocks; these blocks are called the local data D_i of node n_i. Note: D denotes local data, and D_i denotes the local data of node n_i.
Step 7. Divide the local data blocks stored on node n_i into logical groups, a fixed number of blocks forming one group, and number the groups obtained after the division.
Step 8. From step 6, the local data of each node can be divided into N-1 logical groups; denote the logical groups G_1, G_2, ..., G_{N-1}, let G(n_i) denote all logical groups of node n_i, and let g denote the current logical group.
Step 9. Let the set of all nodes participating in the storage, excluding node n_i, be the remaining node set of n_i; find the remaining node set of each node, i.e. R(n_i) = {n_1, n_2, ..., n_N} \ {n_i}.
Step 10. Store the logical groups G(n_i) of node n_i onto the N-1 other nodes excluding n_i, i.e. G_j → n_k with n_k ∈ R(n_i), each group being placed on a different node of the remaining node set according to a fixed assignment rule determined by a specified constant c, where "→" denotes "is stored to".
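As an illustration of steps 1 to 6, the following Python sketch shows how an uploaded file could either be fully replicated or be split into N × m equal-sized blocks that are then assigned to the data nodes. It is a minimal sketch under assumed conventions: the names split_and_assign and threshold, the MD5 checksum, and the zero-padding of the last block are illustrative choices, not part of the claimed method.

import hashlib

def split_and_assign(data: bytes, n_nodes: int, m: int, threshold: int):
    """Steps 1-6: choose full replication or splitting, then return a
    mapping from node index to that node's local data blocks."""
    size = len(data)                              # S, the user data file size
    if size < threshold:                          # step 2: S below the threshold T
        # step 3: every node keeps one complete copy of the data plus its checksum
        checksum = hashlib.md5(data).hexdigest()
        return {i: [(data, checksum)] for i in range(n_nodes)}
    # steps 4-5: split the file into N parts and each part into m blocks,
    # so every block has size s = S / (N * m), rounded up and zero-padded here
    block_size = -(-size // (n_nodes * m))
    padded = data.ljust(n_nodes * m * block_size, b"\0")
    blocks = [padded[i:i + block_size] for i in range(0, len(padded), block_size)]
    # step 6: m consecutive blocks become the local data D_i of node n_i
    return {i: blocks[i * m:(i + 1) * m] for i in range(n_nodes)}

For example, split_and_assign(data, n_nodes=4, m=3, threshold=64 * 2**20) would give each of 4 nodes 3 local blocks when the uploaded data is at least as large as the assumed 64 MiB threshold, and 4 full copies otherwise.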
Beneficial effects: The present invention proposes a disaster recovery storage model based on a cloud computing platform. Compared with the currently mainstream full-replica placement disaster recovery model, its main advantages are: when a few of the data nodes fail at the same time, the data can still be recovered quickly and completely, so data integrity is well guaranteed. At the same time, since complete copies are not placed on the data nodes, both the initial placement of the replicas and the data recovery after a failure are fast, and the requirement on network bandwidth is low, which further guarantees high data reliability. Finally, the model also has a large advantage in the utilization of the storage space of the data nodes.
A more detailed explanation is given below:
When a user needs to upload data to the cloud storage servers, the traditional replica placement model copies the user's data in full, as many times as the required number of nodes, and places one complete copy on each node. This model has rather high requirements on network bandwidth and node capacity; after a data node fails, data recovery is slow, and if another replica node fails during the recovery, data integrity can no longer be guaranteed.
With the disaster recovery storage model of the present invention, the data uploaded by the user is first analyzed: if the data is large, the interleaved partial-replica storage scheme is adopted; if the data is small, the traditional full-replica storage scheme is used.
This disaster recovery storage model greatly reduces the requirements on node storage capacity and network bandwidth; most importantly, even when a few data nodes fail at the same time, it can still recover the data quickly and guarantee data reliability.
Description of drawings
Fig. 1 is the overall architecture diagram.
Fig. 2 is the flow chart of the disaster recovery storage model based on a cloud computing platform.
Embodiment
The disaster recovery storage method based on a cloud computing platform of the present invention can still recover the data quickly even when a few data nodes fail at the same time, guaranteeing data reliability. Its main steps are as follows:
Step 1. According to the design requirements and the system performance, determine the number of replica nodes N; at the same time, determine the threshold T for the size S of the user data file, where the threshold T is used to decide which replication scheme will be applied.
Step 2. Compute the user data file size S; if S is smaller than the threshold T, go to step 3, otherwise go to step 4.
Step 3. Replicate the data N times, and store on each node one complete copy of the data together with its checksum.
Step 4. Divide the whole data file into N parts of equal size according to the number of data nodes N.
Step 5. Divide each part again into m blocks of equal size, where m is a parameter set by the user; the size s of each block after the splitting is calculated as s = S / (N × m).
Step 6. Allocate the split blocks evenly to the N nodes, so that each node stores m blocks; these blocks are called the local data D_i of node n_i. Note: D denotes local data, and D_i denotes the local data of node n_i.
Step 7. Divide the local data blocks stored on node n_i into logical groups, a fixed number of blocks forming one group, and number the groups obtained after the division.
Step 8. From step 6, the local data of each node can be divided into N-1 logical groups; denote the logical groups G_1, G_2, ..., G_{N-1}, let G(n_i) denote all logical groups of node n_i, and let g denote the current logical group.
Step 9. Let the set of all nodes participating in the storage, excluding node n_i, be the remaining node set of n_i; find the remaining node set of each node, i.e. R(n_i) = {n_1, n_2, ..., n_N} \ {n_i}.
Step 10. Store the logical groups G(n_i) of node n_i onto the N-1 other nodes excluding n_i, i.e. G_j → n_k with n_k ∈ R(n_i), each group being placed on a different node of the remaining node set according to a fixed assignment rule determined by a specified constant c, where "→" denotes "is stored to".
1. Data splitting
Before the data is split, the data uploaded by the user to the server is analyzed first. If the uploaded data volume is smaller than the threshold preset by the system, the full-replica placement model is adopted; if the uploaded data volume is larger than the preset threshold, the data is split, as described in steps 3, 4 and 5.
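As a purely illustrative worked example of this split (the concrete numbers are assumptions, not values prescribed by the method): with N = 4 data nodes, a user parameter m = 3 and an uploaded file of S = 1200 MB that exceeds the threshold, step 4 first cuts the file into 4 parts of 300 MB each, step 5 cuts each part into 3 blocks, giving N × m = 12 blocks of size s = S / (N × m) = 1200 / 12 = 100 MB, and after the even allocation of step 6 every node holds m = 3 local blocks.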
2. Interleaved data storage
This stage can be roughly divided into three processes: dividing logical groups, finding the remaining node sets, and interleaved storage. The first two processes are preparation for the interleaved storage; the last process is the actual interleaved placement. Dividing logical groups means partitioning the data stored on each node so that it logically forms groups, as described in steps 6, 7 and 8. Finding the remaining node set means that, among the nodes participating in the storage, each node takes all nodes other than itself as its remaining node set, as described in step 9. Interleaved storage means that each node, taking its divided logical groups as the unit, stores its own data onto its remaining node set in a fixed pattern, as described in step 10. A sketch of this placement is given below.
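To make the three processes concrete, the following Python sketch continues the split_and_assign sketch given after the summary steps; local is the node-to-blocks mapping that sketch returns in the split case. The round-robin grouping and the cyclic choice remaining[(j + c) % (N - 1)] are assumptions chosen only to satisfy what steps 7 to 10 require (N - 1 groups per node, every group placed on a different node of the remaining set, the assignment fixed by a specified constant c); the patent itself does not spell out the exact grouping or assignment formula.

def interleave(local, n_nodes: int, c: int = 1):
    """Steps 7-10: divide each node's local data into N-1 logical groups and
    place every group on a different node of that node's remaining node set."""
    placement = {k: [] for k in range(n_nodes)}   # redundant groups received by each node
    for i, blocks in local.items():
        # steps 7-8: the m local blocks of node n_i form N-1 logical groups (round-robin here)
        groups = [blocks[j::n_nodes - 1] for j in range(n_nodes - 1)]
        # step 9: remaining node set R(n_i) = all participating nodes except n_i
        remaining = [k for k in range(n_nodes) if k != i]
        for j, group in enumerate(groups):
            # step 10: G_j of node n_i is stored to one node of R(n_i),
            # chosen here by a fixed cyclic offset given by the constant c
            target = remaining[(j + c) % (n_nodes - 1)]
            placement[target].append((i, j, group))
    return placement

In this sketch every logical group of a node also ends up on exactly one other node, so a failed node can be rebuilt by pulling small groups from several peers in parallel instead of copying a complete replica from a single node, which matches the recovery-speed argument made above.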

Claims (1)

1. A disaster recovery storage method based on a cloud computing platform, characterized in that even when a few of the data nodes fail at the same time, the method can still recover the data quickly and guarantee data reliability, its main steps being as follows:
Step 1. According to the design requirements and the system performance, determine the number of replica nodes N; at the same time, determine the threshold T for the size S of the user data file, where the threshold T is used to decide which replication scheme will be applied.
Step 2. Compute the user data file size S; if S is smaller than the threshold T, go to step 3, otherwise go to step 4.
Step 3. Replicate the data N times, and store on each node one complete copy of the data together with its checksum.
Step 4. Divide the whole data file into N parts of equal size according to the number of data nodes N.
Step 5. Divide each part again into m blocks of equal size, where m is a parameter set by the user; the size s of each block after the splitting is calculated as s = S / (N × m).
Step 6. Allocate the split blocks evenly to the N nodes, so that each node stores m blocks; these blocks are called the local data D_i of node n_i. Note: D denotes local data, and D_i denotes the local data of node n_i.
Step 7. Divide the local data blocks stored on node n_i into logical groups, a fixed number of blocks forming one group, and number the groups obtained after the division.
Step 8. From step 6, the local data of each node can be divided into N-1 logical groups; denote the logical groups G_1, G_2, ..., G_{N-1}, let G(n_i) denote all logical groups of node n_i, and let g denote the current logical group.
Step 9. Let the set of all nodes participating in the storage, excluding node n_i, be the remaining node set of n_i; find the remaining node set of each node, i.e. R(n_i) = {n_1, n_2, ..., n_N} \ {n_i}.
Step 10. Store the logical groups G(n_i) of node n_i onto the N-1 other nodes excluding n_i, i.e. G_j → n_k with n_k ∈ R(n_i), each group being placed on a different node of the remaining node set according to a fixed assignment rule determined by a specified constant c, where "→" denotes "is stored to".
CN201210229227.4A 2012-07-04 2012-07-04 Cloud computing platform-based disaster recovery storage method Expired - Fee Related CN102780769B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210229227.4A CN102780769B (en) 2012-07-04 2012-07-04 Cloud computing platform-based disaster recovery storage method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210229227.4A CN102780769B (en) 2012-07-04 2012-07-04 Cloud computing platform-based disaster recovery storage method

Publications (2)

Publication Number Publication Date
CN102780769A true CN102780769A (en) 2012-11-14
CN102780769B CN102780769B (en) 2015-01-28

Family

ID=47125521

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210229227.4A Expired - Fee Related CN102780769B (en) 2012-07-04 2012-07-04 Cloud computing platform-based disaster recovery storage method

Country Status (1)

Country Link
CN (1) CN102780769B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070079082A1 (en) * 2005-09-30 2007-04-05 Gladwin S C System for rebuilding dispersed data
CN101630282A (en) * 2009-07-29 2010-01-20 国网电力科学研究院 Data backup method based on Erasure coding and copying technology
CN101902498A (en) * 2010-07-02 2010-12-01 广州鼎甲计算机科技有限公司 Network technology based storage cloud backup method
CN202110552U (en) * 2011-04-18 2012-01-11 江苏技术师范学院 Software protection device based on multi-body interleaved storage technology

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
付雄: "树型数据网格环境下副本放置算法研究" (Research on replica placement algorithms in a tree-structured data grid environment), 《南京邮电大学学报》 (Journal of Nanjing University of Posts and Telecommunications), 30 June 2011 (2011-06-30) *
祝建武: "云存储在企业容灾备份中全新模式探析" (Analysis of a new model of cloud storage for enterprise disaster recovery backup), 《现代商贸工业》 (Modern Business Trade Industry), 31 March 2011 (2011-03-31) *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104050015A (en) * 2014-06-27 2014-09-17 国家计算机网络与信息安全管理中心 Mirror image storage and distribution system for virtual machines
CN104184741A (en) * 2014-09-05 2014-12-03 重庆市汇链信息科技有限公司 Method for distributing massive audio and video data into distribution server
CN106027653A (en) * 2016-05-23 2016-10-12 华中科技大学 Multi-cloud storage system expansion method based on RAID4 (Redundant Array of Independent Disks)
CN106027653B (en) * 2016-05-23 2019-04-12 华中科技大学 A kind of cloudy storage system extended method based on RAID4
CN108241544A (en) * 2016-12-23 2018-07-03 航天星图科技(北京)有限公司 A kind of fault handling method based on cluster
CN108241544B (en) * 2016-12-23 2023-06-06 中科星图股份有限公司 Fault processing method based on clusters
CN107528719A (en) * 2017-03-08 2017-12-29 深圳市泽云科技有限公司 A kind of implementation method for lifting cloud storage system high availability
CN107395745A (en) * 2017-08-20 2017-11-24 长沙曙通信息科技有限公司 A kind of distributed memory system data disperse Realization of Storing
CN109445704A (en) * 2018-10-29 2019-03-08 南京录信软件技术有限公司 A method of it is comprehensive to store mass data using plurality of devices
CN109669930A (en) * 2018-12-14 2019-04-23 成都四方伟业软件股份有限公司 Quality of data report-generating method and system

Also Published As

Publication number Publication date
CN102780769B (en) 2015-01-28

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C41 Transfer of patent application or patent right or utility model
TR01 Transfer of patent right

Effective date of registration: 20160309

Address after: 210046 Jiangsu Province Economic and Technological Development Zone Nanjing Xing Zhi road Xingzhi Science Park building B room 0910

Patentee after: Nanjing Dunhua Traffic Technology Co., Ltd.

Address before: 210003 Gulou District, Jiangsu, Nanjing new model road, No. 66

Patentee before: Nanjing Post & Telecommunication Univ.

PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Cloud computing platform-based disaster recovery storage method

Effective date of registration: 20190830

Granted publication date: 20150128

Pledgee: Zijin Branch of Nanjing Bank Co., Ltd.

Pledgor: Nanjing Dunhua Traffic Technology Co., Ltd.

Registration number: Y2019980000087

PE01 Entry into force of the registration of the contract for pledge of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20150128

Termination date: 20200704

CF01 Termination of patent right due to non-payment of annual fee