CN102739720B - Distributed cache server system and application method thereof, cache clients and cache server terminals - Google Patents


Info

Publication number: CN102739720B
Authority: CN (China)
Prior art keywords: cache server, cache client, cache, data
Legal status: Active (the legal status is an assumption, not a legal conclusion; Google has not performed a legal analysis)
Application number: CN201110093902.0A
Other versions: CN102739720A (Chinese)
Inventors: 戴林 (Dai Lin), 吴丽梅 (Wu Limei)
Original and current assignee: ZTE Corp (the listed assignees may be inaccurate)
Application filed by ZTE Corp
Priority: CN201110093902.0A; PCT/CN2011/075964 (WO2012139328A1)
Publication of application: CN102739720A; application granted and published as CN102739720B

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/50: Network services
    • H04L 67/56: Provisioning of proxy services
    • H04L 67/568: Storing data temporarily at an intermediate stage, e.g. caching
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00: Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/18: Information format or content conversion, e.g. adaptation by the network of the transmitted or received information for the purpose of wireless delivery to users or terminals

Abstract

The invention provides a distributed cache server system comprising: one or more cache clients, each of which obtains information on all cache servers from an in-memory database server, establishes connections with the cache servers, and generates and periodically maintains a link list of those connections; the in-memory database server, which creates and maintains a cache server information table and a catalogue table mapping data storage categories to cache servers, and which processes the cache server information reported by the cache servers; and one or more cache servers, each of which reports its information to the in-memory database server and manages the cached data blocks. The invention further provides an application method for the distributed cache server system, a cache client, and a cache server. With this technical scheme, deployment and use of the cache server system become simple and convenient, access is fast, and the system can be expanded and upgraded automatically.

Description

Distributed cache server system and application method thereof, cache client, and cache server
Technical field
The present invention relates to the field of data storage, and in particular to a distributed cache server system and an application method thereof, a cache client, and a cache server.
Background technology
With the development of Internet technology and web applications, caching techniques have come into ever wider use and many variants have flourished. Among them, distributed cache systems used in Internet applications can markedly relieve the performance bottleneck between database servers and web servers.
Existing distributed cache server systems mostly adopt a master-slave architecture: the master server maintains a data directory recording which data each slave server stores, accepts data requests from users, and accesses the slave servers according to that directory; at the same time, the master server must manage every slave storage server and handle load balancing. Under this model, on the one hand, the master server is under great pressure, and once it goes down the whole system becomes unusable; moreover, when the amount of cached content reaches a certain level, the data directory on the master server becomes very large and directory lookups become very slow. On the other hand, because all master and slave server information must be configured in advance, updating and expanding the whole master-slave cache system at runtime is extremely inconvenient.
Summary of the invention
The technical problem solved by the present invention is to provide a distributed cache server system and an application method thereof, a cache client, and a cache server, so as to overcome the extreme inconvenience of updating and expanding existing distributed cache systems at runtime, as well as the inefficiency of the master server when the data volume is large.
To solve the above problems, the invention provides a distributed cache server system comprising one or more cache clients, one or more cache servers, and an in-memory database server, wherein:
the cache client obtains information on all cache servers from the in-memory database server, establishes connections with the cache servers, and generates and periodically maintains a link list;
the in-memory database server creates and maintains a cache server information table and a catalogue table mapping data storage categories to cache servers, and, upon receiving cache server information reported by a cache server, updates the corresponding disk-space data if the database already contains that cache server's information, or otherwise inserts the information into the database;
the cache server reports its information to the in-memory database server and manages the cached data blocks.
In the above system, the cache server information includes the local IP address, the cache service port, and the current free disk space.
In the above system, the cache client obtaining information on all cache servers from the in-memory database server, connecting to the cache servers, and generating the link list is specifically as follows:
upon start-up, the cache client periodically queries the in-memory database server for information on all cache servers; from the information the server returns, the cache client builds a link list storing the information of each cache server, establishes a connection with each cache server, and stores the connection state in the link list.
The role of the client's link list is as follows: when a data request arrives, the cache client classifies the requested data and queries the database for the cache server holding that category; when the database returns the cache server's information, the client must verify that information against its link list, and only if it checks out does the client forward the request to the cache server; otherwise the client returns an error response.
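The link list described above can be sketched as a small in-memory table keyed by server address. This is an illustrative sketch only; the class name `LinkTable` and its two-field entries are assumptions, not part of the patent.

```python
import time

class LinkTable:
    """Minimal sketch of the client-side link list: one entry per cache
    server, recording its address/port and last-known connection state."""

    def __init__(self):
        self.entries = {}  # (ip, port) -> {"alive": bool, "checked": float}

    def add_server(self, ip, port):
        # Newly discovered servers from the database are added to the list.
        self.entries.setdefault((ip, port), {"alive": True, "checked": time.time()})

    def mark(self, ip, port, alive):
        # Heartbeat results update the stored connection state.
        self.entries[(ip, port)] = {"alive": alive, "checked": time.time()}

    def verify(self, ip, port):
        # Before forwarding a request to a server returned by the database,
        # the client checks that a live link to that server exists.
        e = self.entries.get((ip, port))
        return e is not None and e["alive"]

table = LinkTable()
table.add_server("10.0.0.1", 11211)
table.mark("10.0.0.2", 11211, False)
print(table.verify("10.0.0.1", 11211))  # live link: request may be forwarded
print(table.verify("10.0.0.2", 11211))  # dead link: error response instead
print(table.verify("10.0.0.3", 11211))  # unknown server: error response
```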
In the above system, the cache client periodically maintaining the link list is specifically as follows:
the cache client periodically obtains information on all cache servers from the database and adds newly appearing cache servers to the link list; and
the cache client uses heartbeat messages to periodically probe the state of the link to each cache server, writes the updated connection states into the link list, and thereby keeps the link list up to date; the link list is used for link verification when the cache client processes requests.
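One maintenance pass of the heartbeat probing just described might look like the following sketch; the patent does not specify the heartbeat transport, so using a short TCP connection attempt as the probe is an assumption for illustration.

```python
import socket

def probe(ip, port, timeout=1.0):
    """Hypothetical heartbeat: attempt a short TCP connection to the
    cache service port and report whether it succeeded."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

def refresh_links(links):
    """One maintenance pass: probe every known server and write the
    result back into the link list (a dict keyed by (ip, port))."""
    for (ip, port) in list(links):
        links[(ip, port)] = probe(ip, port)
    return links

# Port 1 on localhost is almost certainly closed, so this entry will
# normally be marked dead after the pass.
links = {("127.0.0.1", 1): True}
refresh_links(links)
print(links)
```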
Preferably, after receiving an insertion request from an application, the cache client determines the category of the data request and then queries the in-memory database's catalogue table mapping data categories to cache servers; it verifies against its link list whether the cache server information returned by the database is valid, and if so forwards the insertion request immediately; if not, it finds a valid link in the current link list and forwards the insertion request over it; and after forwarding the application's insertion request, it sends the corresponding data category and the storing cache server's information to the database for updating.
The in-memory database server judges whether corresponding cache server information exists in the database; if not, it looks up the cache server with the most free disk space and returns it to the cache client; if so, it returns the corresponding cache server information to the cache client.
After receiving the insertion request, the cache server inserts the data block into memory and writes the data block to a file stored on disk. Specifically, it builds a hash table of data blocks in memory, places the inserted data block into an in-memory data-block linked list keyed by the data-block request link, and at the same time writes the data block to a file on disk.
In the above system, the cache server also manages the in-memory data-block linked list; specifically, it uses a least-recently-used (LRU) algorithm, placing newly inserted or newly accessed data blocks at the head of the list, so that the whole list is ordered by access time.
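The LRU management described above can be sketched with Python's `OrderedDict`, used here as an illustrative stand-in for the patent's hash table plus linked list rather than as the claimed implementation.

```python
from collections import OrderedDict

class LruBlockCache:
    """Sketch of LRU block management: the most recently used block sits
    at the head of the list; the least recently used is evicted first."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()  # key (request link) -> data block

    def insert(self, key, block):
        self.blocks[key] = block
        self.blocks.move_to_end(key, last=False)  # new block to the head
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=True)        # evict the oldest block

    def lookup(self, key):
        if key not in self.blocks:
            return None
        self.blocks.move_to_end(key, last=False)  # accessed block to the head
        return self.blocks[key]

cache = LruBlockCache(capacity=2)
cache.insert("/a", b"A")
cache.insert("/b", b"B")
cache.lookup("/a")          # /a becomes most recently used
cache.insert("/c", b"C")    # evicts /b, the least recently used
print(list(cache.blocks))   # ['/c', '/a']
```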
Preferably, the cache server writing a data block to a file stored on disk is specifically as follows:
the cache server encrypts the data-request category to obtain a 32-character ciphertext category code, composes a storage path from that code, and writes the data block as a binary file under that path.
In the above system, composing the storage path from the 32-character ciphertext category code is specifically as follows:
the 32-character ciphertext is composed of digits and lowercase English letters; the first two characters of the code form the first-level directory name, the third and fourth characters form the second-level subdirectory name, the third-level subdirectory is named with the full 32-character code, and the fourth level is the binary file into which the data block is written.
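A 32-character code of digits and lowercase letters matches the shape of an MD5 hex digest, so the directory layering can be sketched as below. The patent only says the category is "encrypted" into a 32-character code, so the choice of MD5 (and the `/var/cache` root) is an assumption for illustration.

```python
import hashlib
import os

def category_path(root, category):
    """Build the layered storage path from a data category:
    <root>/<first 2 chars>/<chars 3-4>/<full 32-char code>/."""
    code = hashlib.md5(category.encode("utf-8")).hexdigest()  # 32 chars, 0-9a-f
    return os.path.join(root, code[:2], code[2:4], code), code

def block_filename(request_link):
    """Name the data-block file with the 32-char code of its request link."""
    return hashlib.md5(request_link.encode("utf-8")).hexdigest()

path, code = category_path("/var/cache", "news")
print(path)                       # e.g. /var/cache/<2 chars>/<2 chars>/<32-char dir>
print(len(code))                  # 32
print(block_filename("/news/1"))  # 32-character file name
```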
Preferably, after receiving a query request from an application, the cache client determines the category of the data request, queries the in-memory database's catalogue table mapping data categories to cache servers, and, from the cache server information the database returns, verifies the current state of that cache server in its link list; if the server is available, the client forwards the query request to it and, after the cache server locates the data block, returns the result; otherwise the client returns a failure response.
The cache server locates and fetches the data block from its memory according to the query request.
In the above system, the cache server locating and fetching the data block from its memory according to the query request is specifically as follows:
the cache server looks up the hash table by the key value of the query request; if the block is found, it updates the corresponding data-block linked list and returns the data to the cache client; if the block is not found in memory, it locates the data block directly on disk according to the file naming and storage scheme described above, returning the data if the file exists and a failure response otherwise.
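The memory-then-disk lookup can be sketched as follows, combining an in-memory dict with a hashed on-disk layout. Using MD5 for both the category path and the file name is an assumption carried over from the storage description; the helper names are illustrative.

```python
import hashlib
import os
import tempfile

def lookup_block(mem, root, category, request_link):
    """Return the data block for a request: first probe the in-memory
    hash table keyed by the request link, then fall back to the file
    under the hashed category path named by the hashed link."""
    if request_link in mem:
        return mem[request_link]                  # memory hit
    cat = hashlib.md5(category.encode()).hexdigest()
    name = hashlib.md5(request_link.encode()).hexdigest()
    path = os.path.join(root, cat[:2], cat[2:4], cat, name)
    if os.path.exists(path):                      # disk hit
        with open(path, "rb") as f:
            return f.read()
    return None                                   # miss: failure response

# Populate a throwaway on-disk layout for one block, then look it up.
root = tempfile.mkdtemp()
cat = hashlib.md5(b"news").hexdigest()
d = os.path.join(root, cat[:2], cat[2:4], cat)
os.makedirs(d)
with open(os.path.join(d, hashlib.md5(b"/news/1").hexdigest()), "wb") as f:
    f.write(b"block-1")

print(lookup_block({}, root, "news", "/news/1"))                 # disk hit
print(lookup_block({"/news/2": b"x"}, root, "news", "/news/2"))  # memory hit
print(lookup_block({}, root, "news", "/news/3"))                 # miss: None
```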
The invention also provides a cache client that obtains information on all cache servers from an in-memory database server, connects to the cache servers, and generates and periodically maintains a link list.
The invention also provides a cache server that reports its information to an in-memory database server and manages the cached data blocks.
The invention also provides an application method for the distributed cache server system, comprising:
the cache server reports its information to the in-memory database server;
the in-memory database server creates and maintains a cache server information table and a catalogue table mapping data storage categories to cache servers, and, upon receiving cache server information reported by a cache server, updates the corresponding disk-space data if the database already contains that cache server's information, or otherwise inserts the information into the database;
the cache client obtains information on all cache servers from the in-memory database server, connects to the cache servers, and generates and periodically maintains a link list.
In the above method, the cache server information includes the local IP address, the cache service port, and the current free disk space.
In the above method, the cache client obtaining information on all cache servers from the in-memory database server, connecting to the cache servers, and generating the link list is specifically as follows:
upon start-up, the cache client periodically queries the in-memory database server for information on all cache servers; from the information the server returns, the cache client builds a link list storing the information of each cache server, establishes a connection with each cache server, and stores the connection state in the link list.
The role of the client's link list is as follows: when a data request arrives, the cache client classifies the requested data and queries the database for the cache server holding that category; when the database returns the cache server's information, the client verifies it against its link list, and only if it checks out does the client forward the request to the cache server; otherwise the client returns an error response.
In the above method, the cache client periodically maintaining the link list is specifically as follows:
the cache client periodically obtains information on all cache servers from the database and adds newly appearing cache servers to the link list; and
the cache client uses heartbeat messages to periodically probe the state of the link to each cache server, writes the updated connection states into the link list, and thereby keeps the link list up to date; the link list is used for link verification when the cache client processes requests.
The above method further comprises:
after receiving an insertion request from an application, the cache client determines the category of the data request and then queries the in-memory database's catalogue table mapping data categories to cache servers;
the in-memory database server judges whether corresponding cache server information exists in the database; if not, it looks up the cache server with the most free disk space and returns it to the cache client; if so, it returns the corresponding cache server information to the cache client;
the cache client verifies against its link list whether the cache server information returned by the database is valid, and if so forwards the insertion request immediately; if not, it finds a valid link in the current link list and forwards the insertion request over it; after forwarding the application's insertion request, it sends the corresponding data category and the storing cache server's information to the database for updating;
after receiving the insertion request, the cache server inserts the data block into memory and writes the data block to a file stored on disk; specifically, it builds a hash table of data blocks in memory, places the inserted data block into an in-memory data-block linked list keyed by the data-block request link, and at the same time writes the data block to a file on disk.
In the above method, the cache server writing the data block to a file stored on disk is specifically as follows:
the cache server encrypts the data-request category to obtain a 32-character ciphertext category code, composes a storage path from that code, and writes the data block as a binary file under that path.
In the above method, composing the storage path from the 32-character ciphertext category code is specifically as follows:
the 32-character ciphertext is composed of digits and lowercase English letters; the first two characters form the first-level directory name, the third and fourth characters form the second-level subdirectory name, the third-level subdirectory is named with the full 32-character code, and the fourth level is the binary file into which the data block is written.
The above method further comprises:
after receiving a query request from an application, the cache client determines the category of the data request, queries the in-memory database's catalogue table mapping data categories to cache servers, and, from the cache server information the database returns, verifies the current state of that cache server in its link list; if the server is available, the client forwards the query request to it and, after the cache server locates the data block, returns the result; otherwise the client returns a failure response;
the cache server locates and fetches the data block from its memory according to the query request.
In the above method, the cache server locating and fetching the data block from its memory according to the query request is specifically as follows:
the cache server looks up the hash table by the key value of the query request; if the block is found, it updates the corresponding data-block linked list and returns the data to the cache client; if the block is not found in memory, it locates the data block directly on disk according to the file naming and storage scheme described above, returning the data if the file exists and a failure response otherwise.
With the technical scheme of the present invention, setting up and using the cache server system becomes simpler and more convenient, access is faster, and the system can expand and upgrade automatically.
With the preferred scheme of the present invention, the client needs only one database operation to process an application's insertion or query request, plus one further database update on insertion, which effectively reduces database access pressure and improves system access speed.
With the preferred scheme of the present invention, by the above method of constructing storage paths and file names, the directory structure is clear, the cache server can directly hit every data block and data-block file, and data access is quick and convenient.
Brief description of the drawings
The accompanying drawings described here are provided for further understanding of the present invention and form a part of it; the schematic embodiments of the present invention and their descriptions serve to explain the invention and do not unduly limit it. In the drawings:
Fig. 1 is a system structure diagram of an embodiment of the present invention;
Fig. 2 illustrates the naming scheme by which the cache server writes a data block to a file stored on disk;
Fig. 3 is a flow chart of an embodiment of the method of the present invention;
Fig. 4 is a flow chart of data insertion in the method embodiment;
Fig. 5 is a flow chart of data query in the method embodiment.
Embodiments
To make the technical problems to be solved, the technical scheme, and the beneficial effects of the present invention clearer, the invention is further elaborated below with reference to the drawings and embodiments. It should be appreciated that the specific embodiments described here serve only to explain the present invention and are not intended to limit it.
As shown in Fig. 1, the system structure diagram of an embodiment of the present invention, a distributed cache server system is provided, comprising one or more cache clients, one or more cache servers, and an in-memory database server, wherein:
the cache client obtains cache server information from the in-memory database server and generates and periodically maintains a link list;
specifically, the cache client belongs to the application side that needs to use the cache server system and connects over the intranet using TCP/IP; when it starts, the cache client obtains information on all cache servers from the in-memory database server and actively establishes connections with the cache servers, and afterwards it periodically obtains cache server information from the database to maintain the links to the cache servers;
the in-memory database server creates and maintains a cache server information table (including the cache server IP address, cache service port, free disk space, and similar information) and a catalogue table mapping data storage categories to cache servers; upon receiving cache server information reported by a cache server, it updates the corresponding disk-space data if the information is already in the database, or otherwise inserts the information into the database;
the cache server reports its information to the in-memory database server: specifically, upon start-up it periodically sends its local IP address, cache service port, current local free disk space, and similar information to the in-memory database server.
In the above system, the automatic connection and link maintenance performed after the cache client starts specifically comprise the following.
Upon start-up, the cache client periodically sends messages to the in-memory database to query information on all cache servers and generates the client link list. The client obtains cache server information from the database and generates the link list as follows: from all the cache server information the database returns, the client builds a linked list storing the information of each cache server, actively establishes a connection with each cache server, and stores the connection state in the list. The role of the client link list is: when a data request arrives, the client classifies the requested data and queries the database for the cache server holding that category; when the database returns the cache server's information, the client must verify it against the link list, and only if it checks out does the client forward the request to the peer; otherwise it returns an error response.
According to the information returned by the in-memory database server, the cache client actively initiates connection requests to each cache server, and periodically updates the link list with all the cache server information returned by the database and the link establishment status; newly appearing cache servers found in the periodic database queries are added to the list.
The cache client periodically backs up the current link list for later use. It uses heartbeat messages to periodically probe the state of each link to a cache server and updates the link list accordingly: it periodically sends heartbeat messages, detects the connection state of each cache server, and writes the updated connection states into the link list for link verification when the client processes requests.
The cache client processes an insertion request received from an application as follows.
The cache client determines the category of the data request and then queries the in-memory database's catalogue table mapping data categories to cache servers. Two cases arise: in case one, if no entry is found in the database, the database looks up, in its cache server information table, the cache server with the most free disk space and returns it to the cache client; in case two, if an entry is found, the database returns the found cache server information to the client. The client then checks whether the cache server returned by the database is valid; if so, it forwards the request immediately; if not, it finds a valid link in the current link list and forwards the request over it. After forwarding the application's insertion request, the client sends the corresponding data category and the storing cache server's information to the database for updating.
After accepting the request, the cache server inserts the data into memory and writes the data block to a file stored on disk in the specific manner described below. The cache server processes the insertion request by building a hash table of data blocks in memory, first placing the inserted data into the in-memory data-block linked list keyed by the data-request link, and at the same time writing the data to a file on disk. The cache server manages the in-memory data-block linked list with a least-recently-used (LRU) algorithm: newly inserted or newly accessed data blocks are placed at the head of the list, so the whole list is ordered by access time.
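The insert path, placing the block at the head of the LRU structure while also persisting it to disk, can be sketched in one function. The helper name, the MD5-based path, and the capacity value are illustrative assumptions.

```python
import hashlib
import os
import tempfile
from collections import OrderedDict

def insert_block(mem, root, category, request_link, block, capacity=1000):
    """Sketch of the cache server's insert path: store the block in the
    in-memory LRU table keyed by the request link, and also persist it
    under the hashed category path with the hashed link as file name."""
    mem[request_link] = block
    mem.move_to_end(request_link, last=False)   # new block to the list head
    if len(mem) > capacity:
        mem.popitem(last=True)                  # evict least recently used
    cat = hashlib.md5(category.encode()).hexdigest()
    d = os.path.join(root, cat[:2], cat[2:4], cat)
    os.makedirs(d, exist_ok=True)
    name = hashlib.md5(request_link.encode()).hexdigest()
    with open(os.path.join(d, name), "wb") as f:
        f.write(block)                          # binary file on disk

mem = OrderedDict()
root = tempfile.mkdtemp()
insert_block(mem, root, "news", "/news/1", b"hello")

cat = hashlib.md5(b"news").hexdigest()
name = hashlib.md5(b"/news/1").hexdigest()
print("/news/1" in mem)                                                  # in memory
print(os.path.exists(os.path.join(root, cat[:2], cat[2:4], cat, name)))  # and on disk
```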
As shown in Fig. 2, the naming scheme by which the cache server writes a data block to a file stored on disk is, specifically, as follows.
The cache server writes a data block to disk by encrypting the data-request category to obtain a 32-character ciphertext category code, composing the complete storage path from that code, and writing the data block as a binary file under that path. Specifically:
according to the data category, the cache server composes the absolute path from the 32-character ciphertext category code obtained after encryption, as follows: the 32-character ciphertext is composed of digits and lowercase English letters; the first two characters form the first-level directory name, the third and fourth characters form the second-level subdirectory name, the third-level subdirectory is named with the full 32-character code, and the fourth level is the binary file into which the data block is written.
The data-block file is named by encrypting the data-request link to obtain a 32-character ciphertext request code and naming the file with that code.
After the cache client has connected to each cache server, it processes query requests accepted from applications as follows.
The cache client determines the category of the data request, queries the in-memory database's catalogue table mapping data categories to cache servers, and, from the cache server information the database returns, verifies the current state of that cache server in the link list; if the server is available, the client forwards the request to that cache server and, after the cache server locates the data, returns the result; if any of the above conditions is not met, it returns a failure response immediately.
The cache server locates and fetches the data block from its memory: according to the key value of the query request, it looks up the hash table directly; if the block is found, it updates the corresponding data-block linked list and returns the data to the client; if the block is not found in memory, it locates the data block directly on disk according to the file naming and storage scheme above, returning the data if the file exists and a failure response otherwise.
As shown in Fig. 3, the flow chart of an embodiment of the method of the present invention, an application method for the distributed cache server system is provided, comprising:
Step S301: the cache server reports its information to the in-memory database server;
Step S302: the in-memory database server creates and maintains a cache server information table and a catalogue table mapping data storage categories to cache servers, and, upon receiving cache server information reported by a cache server, updates the corresponding disk-space data if the database already contains that cache server's information, or otherwise inserts the information into the database;
Step S303: the cache client obtains information on all cache servers from the in-memory database server, connects to the cache servers, and generates and periodically maintains a link list.
In the above method, the cache server information includes the local IP address, the cache service port, and the current free disk space.
In said method, cache client obtains all buffer service client informations from memory database server, connects with buffer service end, and link generation chained list is specially,
Described cache client upon actuation, timing is to all buffer service client informations of memory database server lookup, according to all buffer service client informations that described cache database server returns, link chained list is set up in cache client, wherein store the information of each buffer service end, cache client and buffer service end connect, and by connection status stored in link chained list;
Wherein, the effect of cache client link chained list is, when there being request of data, cache client by after the Data classification of described request of data, to the buffer service end that this classification of data base querying is deposited, when database return cache service end information is to cache client, cache client needs according to this link chained list, verify this buffer service client information, meet the requirements just by request forward to buffer service end, otherwise return errored response.
In the above method, the periodic maintenance of the link list by the cache client is specifically as follows:
The cache client periodically obtains the information of all cache servers from the database and adds newly registered cache servers to the link list; and
the cache client periodically probes the state of its link to each cache server with heartbeat messages and writes link state changes into the link list, keeping it up to date. The link list is used for link verification when the cache client processes requests.
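The maintenance loop described above can be sketched as follows; the class and field names are hypothetical, and the heartbeat probe is abstracted to a callable, since the patent does not specify a message format:

```python
import time

class LinkEntry:
    """One entry of the cache client's link list (hypothetical structure)."""
    def __init__(self, ip, port, free_disk):
        self.ip = ip
        self.port = port
        self.free_disk = free_disk
        self.alive = False        # last known heartbeat result
        self.last_checked = 0.0

class LinkList:
    """Periodically refreshed view of all cache servers known to the client."""
    def __init__(self):
        self.entries = {}         # (ip, port) -> LinkEntry

    def refresh_from_db(self, server_infos):
        # server_infos: (ip, port, free_disk) rows returned by the in-memory DB
        for ip, port, free in server_infos:
            key = (ip, port)
            if key in self.entries:
                self.entries[key].free_disk = free                 # update existing entry
            else:
                self.entries[key] = LinkEntry(ip, port, free)      # newly registered server

    def heartbeat(self, probe):
        # probe: callable (ip, port) -> bool, standing in for a real heartbeat message
        for entry in self.entries.values():
            entry.alive = probe(entry.ip, entry.port)
            entry.last_checked = time.time()

    def is_valid(self, ip, port):
        # Link verification used when the client processes a request.
        entry = self.entries.get((ip, port))
        return entry is not None and entry.alive
```

A client would call `refresh_from_db` and `heartbeat` on timers, and `is_valid` before forwarding each request.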
As shown in Figure 4, the data insertion flow of the method embodiment of the present invention comprises:
Step S401, after receiving an insertion request from an application, the cache client determines the category of the data request and then queries the in-memory database for the directory table that maps data categories to cache servers;
Step S402, the in-memory database server determines whether corresponding cache server information exists in the database; if not, it looks up the cache server with the largest free disk space and returns its information to the cache client; if so, it returns the corresponding cache server information to the cache client;
Step S403, the cache client verifies against its link list whether the cache server information returned by the database is valid; if valid, it forwards the insertion request immediately; if invalid, it finds a valid link in the current link list and forwards the insertion request through it. After forwarding the application's insertion request, it sends the corresponding data category and the information of the cache server that stores the data to the database for updating;
Step S404, upon receiving the insertion request, the cache server inserts the data block into memory and writes it to a file stored on disk. Specifically, a data block hash list is maintained in memory: the inserted data block is placed in the in-memory data block list, keyed by the data block request, and at the same time the data block is written to a file on disk.
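A minimal sketch of the cache server's handling of step S404, with hypothetical names; Python's `OrderedDict` stands in for the in-memory data block hash list (with the list head as the most recent entry, per the LRU management described for the system), and a plain dict simulates the on-disk files:

```python
from collections import OrderedDict

class CacheServer:
    """Sketch of step S404: keep the block in an in-memory hash list keyed by
    the request, and write it through to disk. The file system is simulated
    with a dict; all names here are illustrative, not from the patent."""
    def __init__(self):
        self.mem_blocks = OrderedDict()  # key -> data block; head = most recent
        self.disk = {}                   # path -> bytes, stands in for disk files

    def insert(self, key, category_path, data):
        self.mem_blocks[key] = data
        self.mem_blocks.move_to_end(key, last=False)   # new block goes to the list head
        self.disk[category_path + "/" + key] = data    # write-through to "disk"
```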
In the above method, the step in which the cache server writes a data block to a file on disk is specifically as follows:
The cache server encrypts the data request category to obtain a 32-character ciphertext category code, composes a storage path from that ciphertext category code, and writes the data block as a binary file under that storage path.
In the above method, composing the storage path from the 32-character ciphertext category code is specifically as follows:
The 32-character ciphertext consists of digits and lowercase English letters. The first two characters of the ciphertext form the first-level directory name, the third and fourth characters form the second-level subdirectory name, the third-level subdirectory is named with the full 32-character ciphertext, and the fourth level is the binary file into which the data block is written.
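As an illustration of this layout, the sketch below assumes MD5 as the encryption function, since its hex digest is exactly a 32-character string of digits and lowercase letters; the patent does not name the algorithm, so this choice and the file name `block.bin` are assumptions:

```python
import hashlib
import os.path

def storage_path(category: str, root: str = "/data/cache") -> str:
    """Compose the four-level storage path from the 32-character ciphertext
    category code. MD5 is an assumed stand-in for the patent's encryption."""
    code = hashlib.md5(category.encode("utf-8")).hexdigest()  # 32 hex characters
    return os.path.join(
        root,
        code[0:2],     # level 1: first two characters
        code[2:4],     # level 2: third and fourth characters
        code,          # level 3: the full 32-character code
        "block.bin",   # level 4: the binary file holding the data block
    )
```

Splitting the first characters of the digest into two directory levels bounds the number of entries per directory, which is the usual motivation for this kind of hashed layout.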
As shown in Figure 5, the data query flow of the method embodiment of the present invention comprises:
Step S501, after receiving a query request from an application, the cache client determines the category of the data request and queries the in-memory database for the directory table that maps data categories to cache servers. Based on the cache server information returned by the database, it checks the current state of that cache server in its link list; if the server is available, it forwards the query request to the cache server, waits for the cache server to locate and retrieve the data block, and returns the result; otherwise it returns a failure response;
Step S502, the cache server locates and retrieves the data block from its memory according to the query request.
In the above method, the step in which the cache server locates and retrieves the data block from its memory according to the query request is specifically as follows:
The cache server looks up the hash table using the key of the query request. If the entry is found, the corresponding data block list is updated and the data is returned to the cache client. If the entry is not found in memory, the server locates the data block directly on disk according to the data block file naming and storage scheme described above; if the block exists, the data is returned, otherwise a failure response is returned.
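The lookup in step S502 can be sketched as follows, with hypothetical names; the in-memory hash list is modeled as an `OrderedDict` whose head is the most recently used block, and the file system as a plain dict:

```python
from collections import OrderedDict

class BlockLocator:
    """Sketch of step S502: look the block up in the in-memory hash list
    first, then fall back to the on-disk file located by its naming scheme.
    The file system is simulated with a dict; names are illustrative."""
    def __init__(self, disk_files):
        self.mem_blocks = OrderedDict()  # key -> block; head = most recently used
        self.disk_files = disk_files     # path -> bytes

    def query(self, key, path):
        if key in self.mem_blocks:
            self.mem_blocks.move_to_end(key, last=False)  # LRU touch: move to head
            return self.mem_blocks[key]
        # Not in memory: locate the file directly by its storage path.
        return self.disk_files.get(path)  # None models the failure response
```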
The above description illustrates and describes a preferred embodiment of the present invention. However, as stated above, it should be understood that the present invention is not limited to the form disclosed herein, which should not be regarded as excluding other embodiments; the invention may be used in various other combinations, modifications, and environments, and may be altered within the scope of the inventive concept described herein through the above teachings or through the skill and knowledge of the related art. Changes and modifications made by those skilled in the art that do not depart from the spirit and scope of the present invention shall all fall within the protection scope of the appended claims.

Claims (20)

1. A distributed cache server system, characterized in that it comprises one or more cache clients, one or more cache servers, and an in-memory database server, wherein:
the cache client is configured to obtain the information of all cache servers from the in-memory database server, establish connections with the cache servers, and generate and periodically maintain a link list;
the in-memory database server is configured to create and maintain a cache server information table and a data storage category directory table for the cache servers, and, upon receiving the cache server information reported by a cache server, to update the corresponding disk space data if the database already contains an entry for that cache server, and otherwise to insert the cache server information into the database;
the cache server is configured to report said cache server information to the in-memory database server and to perform the management of cached data blocks.
2. The system according to claim 1, characterized in that the cache server information comprises the local IP address, the cache service port, and the current free disk space.
3. The system according to claim 1 or 2, characterized in that the cache client being configured to obtain the information of all cache servers from the in-memory database server, establish connections with the cache servers, and generate the link list is specifically:
the cache client is configured to, after startup, periodically query the in-memory database server for the information of all cache servers; based on the cache server information returned by the in-memory database server, the cache client builds a link list that stores the information of each cache server, establishes connections with the cache servers, and records the connection states in the link list;
wherein the link list serves the following purpose: when a data request arrives, the cache client classifies the data of the request and queries the database for the cache server that stores that category; when the database returns the cache server information to the cache client, the client verifies that information against its link list, and only if the verification passes does it forward the request to the cache server, otherwise it returns an error response.
4. The system according to claim 3, wherein the cache client being configured to periodically maintain the link list is specifically:
the cache client is configured to periodically obtain the information of all cache servers from the database and to add newly registered cache servers to the link list; and
the cache client is configured to periodically probe the state of its link to each cache server with heartbeat messages and to write link state changes into the link list, keeping it up to date, the link list being used for link verification when the cache client processes requests.
5. The system according to claim 1 or 2, characterized in that:
the cache client is further configured to, after receiving an insertion request from an application, determine the category of the data request and then query the in-memory database for the directory table that maps data categories to cache servers; and to verify against the link list whether the cache server information returned by the database is valid, forwarding the insertion request immediately if it is valid, and finding a valid link in the current link list and forwarding the insertion request through it if it is not; and, after forwarding the application's insertion request, to send the corresponding data category and the information of the cache server that stores the data to the database for updating;
the in-memory database server is configured to determine whether corresponding cache server information exists in the database; if not, to look up the cache server with the largest free disk space and return its information to the cache client; if so, to return the corresponding cache server information to the cache client;
the cache server is configured to, upon receiving the insertion request, insert the data block into memory and write it to a file stored on disk; specifically, a data block hash list is maintained in memory, the inserted data block is placed in the in-memory data block list keyed by the data block request, and at the same time the data block is written to a file on disk.
6. The system according to claim 5, characterized in that:
the cache server is further configured to manage the in-memory data block list using the least recently used (LRU) algorithm; specifically, a newly inserted or newly accessed data block is moved to the head of the list, so that the whole list remains ordered by access time.
7. The system according to claim 5, characterized in that the cache server being configured to write the data block to a file stored on disk is specifically:
the cache server is configured to encrypt the data request category to obtain a 32-character ciphertext category code, to compose a storage path from that ciphertext category code, and to write the data block as a binary file under that storage path.
8. The system according to claim 7, characterized in that composing the storage path from the 32-character ciphertext category code is specifically:
the 32-character ciphertext consists of digits and lowercase English letters; the first two characters of the ciphertext form the first-level directory name, the third and fourth characters form the second-level subdirectory name, the third-level subdirectory is named with the full 32-character ciphertext, and the fourth level is the binary file into which the data block is written.
9. The system according to claim 1 or 2, characterized in that:
the cache client is configured to, after receiving a query request from an application, determine the category of the data request and query the in-memory database for the directory table that maps data categories to cache servers; based on the cache server information returned by the database, to check the current state of that cache server in its link list; if the server is available, to forward the query request to the cache server, wait for the cache server to locate and retrieve the data block, and return the result; and otherwise to return a failure response;
the cache server is configured to locate and retrieve the data block from its memory according to the query request.
10. The system according to claim 9, characterized in that the cache server being configured to locate and retrieve the data block from its memory according to the query request is specifically:
the cache server is configured to look up the hash table using the key of the query request; if the entry is found, to update the corresponding data block list and return the data to the cache client; if the entry is not found in memory, to locate the data block directly on disk according to the data block file naming and storage scheme, returning the data if the block exists and a failure response otherwise.
11. A cache client, applied to the system according to any one of claims 1 to 10, characterized in that:
the cache client is configured to obtain the information of all cache servers from the in-memory database server, establish connections with the cache servers, and generate and periodically maintain a link list, wherein the link list serves the following purpose: when a data request arrives, the cache client classifies the data of the request and queries the database for the cache server that stores that category; when the database returns the cache server information to the cache client, the client verifies that information against its link list, and only if the verification passes does it forward the request to the cache server, otherwise it returns an error response.
12. An application method of a distributed cache server system, characterized in that it comprises:
each cache server reporting its cache server information to the in-memory database server;
the in-memory database server creating and maintaining a cache server information table and a data storage category directory table for the cache servers, and, upon receiving the cache server information reported by a cache server, updating the corresponding disk space data if the database already contains an entry for that cache server, and otherwise inserting the cache server information into the database;
each cache client obtaining the information of all cache servers from the in-memory database server, establishing connections with the cache servers, and generating and periodically maintaining a link list.
13. The method according to claim 12, characterized in that the cache server information comprises the local IP address, the cache service port, and the current free disk space.
14. The method according to claim 12 or 13, characterized in that the cache client obtaining the information of all cache servers from the in-memory database server, establishing connections with the cache servers, and generating the link list is specifically:
after startup, the cache client periodically queries the in-memory database server for the information of all cache servers; based on the cache server information returned by the in-memory database server, the cache client builds a link list that stores the information of each cache server, establishes connections with the cache servers, and records the connection states in the link list;
wherein the link list serves the following purpose: when a data request arrives, the cache client classifies the data of the request and queries the database for the cache server that stores that category; when the database returns the cache server information to the cache client, the client verifies that information against its link list, and only if the verification passes does it forward the request to the cache server, otherwise it returns an error response.
15. The method according to claim 14, wherein the periodic maintenance of the link list by the cache client is specifically:
the cache client periodically obtains the information of all cache servers from the database and adds newly registered cache servers to the link list; and
the cache client periodically probes the state of its link to each cache server with heartbeat messages and writes link state changes into the link list, keeping it up to date, the link list being used for link verification when the cache client processes requests.
16. The method according to claim 12 or 13, characterized in that the method further comprises:
after receiving an insertion request from an application, the cache client determining the category of the data request and then querying the in-memory database for the directory table that maps data categories to cache servers;
the in-memory database server determining whether corresponding cache server information exists in the database; if not, looking up the cache server with the largest free disk space and returning its information to the cache client; if so, returning the corresponding cache server information to the cache client;
the cache client verifying against its link list whether the cache server information returned by the database is valid; if valid, forwarding the insertion request immediately; if invalid, finding a valid link in the current link list and forwarding the insertion request through it; and, after forwarding the application's insertion request, sending the corresponding data category and the information of the cache server that stores the data to the database for updating;
upon receiving the insertion request, the cache server inserting the data block into memory and writing it to a file stored on disk; specifically, a data block hash list is maintained in memory, the inserted data block is placed in the in-memory data block list keyed by the data block request, and at the same time the data block is written to a file on disk.
17. The method according to claim 16, characterized in that the cache server writing the data block to a file stored on disk is specifically:
the cache server encrypting the data request category to obtain a 32-character ciphertext category code, composing a storage path from that ciphertext category code, and writing the data block as a binary file under that storage path.
18. The method according to claim 17, characterized in that composing the storage path from the 32-character ciphertext category code is specifically:
the 32-character ciphertext consists of digits and lowercase English letters; the first two characters of the ciphertext form the first-level directory name, the third and fourth characters form the second-level subdirectory name, the third-level subdirectory is named with the full 32-character ciphertext, and the fourth level is the binary file into which the data block is written.
19. The method according to claim 12 or 13, characterized in that the method further comprises:
after receiving a query request from an application, the cache client determining the category of the data request and querying the in-memory database for the directory table that maps data categories to cache servers; based on the cache server information returned by the database, checking the current state of that cache server in its link list; if the server is available, forwarding the query request to the cache server, waiting for the cache server to locate and retrieve the data block, and returning the result; otherwise returning a failure response;
the cache server locating and retrieving the data block from its memory according to the query request.
20. The method according to claim 19, characterized in that the cache server locating and retrieving the data block from its memory according to the query request is specifically:
the cache server looking up the hash table using the key of the query request; if the entry is found, updating the corresponding data block list and returning the data to the cache client; if the entry is not found in memory, locating the data block directly on disk according to the data block file naming and storage scheme, returning the data if the block exists and a failure response otherwise.
CN201110093902.0A 2011-04-14 2011-04-14 Distributed cache server system and application method thereof, cache clients and cache server terminals Active CN102739720B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201110093902.0A CN102739720B (en) 2011-04-14 2011-04-14 Distributed cache server system and application method thereof, cache clients and cache server terminals
PCT/CN2011/075964 WO2012139328A1 (en) 2011-04-14 2011-06-20 Cache server system and application method thereof, cache client, and cache server

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201110093902.0A CN102739720B (en) 2011-04-14 2011-04-14 Distributed cache server system and application method thereof, cache clients and cache server terminals

Publications (2)

Publication Number Publication Date
CN102739720A CN102739720A (en) 2012-10-17
CN102739720B true CN102739720B (en) 2015-01-28

Family

ID=46994499

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110093902.0A Active CN102739720B (en) 2011-04-14 2011-04-14 Distributed cache server system and application method thereof, cache clients and cache server terminals

Country Status (2)

Country Link
CN (1) CN102739720B (en)
WO (1) WO2012139328A1 (en)

Families Citing this family (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103853727B (en) * 2012-11-29 2018-07-31 深圳中兴力维技术有限公司 Improve the method and system of big data quantity query performance
CN103118099B (en) * 2013-01-25 2016-03-02 福建升腾资讯有限公司 Based on the graph image caching method of hashing algorithm
CN104461929B (en) * 2013-09-23 2018-03-23 中国银联股份有限公司 Distributed data cache method based on blocker
CN105426371A (en) * 2014-09-17 2016-03-23 上海三明泰格信息技术有限公司 Database system
US9876873B1 (en) 2015-10-21 2018-01-23 Perfect Sense, Inc. Caching techniques
CN106649408B (en) * 2015-11-04 2020-10-13 中国移动通信集团重庆有限公司 Big data retrieval method and device
CN107038174B (en) * 2016-02-04 2020-11-24 北京京东尚科信息技术有限公司 Data synchronization method and device for data system
CN106331147B (en) * 2016-09-09 2019-09-06 深圳市彬讯科技有限公司 A kind of REDIS distribution call method
CN106506613B (en) * 2016-10-31 2018-04-13 大唐高鸿信安(浙江)信息科技有限公司 The data storage location encryption method of distributed key value storage systems
CN106934001A (en) * 2017-03-03 2017-07-07 广州天源迪科信息技术有限公司 Distributed quick inventory inquiry system and method
CN107087232B (en) * 2017-04-07 2020-03-27 优地网络有限公司 User real-time state detection method and system
CN107133183B (en) * 2017-04-11 2020-06-30 深圳市联云港科技有限公司 Cache data access method and system based on TCMU virtual block device
CN107071059B (en) * 2017-05-25 2018-10-02 腾讯科技(深圳)有限公司 Distributed caching service implementing method, device, terminal, server and system
CN107797859B (en) * 2017-11-16 2021-08-20 山东浪潮云服务信息科技有限公司 Scheduling method of timing task and scheduling server
CN108009250B (en) * 2017-12-01 2021-09-07 武汉斗鱼网络科技有限公司 Multi-classification event data cache establishing and querying method and device
CN110022257B (en) * 2018-01-08 2023-04-07 北京京东尚科信息技术有限公司 Distributed messaging system
CN110298677B (en) * 2018-03-22 2021-08-13 中移(苏州)软件技术有限公司 Cloud computing resource charging method and device, electronic equipment and storage medium
CN109086380B (en) * 2018-07-25 2022-09-16 光大环境科技(中国)有限公司 Method and system for compressing and storing historical data
CN110232044B (en) * 2019-06-17 2023-03-28 浪潮通用软件有限公司 System and method for realizing big data summarizing and scheduling service
CN110535977B (en) * 2019-09-29 2022-04-01 深圳市网心科技有限公司 File distribution method and device, computer device and storage medium
CN110825986B (en) * 2019-11-05 2023-03-21 上海携程商务有限公司 Method, system, storage medium and electronic device for client to request data
CN111858664B (en) * 2020-06-23 2023-01-10 苏州浪潮智能科技有限公司 Data persistence method and system based on BMC
CN112383415A (en) * 2020-10-30 2021-02-19 上海蜜度信息技术有限公司 Server side marking method and equipment
CN114422570A (en) * 2021-12-31 2022-04-29 深圳市联软科技股份有限公司 Cross-platform multi-module communication method and system
CN115633005A (en) * 2022-10-26 2023-01-20 云度新能源汽车有限公司 Real-time high-concurrency connection processing method and processing system thereof
CN116360711B (en) * 2023-06-02 2023-08-11 杭州沃趣科技股份有限公司 Distributed storage processing method, device, equipment and medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101308513A (en) * 2008-06-27 2008-11-19 福建星网锐捷网络有限公司 Distributed system cache data synchronous configuration method and apparatus
CN101493826A (en) * 2008-12-23 2009-07-29 中兴通讯股份有限公司 Database system based on WEB application and data management method thereof
CN101562543A (en) * 2009-05-25 2009-10-21 阿里巴巴集团控股有限公司 Cache data processing method and processing system and device thereof

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102006330B (en) * 2010-12-01 2013-06-12 北京瑞信在线系统技术有限公司 Distributed cache system, data caching method and inquiring method of cache data

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101308513A (en) * 2008-06-27 2008-11-19 福建星网锐捷网络有限公司 Distributed system cache data synchronous configuration method and apparatus
CN101493826A (en) * 2008-12-23 2009-07-29 中兴通讯股份有限公司 Database system based on WEB application and data management method thereof
CN101562543A (en) * 2009-05-25 2009-10-21 阿里巴巴集团控股有限公司 Cache data processing method and processing system and device thereof

Also Published As

Publication number Publication date
CN102739720A (en) 2012-10-17
WO2012139328A1 (en) 2012-10-18

Similar Documents

Publication Publication Date Title
CN102739720B (en) Distributed cache server system and application method thereof, cache clients and cache server terminals
CN101064630B (en) Data synchronization method and system
AU2005312895B2 (en) Bidirectional data transfer optimization and content control for networks
KR101725306B1 (en) Energy-efficient content caching with custodian-based routing in content-centric networks
CN103037312B (en) Information push method and device
KR102100710B1 (en) Method for transmitting packet of node and content owner in content centric network
US9465819B2 (en) Distributed database
CN101764839B (en) Data access method and uniform resource locator (URL) server
CN102137145B (en) Method, device and system for managing distributed contents
US8434156B2 (en) Method, access node, and system for obtaining data
WO2011150830A1 (en) Method and node for obtaining the content and content network
US8539041B2 (en) Method, apparatus, and network system for acquiring content
CN101964799A (en) Solution method of address conflict in point-to-network tunnel mode
CN105635196A (en) Method and system of file data obtaining, and application server
CN101090371A (en) Method and system for user information management in at-once communication system
CN105653473A (en) Cache data access method and device based on binary identification
US20150256601A1 (en) System and method for efficient content caching in a streaming storage
CN102857547A (en) Distributed caching method and device
WO2013091343A1 (en) Content acquisition method, device and network system
JP5918361B2 (en) Routing by resolution
WO2008010212A2 (en) A network cache and method for managing files
CN102833295A (en) Data manipulation method and device in distributed cache system
US9788192B2 (en) Making subscriber data addressable as a device in a mobile data network
CN102497402B (en) Content injection method and system thereof, and content delivery method and system thereof
CN113610529A (en) Block storage and acquisition method, device, node and storage medium of alliance chain

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant