CN102739720A - Distributed cache server system and application method thereof, cache clients and cache server terminals - Google Patents

Distributed cache server system and application method thereof, cache clients and cache server terminals

Info

Publication number
CN102739720A
CN102739720A CN201110093902A CN2011100939020A
Authority
CN
China
Prior art keywords
cache server
cache
memory service
linked list
cache
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2011100939020A
Other languages
Chinese (zh)
Other versions
CN102739720B (en)
Inventor
戴林
吴丽梅
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ZTE Corp
Original Assignee
ZTE Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZTE Corp filed Critical ZTE Corp
Priority to CN201110093902.0A priority Critical patent/CN102739720B/en
Priority to PCT/CN2011/075964 priority patent/WO2012139328A1/en
Publication of CN102739720A publication Critical patent/CN102739720A/en
Application granted granted Critical
Publication of CN102739720B publication Critical patent/CN102739720B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 - Network services
    • H04L 67/56 - Provisioning of proxy services
    • H04L 67/568 - Storing data temporarily at an intermediate stage, e.g. caching
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 - Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/18 - Information format or content conversion, e.g. adaptation by the network of the transmitted or received information for the purpose of wireless delivery to users or terminals

Abstract

The invention provides a distributed cache server system, comprising: cache clients, which are used for obtaining the information of all cache server terminals from a main memory database server, establishing connections with the cache server terminals, and generating and regularly maintaining links and a link table; the main memory database server, which is used for establishing and maintaining a cache server terminal information table and a directory table mapping data storage types to cache server terminals, and for processing the cache server terminal information reported by the cache server terminals; and cache server terminals, which are used for reporting the cache server terminal information to the main memory database server and managing the cached data blocks. In addition, the invention also provides an application method of the distributed cache server system, a cache client, and a cache server terminal. With the above technical scheme, the cache server system becomes simple and convenient to set up and use, access is fast, and the system can be extended and updated automatically.

Description

Distributed cache server system and application method thereof, cache client, and cache server
Technical field
The present invention relates to the field of data storage, and in particular to a distributed cache server system and an application method thereof, a cache client, and a cache server (cache server terminal).
Background art
With the development of Internet technology and web applications, caching technology is used more and more widely and a variety of caching technologies have flourished; among them, distributed caching systems can significantly relieve the access performance bottleneck between the database server and the web server in Internet applications.
Existing distributed cache server systems mostly adopt a master-slave architecture: the master server builds a data directory describing the data stored on the slave servers, accepts users' data requests, and accesses the slave servers according to the data directory; at the same time, the master server also has to manage the data stored on each slave server and take load balancing into account. Under this pattern, on the one hand, the master server is under great pressure, the whole system becomes unusable once the master server goes down, and when the amount of cached content reaches a certain level the data directory on the master server becomes very large and directory lookups become very inefficient; on the other hand, because the master and slave server information must all be configured in advance, upgrading and expanding the whole master-slave caching system at run time is extremely inconvenient.
Summary of the invention
The technical problem solved by the present invention is to provide a distributed cache server system and an application method thereof, a cache client, and a cache server, so as to solve the problems that existing distributed caching systems are extremely inconvenient to upgrade and expand at run time and that the master server becomes inefficient when the data volume is large.
To address the above problems, the invention provides a distributed cache server system comprising one or more cache clients, one or more cache servers, and an in-memory database server, wherein:
the cache client is configured to obtain the information of all cache servers from the in-memory database server, establish connections with the cache servers, and generate and periodically maintain a connection linked list;
the in-memory database server is configured to create and maintain a cache server information table and a directory table mapping data categories to cache servers, and, on receiving the cache server information reported by a cache server, to update the corresponding disk space data if the database already contains the information of that cache server, or otherwise to insert the cache server information into the database;
the cache server is configured to report its cache server information to the in-memory database server and to manage the cached data blocks.
In the above system, the cache server information includes the IP address of the local terminal, the cache service port, and the current free disk space.
In the above system, obtaining the information of all cache servers from the in-memory database server, establishing connections with the cache servers, and generating the connection linked list by the cache client specifically means:
after startup, the cache client periodically queries the in-memory database server for the information of all cache servers; according to the information of all cache servers returned by the database server, a connection linked list is built in the cache client, in which the information of each cache server is stored; the cache client establishes connections with the cache servers and stores the connection state in the connection linked list;
the purpose of the cache client's connection linked list is as follows: when a data request arrives, the cache client classifies the data of the request and queries the database for the cache server that stores this category; when the database returns the cache server information to the cache client, the cache client must verify this cache server information against the connection linked list, and only forwards the request to the cache server if it passes verification, otherwise it returns an error response.
In the above system, periodically maintaining the connection linked list by the cache client specifically means:
the cache client periodically obtains the information of all cache servers from the database and adds any newly added cache server information to the connection linked list; and
the cache client uses heartbeat messages to periodically detect the state of the link to each cache server, writes the updated link connection status into the connection linked list, and thus periodically refreshes the connection linked list, which is used by the cache client to verify links when handling requests.
Preferably, after receiving an insert request from an application, the cache client determines the category of the data request and then queries the in-memory database for the directory table mapping the data category to a cache server; it also verifies, against the connection linked list, whether the cache server information returned by the database is valid; if valid, it forwards the insert request immediately, and if invalid, it finds a valid link in the current connection linked list and forwards the insert request over it; after forwarding the application's insert request, it sends the corresponding data category and the information of the cache server that stores it to the database for updating;
the in-memory database server determines whether corresponding cache server information exists in the database; if there is no corresponding cache server information in the database, it looks up the cache server information with the largest free disk space and returns it to the cache client; if corresponding cache server information exists in the database, it returns that cache server information to the cache client;
after receiving the insert request, the cache server inserts the data block into memory and writes the data block to a file stored on disk; specifically, a hash table of data block linked lists is built in memory, the inserted data block is stored in the in-memory data block linked list keyed by the request link of the data block, and at the same time the data block is written to a file stored on disk.
In the above system, the cache server also manages the in-memory data block linked list; specifically, it uses the least recently used (LRU) algorithm, placing newly inserted or newly accessed data blocks at the head of the linked list, so that the whole list is ordered by access time.
Preferably, writing the data block to a file stored on disk by the cache server specifically means:
the cache server encrypts the category of the data request to obtain a 32-character ciphertext category code, composes a storage path from the ciphertext category code, and then writes the data block as a binary file under that storage path.
In the above system, composing the storage path from the 32-character ciphertext category code specifically means:
the 32-character ciphertext is composed of numerals and lowercase English letters; the first two characters of the 32-character code form the first-level directory name, the third and fourth characters form the second-level subdirectory name, the third-level subdirectory is named with the full 32-character code, and the fourth level is the binary file written for the data block.
Preferably, after receiving a query request from an application, the cache client determines the category of the data request, queries the in-memory database for the directory table mapping the data category to a cache server, verifies the current state of the cache server returned by the database against the connection linked list, and, if it is available, forwards the query request to the cache server, waits for the cache server to locate and look up the data block, and returns the result; otherwise it returns a failure response;
the cache server locates and fetches the data block from the cache server's memory according to the query request.
In the above system, locating and fetching the data block from the cache server's memory according to the query request specifically means:
the cache server looks up the hash table according to the key value of the query request; if the block is found, it updates the corresponding data block linked list and returns the data to the cache client; if the block is not found in memory, it locates the data block directly on disk according to the data block file naming and storage scheme described above, returning the data if it exists and a failure response otherwise.
The present invention also provides a cache client, which is configured to obtain the information of all cache servers from an in-memory database server, establish connections with the cache servers, and generate and periodically maintain a connection linked list.
The present invention also provides a cache server, which is configured to report its cache server information to an in-memory database server and to manage the cached data blocks.
The present invention also provides an application method of a distributed cache server system, comprising:
a cache server reports its cache server information to an in-memory database server;
the in-memory database server creates and maintains a cache server information table and a directory table mapping data categories to cache servers, and, on receiving the cache server information reported by the cache server, updates the corresponding disk space data if the database already contains the information of that cache server, or otherwise inserts the cache server information into the database;
a cache client obtains the information of all cache servers from the in-memory database server, establishes connections with the cache servers, and generates and periodically maintains a connection linked list.
In the above method, the cache server information includes the IP address of the local terminal, the cache service port, and the current free disk space.
In the above method, obtaining the information of all cache servers from the in-memory database server, establishing connections with the cache servers, and generating the connection linked list by the cache client specifically means:
after startup, the cache client periodically queries the in-memory database server for the information of all cache servers; according to the information of all cache servers returned by the database server, a connection linked list is built in the cache client, in which the information of each cache server is stored; the cache client establishes connections with the cache servers and stores the connection state in the connection linked list;
the purpose of the cache client's connection linked list is as follows: when a data request arrives, the cache client classifies the data of the request and queries the database for the cache server that stores this category; when the database returns the cache server information to the cache client, the cache client must verify this cache server information against the connection linked list, and only forwards the request to the cache server if it passes verification, otherwise it returns an error response.
In the above method, periodically maintaining the connection linked list by the cache client specifically means:
the cache client periodically obtains the information of all cache servers from the database and adds any newly added cache server information to the connection linked list; and
the cache client uses heartbeat messages to periodically detect the state of the link to each cache server, writes the updated link connection status into the connection linked list, and thus periodically refreshes the connection linked list, which is used by the cache client to verify links when handling requests.
In the above method, the method further comprises:
after receiving an insert request from an application, the cache client determines the category of the data request and then queries the in-memory database for the directory table mapping the data category to a cache server;
the in-memory database server determines whether corresponding cache server information exists in the database; if there is no corresponding cache server information in the database, it looks up the cache server information with the largest free disk space and returns it to the cache client; if corresponding cache server information exists in the database, it returns that cache server information to the cache client;
the cache client verifies, against the connection linked list, whether the cache server information returned by the database is valid; if valid, it forwards the insert request immediately, and if invalid, it finds a valid link in the current connection linked list and forwards the insert request over it; after forwarding the application's insert request, it sends the corresponding data category and the information of the cache server that stores it to the database for updating;
after receiving the insert request, the cache server inserts the data block into memory and writes the data block to a file stored on disk; specifically, a hash table of data block linked lists is built in memory, the inserted data block is stored in the in-memory data block linked list keyed by the request link of the data block, and at the same time the data block is written to a file stored on disk.
In the above method, writing the data block to a file stored on disk by the cache server specifically means:
the cache server encrypts the category of the data request to obtain a 32-character ciphertext category code, composes a storage path from the ciphertext category code, and then writes the data block as a binary file under that storage path.
In the above method, composing the storage path from the 32-character ciphertext category code specifically means:
the 32-character ciphertext is composed of numerals and lowercase English letters; the first two characters of the 32-character code form the first-level directory name, the third and fourth characters form the second-level subdirectory name, the third-level subdirectory is named with the full 32-character code, and the fourth level is the binary file written for the data block.
In the above method, the method further comprises:
after receiving a query request from an application, the cache client determines the category of the data request, queries the in-memory database for the directory table mapping the data category to a cache server, verifies the current state of the cache server returned by the database against the connection linked list, and, if it is available, forwards the query request to the cache server, waits for the cache server to locate and look up the data block, and returns the result; otherwise it returns a failure response;
the cache server locates and fetches the data block from the cache server's memory according to the query request.
In the above method, locating and fetching the data block from the cache server's memory according to the query request specifically means:
the cache server looks up the hash table according to the key value of the query request; if the block is found, it updates the corresponding data block linked list and returns the data to the cache client; if the block is not found in memory, it locates the data block directly on disk according to the data block file naming and storage scheme described above, returning the data if it exists and a failure response otherwise.
By adopting the technical solution of the present invention, the cache server system becomes simpler and more convenient to set up and use, access is faster, and the system can expand and upgrade automatically.
With the preferred solution of the present invention, the client needs only one database operation to process an application's insert or query request, plus one database update after an insert completes, which effectively reduces database access pressure and improves the system's access speed.
With the preferred solution of the present invention, by constructing storage paths and file names in the above manner, the directory structure is clear, the cache server can directly hit each data block and each data block file, and data access is quick and convenient.
Description of the drawings
The accompanying drawings described here are provided for further understanding of the present invention and constitute a part of the present invention; the illustrative embodiments of the present invention and their descriptions are used to explain the present invention and do not constitute an improper limitation of the present invention. In the drawings:
Fig. 1 is a system structure diagram of an embodiment of the present invention;
Fig. 2 illustrates the naming scheme used by the cache server when writing a data block to a file stored on disk;
Fig. 3 is a flow chart of a method embodiment of the present invention;
Fig. 4 is a data insertion flow chart of the method embodiment of the present invention;
Fig. 5 is a data query flow chart of the method embodiment of the present invention.
Embodiment
In order to make the technical problems to be solved by the present invention, the technical solutions, and the beneficial effects clearer, the present invention is further elaborated below in conjunction with the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present invention and are not intended to limit it.
As shown in Fig. 1, which is a system structure diagram of an embodiment of the present invention, a distributed cache server system is provided, comprising one or more cache clients, one or more cache servers, and an in-memory database server, wherein:
the cache client is configured to obtain cache server information from the in-memory database server and to generate and periodically maintain a connection linked list;
specifically, the cache client belongs to the application side that needs to use the cache server system and connects to the internal network using the TCP/IP protocol; when the cache client starts, it obtains the information of all cache servers from the in-memory database server and actively establishes connections with the cache servers, and afterwards it periodically obtains cache server information from the database and keeps the links alive;
the in-memory database server is configured to create and maintain a cache server information table (including information such as the cache server IP address, cache service port, and free disk space) and a directory table mapping data categories to cache servers; after receiving the cache server information reported by a cache server, it updates the corresponding disk space data if the information already exists in the database, or otherwise inserts the data into the database;
the cache server is configured to report its cache server information to the in-memory database server; specifically, after startup it periodically sends information such as the local IP address, cache service port, and current local free disk space to the in-memory database server.
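This periodic report and the database server's update-or-insert handling can be sketched as follows. This is a minimal illustration rather than the patent's implementation: the message format, the dictionary-based server table, the storage path, the port value, and field names such as ip, port, and free_disk are assumptions standing in for details the text leaves unspecified.

```python
import json
import shutil
import socket
import time


def get_free_disk_bytes(path="/data/cache"):
    """Current free disk space for the cache storage directory (path is an example value)."""
    return shutil.disk_usage(path).free


def report_loop(send, interval_s=30):
    """Cache server side: periodically report local info to the in-memory database server.
    `send` is any callable that delivers a message to the database server."""
    while True:
        info = {
            "ip": socket.gethostbyname(socket.gethostname()),  # local IP address
            "port": 11311,                                     # cache service port (example value)
            "free_disk": get_free_disk_bytes(),                # current free disk space
        }
        send(json.dumps(info).encode())
        time.sleep(interval_s)


def handle_report(server_table, info):
    """Database server side: update disk space if the server is already known, else insert it."""
    key = (info["ip"], info["port"])
    if key in server_table:
        server_table[key]["free_disk"] = info["free_disk"]  # update corresponding disk space data
    else:
        server_table[key] = info                            # insert new cache server information
```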
In the above system, the automatic connection and link maintenance performed after the cache client starts specifically comprise the following.
After startup, the cache client periodically sends a message to the in-memory database to query the information of all cache servers and generates the cache client's connection linked list. The way the cache client obtains cache server information from the database and generates the connection linked list is: according to the information of all cache servers returned by the database, a linked list is built on the client side, in which the information of each cache server is stored; the cache client actively establishes connections with the cache servers and stores the connection state in the linked list. The purpose of the cache client's connection linked list is that, when a data request arrives, the client classifies the request and queries the database for the cache server that stores this category; when the database returns the cache server information to the client, the client must verify this cache server information against the connection linked list, and only forwards the request to the peer if it passes verification, otherwise it returns an error response.
The cache client actively initiates connection requests to each cache server according to the information returned by the in-memory database server, and periodically refreshes the connection linked list with the information of all cache servers returned by the database and the link establishment status; in particular, the cache client periodically obtains the information of all cache servers from the database and adds newly added cache server information to the linked list.
The cache client periodically backs up the current connection linked list for subsequent use. The cache client uses heartbeat messages to periodically detect the state of each link to a cache server and periodically refreshes the connection linked list; specifically, it sends heartbeat messages at regular intervals, detects the connection status of the link to each cache server, and writes the updated link connection status into the connection linked list, which is used to verify links when the client processes requests.
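A minimal sketch of this maintenance loop is given below, assuming a plain in-memory list of link entries and a caller-supplied heartbeat() helper; the actual message formats and timing values are not specified in the patent.

```python
import time
from dataclasses import dataclass


@dataclass
class Link:
    ip: str
    port: int
    free_disk: int
    connected: bool = False   # connection state stored in the connection linked list


def maintain_links(query_all_servers, heartbeat, links, interval_s=30):
    """Cache client side: refresh the connection linked list periodically.
    `query_all_servers` returns the cache server info list from the in-memory database;
    `heartbeat` sends a heartbeat message over one link and returns True if it is alive."""
    while True:
        known = {(l.ip, l.port) for l in links}
        for info in query_all_servers():
            if (info["ip"], info["port"]) not in known:
                # newly added cache server: append it to the connection linked list
                links.append(Link(info["ip"], info["port"], info["free_disk"]))
                known.add((info["ip"], info["port"]))
        for link in links:
            link.connected = heartbeat(link)   # write the link connection status back
        time.sleep(interval_s)
```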
The cache client receives an insert request from an application and processes it, which specifically comprises the following.
The cache client determines the category of the data request and then queries the in-memory database for the directory table mapping the data category to a cache server; there are two cases. Case one: if no entry is found in the database, the cache server with the largest free disk space is looked up in the database's cache server information table and returned to the cache client. Case two: if an entry is found in the database, the found cache server information is returned to the client. The client then verifies whether the cache server returned by the database is valid; if valid, it forwards the request immediately, and if invalid, it finds a valid link in the current connection linked list and forwards the request over it; after the application's insert request has been forwarded, the corresponding data category and the information of the cache server that stores it are sent to the database for updating.
After accepting the request, the cache server inserts the data into memory and writes the data block to a file on disk in a specific manner. The way the cache server handles a data insert request is: a hash table of data block linked lists is built in memory, the inserted data is first stored in the in-memory data block linked list, keyed by the request link of the data, and at the same time the data is written to a file stored on disk. The way the cache server manages the in-memory data block linked list is: using the least recently used (LRU) algorithm, newly inserted or newly accessed data blocks are placed at the head of the linked list, so that the whole list is ordered by access time.
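The in-memory structure described here, a hash table keyed by the request link whose blocks are kept in LRU order, can be sketched as follows; Python's OrderedDict is used as a stand-in for the hash table plus linked list, and write_block_to_disk is a caller-supplied placeholder for the on-disk copy described next.

```python
from collections import OrderedDict


class BlockCache:
    """Cache server side: in-memory data blocks kept in LRU order, keyed by the request link."""

    def __init__(self, write_block_to_disk):
        self.blocks = OrderedDict()                 # hash table + access-ordered block list
        self.write_block_to_disk = write_block_to_disk

    def insert(self, request_link, block):
        self.blocks[request_link] = block
        self.blocks.move_to_end(request_link, last=False)  # newly inserted block goes to the head
        self.write_block_to_disk(request_link, block)      # also persist the block as a file on disk

    def lookup(self, request_link):
        block = self.blocks.get(request_link)
        if block is not None:
            self.blocks.move_to_end(request_link, last=False)  # newly accessed block goes to the head
        return block
```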
Fig. 2 illustrates the naming scheme used by the cache server when writing a data block to a file stored on disk; specifically:
the way the cache server writes a data block to disk is as follows: the category of the data request is encrypted to obtain a 32-character ciphertext category code, a complete storage path is composed from this ciphertext category code, and the data block is then written as a binary file under that path. In particular,
the cache server composes the absolute path from the 32-character ciphertext category code obtained by encrypting the data category, as follows: the 32-character ciphertext is composed of numerals and lowercase English letters; the first two characters of the 32-character code form the first-level directory name, the third and fourth characters form the second-level subdirectory name, the third-level subdirectory is named with the full 32-character code, and the fourth level is the binary file written for the data block.
The data block file is named as follows: the request link of the data is encrypted to obtain a 32-character ciphertext link code, and the file is named with this 32-character code.
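The patent does not name the encryption that yields a 32-character code of numerals and lowercase letters; the sketch below assumes an MD5 hex digest, which has exactly that shape, purely to illustrate the four-level path and the link-code file name.

```python
import hashlib
import os


def category_path(root, category):
    """Build the four-level storage path from the 32-character ciphertext category code."""
    code = hashlib.md5(category.encode()).hexdigest()      # 32 chars: digits and lowercase letters
    return os.path.join(root, code[:2], code[2:4], code)   # level 1, level 2, level 3 directories


def store_block(root, category, request_link, block):
    """Write the data block as a binary file named after the 32-character ciphertext link code."""
    directory = category_path(root, category)
    os.makedirs(directory, exist_ok=True)
    filename = hashlib.md5(request_link.encode()).hexdigest()
    path = os.path.join(directory, filename)               # level 4: the binary data block file
    with open(path, "wb") as f:
        f.write(block)
    return path
```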
After the cache client has established connections with each cache server, it accepts and processes query requests from applications, specifically as follows:
the cache client determines the category of the data request and then queries the in-memory database for the directory table mapping the data category to a cache server; on finding an entry, it verifies the current state of the cache server returned by the database against the connection linked list; if the link is available, it forwards the request to this cache server, waits for the cache server to locate the data, and returns the result; if any of the above conditions is not satisfied, it immediately returns a failure response.
The cache server locates and fetches the data block from the cache server's memory: according to the key value of the query request, it looks up the hash table directly; if the block is found, it updates the corresponding data block linked list and returns the data to the client; if the block is not found in memory, it locates the data block directly on disk according to the file naming and storage scheme described above, returning the data if it exists and a failure response otherwise.
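Combining the in-memory lookup with the disk fallback, a query handler on the cache server could look like the sketch below; it reuses the BlockCache and category_path helpers assumed in the earlier sketches, and the hit/miss return convention is illustrative rather than taken from the patent.

```python
import hashlib
import os


def handle_query(cache, root, category, request_link):
    """Cache server side: serve a query from memory first, then fall back to the on-disk file."""
    block = cache.lookup(request_link)          # hash table lookup; refreshes the LRU order on a hit
    if block is not None:
        return block
    # Not in memory: locate the file directly from the category path and the link-code file name.
    path = os.path.join(category_path(root, category),
                        hashlib.md5(request_link.encode()).hexdigest())
    if os.path.exists(path):
        with open(path, "rb") as f:
            return f.read()                     # data found on disk
    return None                                 # neither in memory nor on disk: failure response
```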
As shown in Fig. 3, which is a flow chart of a method embodiment of the present invention, an application method of a distributed cache server system is provided, comprising:
Step S301: the cache server reports its cache server information to the in-memory database server;
Step S302: the in-memory database server creates and maintains a cache server information table and a directory table mapping data categories to cache servers, and, on receiving the cache server information reported by the cache server, updates the corresponding disk space data if the database already contains the information of that cache server, or otherwise inserts the cache server information into the database;
Step S303: the cache client obtains the information of all cache servers from the in-memory database server, establishes connections with the cache servers, and generates and periodically maintains a connection linked list.
In the above method, the cache server information includes the IP address of the local terminal, the cache service port, and the current free disk space.
In the above method, obtaining the information of all cache servers from the in-memory database server, establishing connections with the cache servers, and generating the connection linked list by the cache client specifically means:
after startup, the cache client periodically queries the in-memory database server for the information of all cache servers; according to the information of all cache servers returned by the database server, a connection linked list is built in the cache client, in which the information of each cache server is stored; the cache client establishes connections with the cache servers and stores the connection state in the connection linked list;
the purpose of the cache client's connection linked list is as follows: when a data request arrives, the cache client classifies the data of the request and queries the database for the cache server that stores this category; when the database returns the cache server information to the cache client, the cache client must verify this cache server information against the connection linked list, and only forwards the request to the cache server if it passes verification, otherwise it returns an error response.
In the above method, periodically maintaining the connection linked list by the cache client specifically means:
the cache client periodically obtains the information of all cache servers from the database and adds any newly added cache server information to the connection linked list; and
the cache client uses heartbeat messages to periodically detect the state of the link to each cache server, writes the updated link connection status into the connection linked list, and thus periodically refreshes the connection linked list, which is used by the cache client to verify links when handling requests.
As shown in Fig. 4, which is a data insertion flow chart of the method embodiment, the flow comprises the following steps (a sketch of the client-side part follows the steps):
Step S401: after receiving an insert request from an application, the cache client determines the category of the data request and then queries the in-memory database for the directory table mapping the data category to a cache server;
Step S402: the in-memory database server determines whether corresponding cache server information exists in the database; if there is no corresponding cache server information in the database, it looks up the cache server information with the largest free disk space and returns it to the cache client; if corresponding cache server information exists in the database, it returns that cache server information to the cache client;
Step S403: the cache client verifies, against the connection linked list, whether the cache server information returned by the database is valid; if valid, it forwards the insert request immediately, and if invalid, it finds a valid link in the current connection linked list and forwards the insert request over it; after forwarding the application's insert request, it sends the corresponding data category and the information of the cache server that stores it to the database for updating;
Step S404: after receiving the insert request, the cache server inserts the data block into memory and writes the data block to a file stored on disk; specifically, a hash table of data block linked lists is built in memory, the inserted data block is stored in the in-memory data block linked list keyed by the request link of the data block, and at the same time the data block is written to a file stored on disk.
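The client-side part of this flow (steps S401 to S403) can be sketched as follows. The classify and forward_insert callables and the db directory calls are hypothetical placeholders for interfaces the patent leaves unspecified, and the links mapping reuses the Link entries from the earlier maintenance sketch.

```python
def client_insert(request, links, db, classify, forward_insert):
    """Cache client side: steps S401-S403 of the data insertion flow (illustrative only).
    classify, forward_insert, and the db directory calls are caller-supplied placeholders."""
    category = classify(request)                       # S401: determine the data request category
    server = db.query_directory(category)              # S401/S402: directory table lookup; the DB falls
                                                       # back to the server with the largest free disk
                                                       # space when no entry exists for this category
    link = links.get((server["ip"], server["port"]))
    if link is None or not link.connected:             # S403: verify against the connection linked list
        valid = [l for l in links.values() if l.connected]
        if not valid:
            return None                                # no valid link at all: failure response
        link = valid[0]                                # otherwise forward over any valid link
    forward_insert(link, request)                      # forward the insert request to the chosen server
    db.update_directory(category, link.ip, link.port)  # S403: record which server stores this category
    return link
```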
In the above method, writing the data block to a file stored on disk by the cache server specifically means:
the cache server encrypts the category of the data request to obtain a 32-character ciphertext category code, composes a storage path from the ciphertext category code, and then writes the data block as a binary file under that storage path.
In the above method, composing the storage path from the 32-character ciphertext category code specifically means:
the 32-character ciphertext is composed of numerals and lowercase English letters; the first two characters of the 32-character code form the first-level directory name, the third and fourth characters form the second-level subdirectory name, the third-level subdirectory is named with the full 32-character code, and the fourth level is the binary file written for the data block.
As shown in Fig. 5, which is a data query flow chart of the method embodiment, the flow comprises:
Step S501: after receiving a query request from an application, the cache client determines the category of the data request, queries the in-memory database for the directory table mapping the data category to a cache server, verifies the current state of the cache server returned by the database against the connection linked list, and, if it is available, forwards the query request to the cache server, waits for the cache server to locate and look up the data block, and returns the result; otherwise it returns a failure response;
Step S502: the cache server locates and fetches the data block from the cache server's memory according to the query request.
In the above method, locating and fetching the data block from the cache server's memory according to the query request specifically means:
the cache server looks up the hash table according to the key value of the query request; if the block is found, it updates the corresponding data block linked list and returns the data to the cache client; if the block is not found in memory, it locates the data block directly on disk according to the data block file naming and storage scheme described above, returning the data if it exists and a failure response otherwise.
The above description illustrates and describes a preferred embodiment of the present invention. However, as stated above, it should be understood that the present invention is not limited to the form disclosed herein, which should not be regarded as excluding other embodiments; it can be used in various other combinations, modifications, and environments, and can be changed within the scope of the inventive concept described herein through the above teachings or the skill or knowledge of the related art. Changes and variations made by those skilled in the art that do not depart from the spirit and scope of the present invention shall all fall within the protection scope of the appended claims of the present invention.

Claims (21)

1. A distributed cache server system, characterized in that it comprises one or more cache clients, one or more cache servers, and an in-memory database server, wherein:
the cache client is configured to obtain the information of all cache servers from the in-memory database server, establish connections with the cache servers, and generate and periodically maintain a connection linked list;
the in-memory database server is configured to create and maintain a cache server information table and a directory table mapping data categories to cache servers, and, on receiving the cache server information reported by a cache server, to update the corresponding disk space data if the database already contains the information of that cache server, or otherwise to insert the cache server information into the database;
the cache server is configured to report its cache server information to the in-memory database server and to manage the cached data blocks.
2. The system according to claim 1, characterized in that the cache server information comprises the IP address of the local terminal, the cache service port, and the current free disk space.
3. The system according to claim 1 or 2, characterized in that obtaining the information of all cache servers from the in-memory database server, establishing connections with the cache servers, and generating the connection linked list by the cache client specifically means:
after startup, the cache client periodically queries the in-memory database server for the information of all cache servers; according to the information of all cache servers returned by the database server, a connection linked list is built in the cache client, in which the information of each cache server is stored; the cache client establishes connections with the cache servers and stores the connection state in the connection linked list;
the purpose of the cache client's connection linked list is as follows: when a data request arrives, the cache client classifies the data of the request and queries the database for the cache server that stores this category; when the database returns the cache server information to the cache client, the cache client must verify this cache server information against the connection linked list, and only forwards the request to the cache server if it passes verification, otherwise it returns an error response.
4. The system according to claim 3, wherein periodically maintaining the connection linked list by the cache client specifically means:
the cache client periodically obtains the information of all cache servers from the database and adds any newly added cache server information to the connection linked list; and
the cache client uses heartbeat messages to periodically detect the state of the link to each cache server, writes the updated link connection status into the connection linked list, and thus periodically refreshes the connection linked list, which is used by the cache client to verify links when handling requests.
5. The system according to claim 1 or 2, characterized in that:
after receiving an insert request from an application, the cache client determines the category of the data request and then queries the in-memory database for the directory table mapping the data category to a cache server; it also verifies, against the connection linked list, whether the cache server information returned by the database is valid; if valid, it forwards the insert request immediately, and if invalid, it finds a valid link in the current connection linked list and forwards the insert request over it; after forwarding the application's insert request, it sends the corresponding data category and the information of the cache server that stores it to the database for updating;
the in-memory database server determines whether corresponding cache server information exists in the database; if there is no corresponding cache server information in the database, it looks up the cache server information with the largest free disk space and returns it to the cache client; if corresponding cache server information exists in the database, it returns that cache server information to the cache client;
after receiving the insert request, the cache server inserts the data block into memory and writes the data block to a file stored on disk; specifically, a hash table of data block linked lists is built in memory, the inserted data block is stored in the in-memory data block linked list keyed by the request link of the data block, and at the same time the data block is written to a file stored on disk.
6. The system according to claim 5, characterized in that:
the cache server also manages the in-memory data block linked list; specifically, it uses the least recently used (LRU) algorithm, placing newly inserted or newly accessed data blocks at the head of the linked list, so that the whole list is ordered by access time.
7. The system according to claim 5, characterized in that writing the data block to a file stored on disk by the cache server specifically means:
the cache server encrypts the category of the data request to obtain a 32-character ciphertext category code, composes a storage path from the ciphertext category code, and then writes the data block as a binary file under that storage path.
8. The system according to claim 7, characterized in that composing the storage path from the 32-character ciphertext category code specifically means:
the 32-character ciphertext is composed of numerals and lowercase English letters; the first two characters of the 32-character code form the first-level directory name, the third and fourth characters form the second-level subdirectory name, the third-level subdirectory is named with the full 32-character code, and the fourth level is the binary file written for the data block.
9. The system according to claim 1 or 2, characterized in that:
after receiving a query request from an application, the cache client determines the category of the data request, queries the in-memory database for the directory table mapping the data category to a cache server, verifies the current state of the cache server returned by the database against the connection linked list, and, if it is available, forwards the query request to the cache server, waits for the cache server to locate and look up the data block, and returns the result; otherwise it returns a failure response;
the cache server locates and fetches the data block from the cache server's memory according to the query request.
10. The system according to claim 9, characterized in that locating and fetching the data block from the cache server's memory according to the query request specifically means:
the cache server looks up the hash table according to the key value of the query request; if the block is found, it updates the corresponding data block linked list and returns the data to the cache client; if the block is not found in memory, it locates the data block directly on disk according to the data block file naming and storage scheme described above, returning the data if it exists and a failure response otherwise.
11. A cache client, characterized in that:
the cache client is configured to obtain the information of all cache servers from an in-memory database server, establish connections with the cache servers, and generate and periodically maintain a connection linked list.
12. A cache server, characterized in that:
the cache server is configured to report its cache server information to an in-memory database server and to manage the cached data blocks.
13. An application method of a distributed cache server system, characterized in that it comprises:
a cache server reports its cache server information to an in-memory database server;
the in-memory database server creates and maintains a cache server information table and a directory table mapping data categories to cache servers, and, on receiving the cache server information reported by the cache server, updates the corresponding disk space data if the database already contains the information of that cache server, or otherwise inserts the cache server information into the database;
a cache client obtains the information of all cache servers from the in-memory database server, establishes connections with the cache servers, and generates and periodically maintains a connection linked list.
14. The method according to claim 13, characterized in that the cache server information comprises the IP address of the local terminal, the cache service port, and the current free disk space.
15. The method according to claim 13 or 14, characterized in that obtaining the information of all cache servers from the in-memory database server, establishing connections with the cache servers, and generating the connection linked list by the cache client specifically means:
after startup, the cache client periodically queries the in-memory database server for the information of all cache servers; according to the information of all cache servers returned by the database server, a connection linked list is built in the cache client, in which the information of each cache server is stored; the cache client establishes connections with the cache servers and stores the connection state in the connection linked list;
the purpose of the cache client's connection linked list is as follows: when a data request arrives, the cache client classifies the data of the request and queries the database for the cache server that stores this category; when the database returns the cache server information to the cache client, the cache client must verify this cache server information against the connection linked list, and only forwards the request to the cache server if it passes verification, otherwise it returns an error response.
16. The method according to claim 15, wherein periodically maintaining the connection linked list by the cache client specifically means:
the cache client periodically obtains the information of all cache servers from the database and adds any newly added cache server information to the connection linked list; and
the cache client uses heartbeat messages to periodically detect the state of the link to each cache server, writes the updated link connection status into the connection linked list, and thus periodically refreshes the connection linked list, which is used by the cache client to verify links when handling requests.
17. The method according to claim 13 or 14, characterized in that the method further comprises:
after receiving an insert request from an application, the cache client determines the category of the data request and then queries the in-memory database for the directory table mapping the data category to a cache server;
the in-memory database server determines whether corresponding cache server information exists in the database; if there is no corresponding cache server information in the database, it looks up the cache server information with the largest free disk space and returns it to the cache client; if corresponding cache server information exists in the database, it returns that cache server information to the cache client;
the cache client verifies, against the connection linked list, whether the cache server information returned by the database is valid; if valid, it forwards the insert request immediately, and if invalid, it finds a valid link in the current connection linked list and forwards the insert request over it; after forwarding the application's insert request, it sends the corresponding data category and the information of the cache server that stores it to the database for updating;
after receiving the insert request, the cache server inserts the data block into memory and writes the data block to a file stored on disk; specifically, a hash table of data block linked lists is built in memory, the inserted data block is stored in the in-memory data block linked list keyed by the request link of the data block, and at the same time the data block is written to a file stored on disk.
18. The method according to claim 17, characterized in that writing the data block to a file stored on disk by the cache server specifically means:
the cache server encrypts the category of the data request to obtain a 32-character ciphertext category code, composes a storage path from the ciphertext category code, and then writes the data block as a binary file under that storage path.
19. The method according to claim 18, characterized in that composing the storage path from the 32-character ciphertext category code specifically means:
the 32-character ciphertext is composed of numerals and lowercase English letters; the first two characters of the 32-character code form the first-level directory name, the third and fourth characters form the second-level subdirectory name, the third-level subdirectory is named with the full 32-character code, and the fourth level is the binary file written for the data block.
20. The method according to claim 13 or 14, characterized in that the method further comprises:
after receiving a query request from an application, the cache client determines the category of the data request, queries the in-memory database for the directory table mapping the data category to a cache server, verifies the current state of the cache server returned by the database against the connection linked list, and, if it is available, forwards the query request to the cache server, waits for the cache server to locate and look up the data block, and returns the result; otherwise it returns a failure response;
the cache server locates and fetches the data block from the cache server's memory according to the query request.
21. The method according to claim 20, characterized in that locating and fetching the data block from the cache server's memory according to the query request specifically means:
the cache server looks up the hash table according to the key value of the query request; if the block is found, it updates the corresponding data block linked list and returns the data to the cache client; if the block is not found in memory, it locates the data block directly on disk according to the data block file naming and storage scheme described above, returning the data if it exists and a failure response otherwise.
CN201110093902.0A 2011-04-14 2011-04-14 Distributed cache server system and application method thereof, cache clients and cache server terminals Active CN102739720B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201110093902.0A CN102739720B (en) 2011-04-14 2011-04-14 Distributed cache server system and application method thereof, cache clients and cache server terminals
PCT/CN2011/075964 WO2012139328A1 (en) 2011-04-14 2011-06-20 Cache server system and application method thereof, cache client, and cache server

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201110093902.0A CN102739720B (en) 2011-04-14 2011-04-14 Distributed cache server system and application method thereof, cache clients and cache server terminals

Publications (2)

Publication Number Publication Date
CN102739720A true CN102739720A (en) 2012-10-17
CN102739720B CN102739720B (en) 2015-01-28

Family

ID=46994499

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110093902.0A Active CN102739720B (en) 2011-04-14 2011-04-14 Distributed cache server system and application method thereof, cache clients and cache server terminals

Country Status (2)

Country Link
CN (1) CN102739720B (en)
WO (1) WO2012139328A1 (en)

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103118099A (en) * 2013-01-25 2013-05-22 福建升腾资讯有限公司 Hash algorithm based graphic image caching method
CN103853727A (en) * 2012-11-29 2014-06-11 深圳中兴力维技术有限公司 Method and system for improving large data volume query performance
CN104461929A (en) * 2013-09-23 2015-03-25 中国银联股份有限公司 Distributed type data caching method based on interceptor
CN105426371A (en) * 2014-09-17 2016-03-23 上海三明泰格信息技术有限公司 Database system
CN106331147A (en) * 2016-09-09 2017-01-11 深圳市彬讯科技有限公司 REDIS distributed type invoking method and system thereof
CN106506613A (en) * 2016-10-31 2017-03-15 大唐高鸿信安(浙江)信息科技有限公司 The data storage location encryption method of distributed key value storage systems
CN106649408A (en) * 2015-11-04 2017-05-10 中国移动通信集团重庆有限公司 Big data retrieval method and device
CN106934001A (en) * 2017-03-03 2017-07-07 广州天源迪科信息技术有限公司 Distributed quick inventory inquiry system and method
CN107038174A (en) * 2016-02-04 2017-08-11 北京京东尚科信息技术有限公司 Method of data synchronization and device for data system
CN107071059A (en) * 2017-05-25 2017-08-18 腾讯科技(深圳)有限公司 Distributed caching service implementing method, device, terminal, server and system
CN107087232A (en) * 2017-04-07 2017-08-22 Ut斯达康(深圳)技术有限公司 The real-time status detection method and system of user
CN107133183A (en) * 2017-04-11 2017-09-05 深圳市云舒网络技术有限公司 A kind of cache data access method and system based on TCMU Virtual Block Devices
CN107797859A (en) * 2017-11-16 2018-03-13 山东浪潮云服务信息科技有限公司 A kind of dispatching method of timed task and a kind of dispatch server
CN108009250A (en) * 2017-12-01 2018-05-08 武汉斗鱼网络科技有限公司 A kind of more foundation of classification race data buffer storage, querying method and devices
CN109086380A (en) * 2018-07-25 2018-12-25 光大环境科技(中国)有限公司 The method and system of compression storage are carried out to historical data
CN110022257A (en) * 2018-01-08 2019-07-16 北京京东尚科信息技术有限公司 Distributed information system
CN110298677A (en) * 2018-03-22 2019-10-01 中移(苏州)软件技术有限公司 A kind of method, apparatus, electronic equipment and the storage medium of cloud computing resources charging
CN110535977A (en) * 2019-09-29 2019-12-03 深圳市网心科技有限公司 Document distribution method and device, computer installation and storage medium
CN111858664A (en) * 2020-06-23 2020-10-30 苏州浪潮智能科技有限公司 Data persistence method and system based on BMC
CN114422570A (en) * 2021-12-31 2022-04-29 深圳市联软科技股份有限公司 Cross-platform multi-module communication method and system
CN114938393A (en) * 2022-05-06 2022-08-23 中富通集团股份有限公司 Computer room data interaction method and system and storage medium
CN116360711A (en) * 2023-06-02 2023-06-30 杭州沃趣科技股份有限公司 Distributed storage processing method, device, equipment and medium
CN114938393B (en) * 2022-05-06 2024-04-19 中富通集团股份有限公司 Computer room data interaction method and system and storage medium

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9876873B1 (en) 2015-10-21 2018-01-23 Perfect Sense, Inc. Caching techniques
CN110232044B (en) * 2019-06-17 2023-03-28 浪潮通用软件有限公司 System and method for realizing big data summarizing and scheduling service
CN110825986B (en) * 2019-11-05 2023-03-21 上海携程商务有限公司 Method, system, storage medium and electronic device for client to request data
CN112383415A (en) * 2020-10-30 2021-02-19 上海蜜度信息技术有限公司 Server side marking method and equipment
CN115633005A (en) * 2022-10-26 2023-01-20 云度新能源汽车有限公司 Real-time high-concurrency connection processing method and processing system thereof

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101308513A (en) * 2008-06-27 2008-11-19 福建星网锐捷网络有限公司 Distributed system cache data synchronous configuration method and apparatus
CN101493826A (en) * 2008-12-23 2009-07-29 中兴通讯股份有限公司 Database system based on WEB application and data management method thereof
CN101562543A (en) * 2009-05-25 2009-10-21 阿里巴巴集团控股有限公司 Cache data processing method and processing system and device thereof

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102006330B (en) * 2010-12-01 2013-06-12 北京瑞信在线系统技术有限公司 Distributed cache system, data caching method and inquiring method of cache data

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101308513A (en) * 2008-06-27 2008-11-19 福建星网锐捷网络有限公司 Distributed system cache data synchronous configuration method and apparatus
CN101493826A (en) * 2008-12-23 2009-07-29 中兴通讯股份有限公司 Database system based on WEB application and data management method thereof
CN101562543A (en) * 2009-05-25 2009-10-21 阿里巴巴集团控股有限公司 Cache data processing method and processing system and device thereof

Cited By (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103853727A (en) * 2012-11-29 2014-06-11 深圳中兴力维技术有限公司 Method and system for improving large data volume query performance
CN103853727B (en) * 2012-11-29 2018-07-31 深圳中兴力维技术有限公司 Improve the method and system of big data quantity query performance
CN103118099A (en) * 2013-01-25 2013-05-22 福建升腾资讯有限公司 Hash algorithm based graphic image caching method
CN104461929A (en) * 2013-09-23 2015-03-25 中国银联股份有限公司 Distributed type data caching method based on interceptor
CN104461929B (en) * 2013-09-23 2018-03-23 中国银联股份有限公司 Distributed data cache method based on blocker
CN105426371A (en) * 2014-09-17 2016-03-23 上海三明泰格信息技术有限公司 Database system
CN106649408A (en) * 2015-11-04 2017-05-10 中国移动通信集团重庆有限公司 Big data retrieval method and device
CN107038174A (en) * 2016-02-04 2017-08-11 北京京东尚科信息技术有限公司 Method of data synchronization and device for data system
CN107038174B (en) * 2016-02-04 2020-11-24 北京京东尚科信息技术有限公司 Data synchronization method and device for data system
CN106331147B (en) * 2016-09-09 2019-09-06 深圳市彬讯科技有限公司 A kind of REDIS distribution call method
CN106331147A (en) * 2016-09-09 2017-01-11 深圳市彬讯科技有限公司 REDIS distributed type invoking method and system thereof
CN106506613A (en) * 2016-10-31 2017-03-15 大唐高鸿信安(浙江)信息科技有限公司 The data storage location encryption method of distributed key value storage systems
CN106506613B (en) * 2016-10-31 2018-04-13 大唐高鸿信安(浙江)信息科技有限公司 The data storage location encryption method of distributed key value storage systems
CN106934001A (en) * 2017-03-03 2017-07-07 广州天源迪科信息技术有限公司 Distributed quick inventory inquiry system and method
CN107087232A (en) * 2017-04-07 2017-08-22 Ut斯达康(深圳)技术有限公司 The real-time status detection method and system of user
CN107087232B (en) * 2017-04-07 2020-03-27 优地网络有限公司 User real-time state detection method and system
CN107133183A (en) * 2017-04-11 2017-09-05 深圳市云舒网络技术有限公司 A kind of cache data access method and system based on TCMU Virtual Block Devices
CN107133183B (en) * 2017-04-11 2020-06-30 深圳市联云港科技有限公司 Cache data access method and system based on TCMU virtual block device
CN107071059B (en) * 2017-05-25 2018-10-02 腾讯科技(深圳)有限公司 Distributed caching service implementing method, device, terminal, server and system
CN107071059A (en) * 2017-05-25 2017-08-18 腾讯科技(深圳)有限公司 Distributed caching service implementing method, device, terminal, server and system
CN107797859A (en) * 2017-11-16 2018-03-13 山东浪潮云服务信息科技有限公司 A kind of dispatching method of timed task and a kind of dispatch server
CN107797859B (en) * 2017-11-16 2021-08-20 山东浪潮云服务信息科技有限公司 Scheduling method of timing task and scheduling server
CN108009250A (en) * 2017-12-01 2018-05-08 武汉斗鱼网络科技有限公司 A kind of more foundation of classification race data buffer storage, querying method and devices
CN110022257A (en) * 2018-01-08 2019-07-16 北京京东尚科信息技术有限公司 Distributed information system
CN110298677A (en) * 2018-03-22 2019-10-01 中移(苏州)软件技术有限公司 A kind of method, apparatus, electronic equipment and the storage medium of cloud computing resources charging
CN110298677B (en) * 2018-03-22 2021-08-13 中移(苏州)软件技术有限公司 Cloud computing resource charging method and device, electronic equipment and storage medium
CN109086380A (en) * 2018-07-25 2018-12-25 光大环境科技(中国)有限公司 The method and system of compression storage are carried out to historical data
CN110535977A (en) * 2019-09-29 2019-12-03 深圳市网心科技有限公司 Document distribution method and device, computer installation and storage medium
CN111858664B (en) * 2020-06-23 2023-01-10 苏州浪潮智能科技有限公司 Data persistence method and system based on BMC
CN111858664A (en) * 2020-06-23 2020-10-30 苏州浪潮智能科技有限公司 Data persistence method and system based on BMC
CN114422570A (en) * 2021-12-31 2022-04-29 深圳市联软科技股份有限公司 Cross-platform multi-module communication method and system
CN114938393A (en) * 2022-05-06 2022-08-23 中富通集团股份有限公司 Computer room data interaction method and system and storage medium
CN114938393B (en) * 2022-05-06 2024-04-19 中富通集团股份有限公司 Computer room data interaction method and system and storage medium
CN116360711A (en) * 2023-06-02 2023-06-30 杭州沃趣科技股份有限公司 Distributed storage processing method, device, equipment and medium
CN116360711B (en) * 2023-06-02 2023-08-11 杭州沃趣科技股份有限公司 Distributed storage processing method, device, equipment and medium

Also Published As

Publication number Publication date
CN102739720B (en) 2015-01-28
WO2012139328A1 (en) 2012-10-18

Similar Documents

Publication Publication Date Title
CN102739720B (en) Distributed cache server system and application method thereof, cache clients and cache server terminals
CN101350030B (en) Method and apparatus for caching data
CN101340371B (en) Session keeping method and load balance apparatus
CN101064630B (en) Data synchronization method and system
EP2266043B1 (en) Cache optimzation
CN101764839B (en) Data access method and uniform resource locator (URL) server
CN102971732A (en) System architecture for integrated hierarchical query processing for key/value stores
CN103095758B (en) A kind of method processing file data in distributed file system and this system
JP2008542887A5 (en)
CN105635196A (en) Method and system of file data obtaining, and application server
JP2007066161A (en) Cache system
JP5516591B2 (en) Base station, web application server, system and method
CN101867607A (en) Distributed data access method, device and system
US8539041B2 (en) Method, apparatus, and network system for acquiring content
CN110191428A (en) A kind of data distributing method based on intelligent cloud platform
CN102035815A (en) Data acquisition method, access node and data acquisition system
CN101090371A (en) Method and system for user information management in at-once communication system
CN101764848A (en) Method and device for transmitting network files
CN102546674A (en) Directory tree caching system and method based on network storage device
CN102572011B (en) Method, device and system for processing data
CN102857547A (en) Distributed caching method and device
US11947553B2 (en) Distributed data processing
CN104702508A (en) Method and system for dynamically updating table items
US20110047165A1 (en) Network cache, a user device, a computer program product and a method for managing files
CN100407623C (en) Method and system for user data transaction in communication system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant