CN103106153A - Web cache replacement method based on access density - Google Patents

Web cache replacement method based on access density

Info

Publication number
CN103106153A
Authority
CN
China
Prior art keywords
access
cache
density
jump
interval
Prior art date
Legal status
Granted
Application number
CN2013100545545A
Other languages
Chinese (zh)
Other versions
CN103106153B (en)
Inventor
何慧
李乔
张伟哲
刘亚维
王健
王冬
Current Assignee
Beijing Topsec Technology Co Ltd
Beijing Topsec Network Security Technology Co Ltd
Beijing Topsec Software Co Ltd
Original Assignee
Harbin Institute of Technology
Priority date
Filing date
Publication date
Application filed by Harbin Institute of Technology filed Critical Harbin Institute of Technology
Priority to CN201310054554.5A priority Critical patent/CN103106153B/en
Publication of CN103106153A publication Critical patent/CN103106153A/en
Application granted granted Critical
Publication of CN103106153B publication Critical patent/CN103106153B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention relates to a web cache replacement method based on access density. It solves the problems that the existing least recently used (LRU) policy suffers from locality, the least frequently used (LFU) policy suffers from cache pollution, and the hit rate is low. The method comprises: judging whether the cache object already exists in the cache pool; judging whether the cache pool is full; deleting the cache object with the lowest density, adding the newly arrived cache object to the cache pool, and initializing its parameters; calculating the current access interval; judging whether this is the second access; calculating the access density according to the formula and updating the average access interval; updating the related values; and exiting. The method is applicable to the field of Internet storage.

Description

Web cache replacement method based on access density
Technical field
The present invention relates to a web cache replacement method.
Background technology
With the increasing diversity of Web data, the distribution of web content has gradually become a key factor affecting Web service performance. Mainstream data dissemination adopts content delivery network (CDN) technology, which redirects a user's request to the nearest server, thereby reducing access latency and the load on the origin server. To improve service quality, CDN providers deploy content proxy servers at many network boundaries; Akamai, for example, operates more than 25,000 content servers inside more than 1,000 networks in over 70 countries and regions. Current CDNs usually focus on the selection of proxy server locations and on content routing mechanisms, yet cache efficiency is also a key factor affecting content distribution performance. For the caching mechanism in web content management, the present invention proposes a cache replacement policy that mixes access density with content size, so as to reduce the traffic pressure during content distribution and the user's access latency.
Cache replacement mechanisms mainly rely on the following two principles: 1) frequently accessed information should be cached; 2) a change in the popularity of information manifests itself as a change in the access interval. Many cache replacement policies based on the re-reference time already exist; current policies mainly adopt frequency-based and recency-based criteria as the replacement benchmark, but none of them considers the complete access history. As a result, the current LRU policy suffers from locality, the LFU policy suffers from cache pollution, and the hit rate is low.
Summary of the invention
The present invention provides a web cache replacement method based on access density in order to solve the problems that the existing LRU policy suffers from locality and the LFU policy suffers from cache pollution, resulting in a low hit rate. The method is realized according to the following steps:
(1) Judge whether the cache object already exists in the cache pool; if not, go to step (2); if so, go to step (5);
(2) Judge whether the cache pool is full; if full, go to step (3); if not full, go to step (4);
(3) Delete the cache object with the lowest access density value, add the new cache object to the cache pool, initialize its access density, last access position, access frequency and average access interval, and go to step (10);
(4) Add the new cache object to the cache pool, initialize its access density, last access position, access frequency and average access interval, and go to step (10);
(5) The cache object already exists in the cache pool: calculate the current access interval;
(6) Judge whether this is the second access; if so, go to step (7); if not, go to step (8);
(7) Set the average access interval equal to the current access interval, increase the access frequency by 1, calculate the access density, and go to step (10);
(8) This is not the second access: calculate the access density according to the formula and update the average access interval;
(9) Update the last access position and increase the access frequency of the cache object by 1;
(10) Increase the total number of cache accesses by 1 and exit.
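To make the above flow concrete, the following minimal Python sketch illustrates steps (1) to (10). It is a non-normative illustration: the class names (CacheObject, AccessDensityCache), the initial density value and the running-average update used in step (8) are assumptions that the patent does not specify verbatim, and the density update in _update_density follows formula (1) given later in the description.

```python
class CacheObject:
    """Per-object bookkeeping used by the access-density policy (field names are illustrative)."""
    def __init__(self, key, position, initial_value=1.0):
        self.key = key
        self.ad_value = initial_value   # access density value (INITIAL_VALUE of formula (1) assumed to be 1.0)
        self.last_pos = position        # last access position on the overall access sequence
        self.freq = 1                   # accessed frequency
        self.avg_accintvl = 0.0         # average access interval


class AccessDensityCache:
    """Minimal sketch of steps (1)-(10); eviction removes the object with the lowest density."""
    def __init__(self, capacity, lam=0.8):
        self.capacity = capacity        # the cache pool holds at most `capacity` objects
        self.lam = lam                  # the lambda of formula (1); 0.8 is the empirical value chosen later
        self.total_accesses = 0         # total number of cache accesses
        self.pool = {}                  # cache pool: key -> CacheObject

    def _update_density(self, ad_value, avg_intvl, now_intvl):
        # Formula (1): shrink the density when the interval grows, boost it when the interval shrinks.
        if avg_intvl < now_intvl:
            return ad_value * self.lam * avg_intvl / (avg_intvl + now_intvl)
        if avg_intvl > now_intvl:
            return ad_value * (1 + self.lam * now_intvl / (avg_intvl + now_intvl))
        return ad_value                 # equal intervals leave the density unchanged

    def access(self, key):
        obj = self.pool.get(key)
        if obj is None:                                      # steps (1)-(4): the object is not in the cache pool
            if len(self.pool) >= self.capacity:              # step (3): pool full, evict the lowest-density object
                victim = min(self.pool.values(), key=lambda o: o.ad_value)
                del self.pool[victim.key]
            self.pool[key] = CacheObject(key, self.total_accesses)   # steps (3)/(4): insert and initialize
        else:                                                # steps (5)-(9): the object is already in the pool
            now_accintvl = self.total_accesses - obj.last_pos        # step (5): current access interval
            if obj.freq == 1:                                # step (7): second access; with avg == now,
                obj.avg_accintvl = now_accintvl              # formula (1) leaves the density unchanged
            else:                                            # step (8): apply formula (1), then refresh the average
                obj.ad_value = self._update_density(obj.ad_value, obj.avg_accintvl, now_accintvl)
                obj.avg_accintvl = (obj.avg_accintvl * (obj.freq - 1) + now_accintvl) / obj.freq
            obj.last_pos = self.total_accesses               # step (9)
            obj.freq += 1
        self.total_accesses += 1                             # step (10)
```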
Inventive principle:
One, suppose that the cache space can hold at most M cache objects; if the cache space is not full, the replacement process is consistent with other cache replacement policies;
Two, when the cache space is full, for a newly arriving access to cache object i, first calculate the current access interval now_accintvl_i of cache object i:
If the current access interval now_accintvl_i is larger than the average access interval avg_accintvl_i, the popularity of cache object i is on a declining trend, so the density value ad_value_i of cache object i is reduced;
If the current access interval now_accintvl_i is smaller than the average access interval avg_accintvl_i, the popularity of cache object i is on a rising trend, so the density value ad_value_i of cache object i is increased.
Effects of the invention:
By analyzing actual Web data access behaviour, it is found that the rate of change of the access interval is a highly accurate indicator with respect to the hit rate;
The present invention first extracts the URLs (Uniform Resource Locators) of a real network at a campus network gateway and analyzes the access behaviour of the users, finding that in an LRU (least recently used) cache, URLs of high popularity are often replaced by URLs of low popularity; secondly, to avoid this loss of hits, the rate of change of the access interval is adopted as the weight of an object, and cache replacement is carried out in combination with the space occupied by the object;
In the embodiments this policy is compared with LRU (least recently used), LFU (least frequently used) and GDSF (Greedy Dual Size Frequency); the results show that the replacement algorithm based purely on access interval variation raises the hit rate by 3% to 5%, and the hybrid replacement algorithm raises the byte hit rate by 5% to 8% compared with GDSF (Greedy Dual Size Frequency).
The CPBAD (cache policy based on access density) replacement algorithm adopted by the present invention sets an expiration time for each cache object, thereby thoroughly avoiding cache pollution. To limit the storage overhead of the counters, the counters are also replaced during periodic maintenance.
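The patent does not spell out the maintenance routine, so the following is only a hedged sketch of how such periodic counter replacement might look, building on the AccessDensityCache sketch above; the maintenance period, the expiration window and the halving of the density counters are illustrative assumptions, not values taken from the patent.

```python
MAINTENANCE_PERIOD = 10_000      # run maintenance every N cache accesses (illustrative choice)

def periodic_maintenance(cache, expire_after=50_000, decay=0.5):
    """Hypothetical maintenance pass: expire stale objects and age the density counters."""
    stale = [obj.key for obj in cache.pool.values()
             if cache.total_accesses - obj.last_pos > expire_after]
    for key in stale:             # objects past their expiration window leave the cache,
        del cache.pool[key]       # which prevents long-term cache pollution
    for obj in cache.pool.values():
        obj.ad_value *= decay     # aging the counters bounds their growth between maintenance runs
```

Such a pass could be triggered, for example, whenever cache.total_accesses is a multiple of MAINTENANCE_PERIOD.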
Description of the accompanying drawings
Fig. 1 is a block diagram of the modules of the present invention;
Fig. 2 shows the URL distribution of the data set in the specific embodiment: (a), (b) and (c) plot, in ordinary coordinates, the relation between URL access frequency and popularity for the 427,936 user requests extracted from the network logs of a campus gateway over three consecutive days, and (d), (e) and (f) plot the relation between URL access frequency and URL popularity for the same requests in log-log coordinates;
Fig. 3 is the access sequence diagram of the popular URLs in the embodiment;
Fig. 4 shows the influence of λ on the algorithm under different Zipf distributions in the embodiment;
Fig. 5 compares the hit rates of the cache replacement algorithms in the embodiment: (a) shows the hit rates of the different algorithms on Dataset1, (b) on Dataset2, (c) on Dataset3, and (d) on the total data set; the curves in the figure denote LFU, LRU and CPBAD respectively;
Fig. 6 compares the byte hit rates of the various replacement algorithms in the embodiment; the curves in the figure denote LRU, LFU, GDSF and CPBADS respectively.
Embodiment
Embodiment one: the web cache replacement method based on access density of the present embodiment is realized according to the following steps:
(1) Judge whether the cache object already exists in the cache pool; if not, go to step (2); if so, go to step (5);
(2) Judge whether the cache pool is full; if full, go to step (3); if not full, go to step (4);
(3) Delete the cache object with the lowest access density value, add the new cache object to the cache pool, initialize its access density, last access position, access frequency and average access interval, and go to step (10);
(4) Add the new cache object to the cache pool, initialize its access density, last access position, access frequency and average access interval, and go to step (10);
(5) The cache object already exists in the cache pool: calculate the current access interval;
(6) Judge whether this is the second access; if so, go to step (7); if not, go to step (8);
(7) Set the average access interval equal to the current access interval, increase the access frequency by 1, calculate the access density, and go to step (10);
(8) This is not the second access: calculate the access density according to the formula and update the average access interval;
(9) Update the last access position and increase the access frequency of the cache object by 1;
(10) Increase the total number of cache accesses by 1 and exit.
Effects of the present embodiment:
By analyzing actual Web data access behaviour, it is found that the rate of change of the access interval is a highly accurate indicator with respect to the hit rate;
The present embodiment first extracts the URLs (Uniform Resource Locators) of a real network at a campus network gateway and analyzes the access behaviour of the users, finding that in an LRU (least recently used) cache, URLs of high popularity are often replaced by URLs of low popularity; secondly, to avoid this loss of hits, the rate of change of the access interval is adopted as the weight of an object, and cache replacement is carried out in combination with the space occupied by the object;
In the embodiments this policy is compared with LRU (least recently used), LFU (least frequently used) and GDSF (Greedy Dual Size Frequency); the results show that the replacement algorithm based purely on access interval variation raises the hit rate by 3% to 5%, and the hybrid replacement algorithm raises the byte hit rate by 5% to 8% compared with GDSF (Greedy Dual Size Frequency).
The CPBAD (cache policy based on access density) replacement algorithm adopted by this embodiment sets an expiration time for each cache object, thereby thoroughly avoiding cache pollution. To limit the storage overhead of the counters, the counters are also replaced during periodic maintenance.
Embodiment two: the present embodiment differs from embodiment one in that the access interval described in step (3) is the number of cache accesses that have elapsed between this hit on the cache object and the previous hit on the same cache object. The other steps and parameters are identical to those of embodiment one.
Embodiment three: the present embodiment differs from embodiment one or two in that the access density described in step (3) is the ratio of the number of times the cache object has been accessed to the total number of cache accesses within a period of time. The other steps and parameters are identical to those of embodiment one or two.
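In terms of the bookkeeping kept for each cache object, the two definitions of embodiments two and three amount to the following small helpers (a sketch only; the argument names are illustrative and assume that the last access position, the per-object access count and the total access counter are maintained as in embodiment one):

```python
def access_interval(total_accesses, last_pos):
    """Embodiment two: cache accesses elapsed between this hit and the previous hit of the same object."""
    return total_accesses - last_pos

def access_density(object_accesses, total_accesses_in_period):
    """Embodiment three: accesses of the object divided by all cache accesses within the period."""
    return object_accesses / total_accesses_in_period
```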
The beneficial effects of the present invention are verified by the following examples:
One, first, the life cycle of cache objects in the cache space is analyzed;
Two, next, the web access behaviour and the distribution of the URLs are analyzed: to study the distribution characteristics of web requests, 427,936 user requests, comprising 167,981 distinct URLs, were extracted from the network logs of a campus gateway over three consecutive days;
Three, finally, from the popularity trend of cache objects it is found that a replacement algorithm based on density has better predictive ability: owing to the locality of LRU and the cache-pollution character of LFU, the CPBAD (cache policy based on access density) replacement policy proposed by the present embodiment observes the variation trend of a cache object over a longer period and reduces the weight of objects whose popularity is declining, thereby effectively avoiding cache pollution and improving the cache hit rate.
The web cache replacement method based on access density is realized according to the following steps:
(1) Judge whether the cache object already exists in the cache pool; if not, go to step (2); if so, go to step (5);
(2) Judge whether the cache pool is full; if full, go to step (3); if not full, go to step (4);
(3) Delete the cache object with the lowest access density value, add the new cache object to the cache pool, initialize its access density, last access position, access frequency and average access interval, and go to step (10);
(4) Add the new cache object to the cache pool, initialize its access density, last access position, access frequency and average access interval, and go to step (10);
(5) The cache object already exists in the cache pool: calculate the current access interval;
(6) Judge whether this is the second access; if so, go to step (7); if not, go to step (8);
(7) Set the average access interval equal to the current access interval, increase the access frequency by 1, calculate the access density, and go to step (10);
(8) This is not the second access: calculate the access density according to the formula and update the average access interval;
(9) Update the last access position and increase the access frequency of the cache object by 1;
(10) Increase the total number of cache accesses by 1 and exit.
Regarding the life cycle of the cache objects in step one of the present embodiment: each cache object has its own life cycle in the cache space, and the life cycle of an object A in the cache can be divided into two parts: (1) the active period, from entering the cache until the last access; (2) the ossified period, from the last access until the object is replaced out of the cache. The longer the active period of a cache object, the higher the caching performance. Although the future behaviour of a cache object cannot be predicted accurately, extending its active period, or shortening the time that long-ossified objects stay in the cache, is an effective way to improve cache efficiency;
Fig. 2 shows the distribution of the URLs in step two of the present embodiment: (a), (b) and (c) plot, in ordinary coordinates, the relation between URL access frequency and popularity for the 427,936 user requests extracted from the network logs of a campus gateway over three consecutive days, and (d), (e) and (f) plot the same relation in log-log coordinates. The blue straight lines in the figure are Zipf fits to the red-black curves, and the data set follows a typical Zipf-α distribution with 0.6 < α < 0.8. A popular object is accessed many times within a relatively short period, so extending the time such popular objects stay in the cache is an effective way to improve cache efficiency, and the access interval is a key metric that objectively describes an object's popularity;
Fig. 3 shows the access sequences of the 8 hottest URLs in the data set in step two: the y axis denotes these 8 URLs and the x axis denotes the order in which each URL occurs during the three days. As can be seen from Fig. 3, popular URLs have extremely short access intervals within certain periods; the access intervals of some popular URLs gradually shrink, such as url-2 and url-5, while the access intervals of other popular URLs are fairly even, such as url-8, and such URLs are generally not replaced under the common cache replacement policies. For URLs such as url-1, the access interval shows obvious periodicity but each period is long, so under the LRU cache policy they are replaced repeatedly, which reduces caching performance. For URLs such as url-7, although there is a certain periodicity, the interval within each period keeps growing; under the LFU policy the weight of such a URL rises continuously while its popularity actually shows a downward trend, which causes cache pollution;
From the above analysis, the access behaviour of popular URLs can be divided into the following 3 classes:
1) even and periodic, such as url-8 and url-4;
2) access intervals changing from sparse to dense, such as url-5;
3) sudden bursts, such as url-3 and url-6;
In view of the local recency of the LRU algorithm and the monotonically increasing weights of the LFU algorithm, the present embodiment proposes a cache replacement algorithm based on access interval variation to improve caching performance;
The access density value of a cache object is updated according to formula (1):
$$
ad\_value_i^{\,n}=
\begin{cases}
INITIAL\_VALUE, & \text{if } obj_i \text{ is a new object}\\
ad\_value_i^{\,n-1}\cdot\lambda\cdot\frac{avg\_accintvl_i}{avg\_accintvl_i+now\_accintvl_i}, & \text{if } avg\_accintvl_i<now\_accintvl_i,\ 0<\lambda<1\\
ad\_value_i^{\,n-1}\cdot\left(1+\lambda\cdot\frac{now\_accintvl_i}{avg\_accintvl_i+now\_accintvl_i}\right), & \text{if } avg\_accintvl_i>now\_accintvl_i
\end{cases}
\qquad(1)
$$
where C_total denotes the total number of cache accesses within a certain period, last_i denotes the position on the overall access sequence at which object i was last accessed, now_i denotes the position of the current access to object i on the overall access sequence, ad_value_i denotes the density value of object i, freq_i denotes the total number of times object i has been accessed, avg_accintvl_i denotes the average access interval of object i, now_accintvl_i denotes the current access interval of object i, and the superscript n indexes the successive accesses of the cache object;
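As a worked example of formula (1), with illustrative numbers and the empirical value λ = 0.8 chosen below: if avg_accintvl_i = 10 and now_accintvl_i = 30, the object is being accessed more slowly than its history suggests, so its density is multiplied by λ·10/(10+30) = 0.2 and ad_value_i drops to one fifth of its previous value; if instead avg_accintvl_i = 30 and now_accintvl_i = 10, the density is multiplied by 1 + λ·10/(30+10) = 1.2, a 20% increase.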
In order to verify the CPBAD algorithm effectively, it is compared with the LRU and LFU algorithms on a real network data set; the data set is split into three sub-data sets by date, as shown in Table 1:
Table 1. Gateway log data sets
In order to determine the value of λ in formula (1), the influence of λ on the algorithm must be determined. The present embodiment therefore first generates 10,000 URLs and configures a cache that can hold 500 URL objects, and observes the performance of the algorithm under different Zipf distributions by varying λ. Fig. 4 shows the influence of λ on the algorithm under the different Zipf distributions, with curves for α = 0.7, α = 0.8, α = 0.9 and α = 1.0;
It can be clearly seen from Fig. 4 that the hit rate is higher when λ lies in the interval [0.6, 0.8]; in the subsequent experiments the present embodiment therefore selects λ = 0.8 as the empirical value;
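A sweep of this kind can be reproduced with a short script such as the one below. It is a sketch only: it reuses the AccessDensityCache class from the earlier sketch, and the request count, random seed and the particular λ values are arbitrary choices rather than parameters stated in the patent.

```python
import random

def zipf_trace(num_urls=10_000, num_requests=200_000, alpha=0.8, seed=1):
    """Generate a synthetic request trace whose URL popularity follows a Zipf(alpha) law."""
    rng = random.Random(seed)
    weights = [1.0 / (rank ** alpha) for rank in range(1, num_urls + 1)]
    return rng.choices(range(num_urls), weights=weights, k=num_requests)

def hit_rate(trace, capacity=500, lam=0.8):
    """Replay the trace against the access-density cache and report the hit rate."""
    cache = AccessDensityCache(capacity, lam)
    hits = 0
    for url in trace:
        if url in cache.pool:
            hits += 1
        cache.access(url)
    return hits / len(trace)

trace = zipf_trace(alpha=0.8)
for lam in (0.2, 0.4, 0.6, 0.8, 0.9):
    print(f"lambda={lam}: hit rate {hit_rate(trace, lam=lam):.4f}")
```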
In order to compare the replacement policies better, the present embodiment is tested on the 3 sub-data sets and on the total data set. Fig. 5 shows the hit rates of the different algorithms: (a) on Dataset1, (b) on Dataset2, (c) on Dataset3, and (d) on the total data set, with curves for LFU, LRU and CPBAD;
It can be observed that the CPBAD algorithm outperforms the LRU and LFU algorithms. From Fig. 5(c) it can be seen that when the cache size is 500, LFU is clearly better than LRU; and, as known from Fig. 2, the α value of dataset3 is greater than that of the other 2 data sets, which means that the larger the α value, the higher the hit rate of LFU. Meanwhile, when the cache size is 2000 the hit rates of the three algorithms are almost identical, because once the cache size grows to a certain point the number of popular URLs is close to the cache size, and enlarging the cache space further cannot raise the hit rate. In Fig. 5(d), although the cache space rises to 8000, LRU is still lower than the others, because a large number of URLs of low popularity are stored in the cache. The advantage of the CPBAD algorithm is that when hot data gradually turns cold its weight in the cache space can be reduced, which raises the hit rate. In Fig. 5(c), CPBAD and LFU are close when the cache space is 2000, and LFU is even higher than CPBAD when the cache space is 500; this is because the amount of persistently hot data is close to the cache size, so the cache under LFU always stays in a high-hit state, whereas under CPBAD the variation of the density value is weaker than LFU at keeping the cache fresh, so some hot data is replaced out of the cache and the hit rate drops. In general, however, the CPBAD algorithm is better than LFU and LRU in most cases;
When this replacement algorithm is actually deployed, the cache size cannot simply be measured by the number of URLs but must be measured by the storage size, so the byte hit rate is of greater value for a real system. Considering the diversity of web content sizes, the present embodiment assumes that the size of a URL page is uniformly distributed between 1 KB and 1 MB. Formula (1) is modified by adding a file size parameter, as shown in formula (2), and size_ad_value_i^n is used as the weight of the cache object.
$$size\_ad\_value_i^{\,n}=ad\_value_i^{\,n}+\log\!\left(\frac{obj\_size}{cachesize}\right)\qquad(2)$$
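A hedged sketch of how formula (2) could drive eviction in a byte-budgeted cache follows, reusing the AccessDensityCache sketch from earlier; the helper names, the byte accounting and the choice of base-10 logarithm are assumptions (the patent does not state the logarithm base).

```python
import math

def size_ad_value(obj, obj_size, cache_size_bytes):
    """Formula (2): the size-adjusted weight used for eviction in the CPBADS variant."""
    # Base-10 logarithm is an assumption; the patent only writes log(obj_size / cachesize).
    return obj.ad_value + math.log10(obj_size / cache_size_bytes)

def evict_until_fits(cache, sizes, cache_size_bytes, used_bytes, incoming_size):
    """Evict the lowest-weight objects until the incoming object fits within the byte budget."""
    while cache.pool and used_bytes + incoming_size > cache_size_bytes:
        victim = min(cache.pool.values(),
                     key=lambda o: size_ad_value(o, sizes[o.key], cache_size_bytes))
        used_bytes -= sizes.pop(victim.key)
        del cache.pool[victim.key]
    return used_bytes
```

Here sizes maps cache keys to object sizes in bytes; the caller records the incoming object's size after the eviction loop returns.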
In the experiment the cache space is limited to 2 GB, and the URL distribution uses the total data set (Zipf α = 0.645). Fig. 6 shows the byte hit rate of every cache replacement algorithm at different cache sizes, with curves for LRU, LFU, GDSF and CPBADS. As can be seen from Fig. 6, when the cache is small GDSF (Greedy Dual Size Frequency) is better than CPBADS, yet as the cache space gradually increases the algorithm proposed by the present embodiment is clearly better than GDSF. The reason lies in the calculation of the object weights: when space is sufficient, some large files that are presently turning from hot to cold cannot be replaced quickly under GDSF, whereas CPBAD reduces their ad_value quickly, thereby lowering size_ad_value and achieving a more effective space utilization.
Generally speaking, for web content distribution the cache replacement algorithm based on density achieves a higher byte hit rate than GDSF, which in turn reduces the performance loss of the web servers. A higher byte hit rate also means that the synchronization bandwidth consumed within a distributed Web cluster decreases. Tests comparing this policy with LRU (least recently used), LFU (least frequently used) and GDSF (Greedy Dual Size Frequency) show that the replacement algorithm based purely on access interval variation raises the hit rate by 3% to 5%, and the hybrid replacement algorithm raises the byte hit rate by 5% to 8% compared with GDSF (Greedy Dual Size Frequency).

Claims (3)

  1. A web cache replacement method based on access density, characterized in that the web cache replacement method based on access density is realized according to the following steps:
    (1) judging whether the cache object already exists in the cache pool; if not, going to step (2); if so, going to step (5);
    (2) judging whether the cache pool is full; if full, going to step (3); if not full, going to step (4);
    (3) deleting the cache object with the lowest access density value, adding the new cache object to the cache pool, initializing its access density, last access position, access frequency and average access interval, and going to step (10);
    (4) adding the new cache object to the cache pool, initializing its access density, last access position, access frequency and average access interval, and going to step (10);
    (5) the cache object already exists in the cache pool: calculating the current access interval;
    (6) judging whether this is the second access; if so, going to step (7); if not, going to step (8);
    (7) setting the average access interval equal to the current access interval, increasing the access frequency by 1, calculating the access density, and going to step (10);
    (8) this not being the second access: calculating the access density according to the formula and updating the average access interval;
    (9) updating the last access position and increasing the access frequency of the cache object by 1;
    (10) increasing the total number of cache accesses by 1 and exiting.
  2. The web cache replacement method based on access density according to claim 1, characterized in that the access interval described in step (3) is the number of cache accesses that have elapsed between this hit on the cache object and the previous hit on the same cache object.
  3. The web cache replacement method based on access density according to claim 1, characterized in that the access density described in step (3) is the ratio of the number of times the cache object has been accessed within a period of time to the total number of cache accesses.
CN201310054554.5A 2013-02-20 2013-02-20 Web cache replacement method based on access density Active CN103106153B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310054554.5A CN103106153B (en) Web cache replacement method based on access density

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310054554.5A CN103106153B (en) Web cache replacement method based on access density

Publications (2)

Publication Number Publication Date
CN103106153A true CN103106153A (en) 2013-05-15
CN103106153B CN103106153B (en) 2016-04-06

Family

ID=48314026

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310054554.5A Active CN103106153B (en) Web cache replacement method based on access density

Country Status (1)

Country Link
CN (1) CN103106153B (en)

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103440207A (en) * 2013-07-31 2013-12-11 北京智谷睿拓技术服务有限公司 Caching method and caching device
CN103793517A (en) * 2014-02-12 2014-05-14 浪潮电子信息产业股份有限公司 File system log dump dynamic capacity-increase method based on monitoring mechanism
CN106294216A (en) * 2016-08-11 2017-01-04 电子科技大学 A kind of buffer replacing method for wind power system
CN106383792A (en) * 2016-09-20 2017-02-08 北京工业大学 Missing perception-based heterogeneous multi-core cache replacement method
CN106681995A (en) * 2015-11-05 2017-05-17 阿里巴巴集团控股有限公司 Data caching method and data query method and device
CN106909518A (en) * 2017-01-24 2017-06-30 朗坤智慧科技股份有限公司 A kind of real time data caching mechanism
CN106973088A (en) * 2017-03-16 2017-07-21 中国人民解放军理工大学 A kind of buffering updating method and network of the joint LRU and LFU based on shift in position
CN107291635A (en) * 2017-06-16 2017-10-24 郑州云海信息技术有限公司 A kind of buffer replacing method and device
CN107451071A (en) * 2017-08-04 2017-12-08 郑州云海信息技术有限公司 A kind of caching replacement method and system
US10095628B2 (en) 2015-09-29 2018-10-09 International Business Machines Corporation Considering a density of tracks to destage in groups of tracks to select groups of tracks to destage
US10120811B2 (en) 2015-09-29 2018-11-06 International Business Machines Corporation Considering a frequency of access to groups of tracks and density of the groups to select groups of tracks to destage
CN108829344A (en) * 2018-05-24 2018-11-16 北京百度网讯科技有限公司 Date storage method, device and storage medium
US10223286B2 (en) 2014-08-05 2019-03-05 International Business Machines Corporation Balanced cache for recently frequently used data
US10241918B2 (en) 2015-09-29 2019-03-26 International Business Machines Corporation Considering a frequency of access to groups of tracks to select groups of tracks to destage
CN111258929A (en) * 2018-12-03 2020-06-09 北京京东尚科信息技术有限公司 Cache control method, device and computer readable storage medium
CN111400308A (en) * 2020-02-21 2020-07-10 中国平安财产保险股份有限公司 Processing method of cache data, electronic device and readable storage medium
CN112733060A (en) * 2021-01-13 2021-04-30 中南大学 Cache replacement method and device based on session clustering prediction and computer equipment
CN113676513A (en) * 2021-07-15 2021-11-19 东北大学 Deep reinforcement learning-driven intra-network cache optimization method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6266742B1 (en) * 1997-10-27 2001-07-24 International Business Machines Corporation Algorithm for cache replacement
US6425057B1 (en) * 1998-08-27 2002-07-23 Hewlett-Packard Company Caching protocol method and system based on request frequency and relative storage duration
CN1869979A (en) * 2005-12-30 2006-11-29 华为技术有限公司 Buffer store management method

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6266742B1 (en) * 1997-10-27 2001-07-24 International Business Machines Corporation Algorithm for cache replacement
US6425057B1 (en) * 1998-08-27 2002-07-23 Hewlett-Packard Company Caching protocol method and system based on request frequency and relative storage duration
CN1869979A (en) * 2005-12-30 2006-11-29 华为技术有限公司 Buffer store management method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
QIAO LI et al.: "A novel cache replacement policy for ISP merged CDN", International Conference on Parallel and Distributed Systems, 19 December 2012 (2012-12-19), pages 708-709, XP032311005, DOI: 10.1109/ICPADS.2012.106 *
ZHANG WANGJUN: "Research on Web cache replacement strategies and prefetching techniques", China Master's Theses Full-text Database, Information Science and Technology, 30 September 2011 (2011-09-30), pages 137-27 *
ZHANG YAN et al.: "Research on Web cache optimization models", Computer Engineering, vol. 35, no. 8, 30 April 2009 (2009-04-30), pages 85-90 *
SHI LEI et al.: "The relationship between Web cache hit rate and byte hit rate", Computer Engineering, vol. 37, no. 5, 31 July 2007 (2007-07-31), pages 84-86 *

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103440207A (en) * 2013-07-31 2013-12-11 北京智谷睿拓技术服务有限公司 Caching method and caching device
CN103440207B (en) * 2013-07-31 2017-02-22 北京智谷睿拓技术服务有限公司 Caching method and caching device
CN103793517A (en) * 2014-02-12 2014-05-14 浪潮电子信息产业股份有限公司 File system log dump dynamic capacity-increase method based on monitoring mechanism
US10223286B2 (en) 2014-08-05 2019-03-05 International Business Machines Corporation Balanced cache for recently frequently used data
US10585807B2 (en) 2014-08-05 2020-03-10 International Business Machines Corporation Balanced cache for recently frequently used data
US11200174B2 (en) 2015-09-29 2021-12-14 International Business Machines Corporation Considering a frequency of access to groups of tracks and density of the groups to select groups of tracks to destage
US10095628B2 (en) 2015-09-29 2018-10-09 International Business Machines Corporation Considering a density of tracks to destage in groups of tracks to select groups of tracks to destage
US10120811B2 (en) 2015-09-29 2018-11-06 International Business Machines Corporation Considering a frequency of access to groups of tracks and density of the groups to select groups of tracks to destage
US10417138B2 (en) 2015-09-29 2019-09-17 International Business Machines Corporation Considering a frequency of access to groups of tracks and density of the groups to select groups of tracks to destage
US10241918B2 (en) 2015-09-29 2019-03-26 International Business Machines Corporation Considering a frequency of access to groups of tracks to select groups of tracks to destage
US10275360B2 (en) 2015-09-29 2019-04-30 International Business Machines Corporation Considering a density of tracks to destage in groups of tracks to select groups of tracks to destage
CN106681995A (en) * 2015-11-05 2017-05-17 阿里巴巴集团控股有限公司 Data caching method and data query method and device
CN106294216A (en) * 2016-08-11 2017-01-04 电子科技大学 A kind of buffer replacing method for wind power system
CN106294216B (en) * 2016-08-11 2019-03-05 电子科技大学 A kind of buffer replacing method for wind power system
CN106383792B (en) * 2016-09-20 2019-07-12 北京工业大学 A kind of heterogeneous polynuclear cache replacement method based on missing perception
CN106383792A (en) * 2016-09-20 2017-02-08 北京工业大学 Missing perception-based heterogeneous multi-core cache replacement method
CN106909518A (en) * 2017-01-24 2017-06-30 朗坤智慧科技股份有限公司 A kind of real time data caching mechanism
CN106973088A (en) * 2017-03-16 2017-07-21 中国人民解放军理工大学 A kind of buffering updating method and network of the joint LRU and LFU based on shift in position
CN106973088B (en) * 2017-03-16 2019-07-12 中国人民解放军理工大学 A kind of buffering updating method and network of the joint LRU and LFU based on shift in position
CN107291635A (en) * 2017-06-16 2017-10-24 郑州云海信息技术有限公司 A kind of buffer replacing method and device
CN107451071A (en) * 2017-08-04 2017-12-08 郑州云海信息技术有限公司 A kind of caching replacement method and system
CN108829344A (en) * 2018-05-24 2018-11-16 北京百度网讯科技有限公司 Date storage method, device and storage medium
US11307769B2 (en) 2018-05-24 2022-04-19 Beijing Baidu Netcom Science Technology Co., Ltd. Data storage method, apparatus and storage medium
CN111258929A (en) * 2018-12-03 2020-06-09 北京京东尚科信息技术有限公司 Cache control method, device and computer readable storage medium
CN111258929B (en) * 2018-12-03 2023-09-26 北京京东尚科信息技术有限公司 Cache control method, device and computer readable storage medium
CN111400308A (en) * 2020-02-21 2020-07-10 中国平安财产保险股份有限公司 Processing method of cache data, electronic device and readable storage medium
CN112733060A (en) * 2021-01-13 2021-04-30 中南大学 Cache replacement method and device based on session clustering prediction and computer equipment
CN112733060B (en) * 2021-01-13 2023-12-01 中南大学 Cache replacement method and device based on session cluster prediction and computer equipment
CN113676513A (en) * 2021-07-15 2021-11-19 东北大学 Deep reinforcement learning-driven intra-network cache optimization method

Also Published As

Publication number Publication date
CN103106153B (en) 2016-04-06

Similar Documents

Publication Publication Date Title
CN103106153B (en) Web cache replacement method based on access density
Wang et al. A novel dynamic network data replication scheme based on historical access record and proactive deletion
Xu et al. Characterizing facebook's memcached workload
Lymberopoulos et al. Pocketweb: instant web browsing for mobile devices
EP2680152B1 (en) Process for managing the storage of a list of N items in a memory cache of C items of a cache system
Puzhavakath Narayanan et al. Reducing latency through page-aware management of web objects by content delivery networks
Shi et al. An applicative study of Zipf’s law on web cache
Yin et al. Power-aware prefetch in mobile environments
Miao et al. Multi-level plru cache algorithm for content delivery networks
Zhao et al. GDSF-based low access latency web proxy caching replacement algorithm
Hassine et al. Caching strategies based on popularity prediction in content delivery networks
Wu et al. Web cache replacement strategy based on reference degree
Alkassab et al. Benefits and schemes of prefetching from cloud to fog networks
Zhang et al. A dynamic social content caching under user mobility pattern
Zhijun et al. Towards efficient data access in mobile cloud computing using pre-fetching and caching
Liu et al. Proactive data caching and replacement in the edge computing environment
Rodríguez et al. Improving performance of multiple-level cache systems
Santhanakrishnan et al. Towards universal mobile caching
CN106294216B (en) A kind of buffer replacing method for wind power system
Gracia et al. Meppm-memory efficient prediction by partial match model for web prefetching
Katsaros et al. Cache management for Web-powered databases
Wang et al. Feasibility analysis and self-organizing algorithm for RAN cooperative caching
Chuchuk et al. SISSA: Caching for dataset-based workloads with heterogeneous file sizes
Panagiotou et al. Performance enhancement in WSN through data cache replacement policies
Qian et al. Pre‐judgment and Incomplete Allocation Approach for Query Result Cache

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230826

Address after: 100085 4th floor, building 3, yard 1, Shangdi East Road, Haidian District, Beijing

Patentee after: Beijing Topsec Network Security Technology Co.,Ltd.

Patentee after: Topsec Technologies Inc.

Patentee after: BEIJING TOPSEC SOFTWARE Co.,Ltd.

Address before: No. 92, West Dazhi Street, Nangang District, Harbin, Heilongjiang 150001

Patentee before: HARBIN INSTITUTE OF TECHNOLOGY

TR01 Transfer of patent right