CN103383666A - Method and system for improving cache prefetch data locality and cache access method - Google Patents

Method and system for improving cache prefetch data locality and cache access method

Info

Publication number
CN103383666A
CN103383666A CN2013102982467A CN201310298246A
Authority
CN
China
Prior art keywords
prefetch
record set
data record
data set
data
Prior art date
Legal status
Granted
Application number
CN2013102982467A
Other languages
Chinese (zh)
Other versions
CN103383666B (en)
Inventor
严得辰
刘立坤
Current Assignee
Institute of Computing Technology of CAS
Original Assignee
Institute of Computing Technology of CAS
Priority date
Filing date
Publication date
Application filed by Institute of Computing Technology of CAS filed Critical Institute of Computing Technology of CAS
Priority to CN201310298246.7A priority Critical patent/CN103383666B/en
Publication of CN103383666A publication Critical patent/CN103383666A/en
Application granted granted Critical
Publication of CN103383666B publication Critical patent/CN103383666B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention provides a method and system for improving the locality of cache prefetch data. With the method, the prefetch hit count of each prefetch data record set in the cache is tracked; when a prefetch data record set whose prefetch hit count is below a set hit threshold is swapped out of the cache, the records of that set that were actually accessed are written to a new storage region, where they form a new prefetch data record set together with other data in that region. The method can effectively reduce the number of prefetches and improve the cache hit rate.

Description

Method and system for improving cache prefetch data locality, and cache access method
Technical field
The present invention relates to caching technology, and in particular to a way of organizing prefetch data that improves its locality and thereby raises the cache hit rate.
Background technology
A cache is an essential component of a multilevel storage system, and prefetching is an important technique for raising cache efficiency. When a data record p_{i,j} is accessed, its access location is first looked up (by index search, metadata search, and so on). If the lookup misses in the cache, prefetching uses a single access to the lower storage level to bring the whole prefetch data record set P_i: {p_{i,1}, ..., p_{i,n}} that contains p_{i,j} into the cache, and the access locations of p_{i,1}...p_{i,n} are updated to their positions in the cache, in the expectation that p_{i,1}~p_{i,n} will be accessed again afterwards. Here P_i is called the prefetch entry of p_{i,1}, ..., p_{i,n}, and p_{i,j} is the owner record of this prefetch. The accessed data records may be of fixed or of variable length. The spatial locality of the prefetched data determines whether the prefetch mechanism is effective: prefetch data with good spatial locality lets a single prefetch yield many cache hits and reduces accesses to the lower storage level, whereas prefetch data with poor spatial locality keeps the prefetch mechanism from paying off. To let the prefetch mechanism deliver a larger benefit, the spatial locality of the prefetch data needs to be improved.
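For illustration only, the following minimal Python sketch shows the prefetch-on-miss behaviour described above; the class and method names (PrefetchCache, fetch_set, and so on) are assumptions, not taken from the patent:

```python
# Minimal sketch (hypothetical names): on a cache miss, one access to the
# lower storage level brings the whole prefetch data record set containing
# the requested record into the cache, so later accesses to records of the
# same set can hit without touching the lower level again.

class PrefetchCache:
    def __init__(self, lower_store):
        self.records = {}                # record id -> data currently cached
        self.lower_store = lower_store   # assumed to expose fetch_set()

    def access(self, record_id):
        if record_id in self.records:                  # cache hit
            return self.records[record_id]
        # cache miss: fetch the whole prefetch record set in one lower-level access
        record_set = self.lower_store.fetch_set(record_id)   # {id: data, ...}
        self.records.update(record_set)                # eviction is omitted here
        return self.records[record_id]
```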
Summary of the invention
The object of the invention is therefore to overcome the above shortcomings of the prior art and provide a method for improving the locality of cache prefetch data.
This object is achieved through the following technical solutions.
In one aspect, the invention provides a method for improving cache prefetch data locality, the method comprising:
counting the prefetch hit count of each prefetch data record set in the cache, the prefetch hit count being the total number of data records of that set that have been accessed; and
for a prefetch data record set whose prefetch hit count is below a set hit threshold, when that set is swapped out of the cache, writing the accessed data records of the set to a new storage region, where they form a new prefetch data record set together with other data in that region.
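As an informal illustration of this aspect, the sketch below keeps a per-set prefetch hit count and, when a set below the threshold is evicted, copies its accessed records into a new prefetch record set; all structure and function names are assumptions rather than the patent's own code:

```python
# Sketch of the core idea: per-set prefetch hit counting, and redundant
# copying of the accessed ("hit") records of a cold set into a new
# prefetch record set when the cold set is evicted from the cache.

class CachedSet:
    def __init__(self, records):
        self.records = records        # record id -> data for this prefetch set
        self.hit_ids = set()          # ids of records accessed while cached
        self.hits = 0                 # prefetch hit count of this prefetch

def record_hit(cached_set, record_id):
    cached_set.hit_ids.add(record_id)
    cached_set.hits += 1

def evict(cached_set, hit_threshold, new_set):
    # new_set is a dict acting as the new prefetch record set under construction
    if cached_set.hits < hit_threshold:
        for rid in cached_set.hit_ids:
            new_set[rid] = cached_set.records[rid]    # redundant copy
    cached_set.hit_ids.clear()
    cached_set.hits = 0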
The method may further comprise:
for each prefetch data record set in the cache:
marking the first accessed data record of the set as a special record;
computing the access interval between the currently accessed data record and the previously accessed data record of the set and, if that interval exceeds a set interval threshold, marking the currently accessed data record as a special record; and
for a prefetch data record set whose prefetch hit count is below the hit threshold, when that set is swapped out of the cache, changing the prefetch entry of every data record marked as a special record to the new prefetch data record set.
In the method, the access interval may be a time interval, an access-count interval, a user-defined logical interval, or a combination of these.
The method may also comprise, for a prefetch data record set whose prefetch hit count is below the hit threshold, changing the prefetch entries of all accessed data records of the set to the new prefetch data record set when that set is swapped out of the cache.
In another aspect, the invention provides a system for improving cache prefetch data locality, the system comprising:
a device for counting the prefetch hit count of each prefetch data record set in the cache, the prefetch hit count being the total number of data records of that set that have been accessed; and
a device for, for a prefetch data record set whose prefetch hit count is below a set hit threshold, writing the accessed data records of that set to a new storage region when that set is swapped out of the cache, where they form a new prefetch data record set together with other data in that region.
The system may further comprise a marking device and a modification device. The marking device may, for each prefetch data record set in the cache:
mark the first accessed data record of the set as a special record; and
compute the access interval between the currently accessed data record and the previously accessed data record of the set and, if that interval exceeds a set interval threshold, mark the currently accessed data record as a special record.
The modification device may, for a prefetch data record set whose prefetch hit count is below the hit threshold, change the prefetch entry of every data record marked as a special record to the new prefetch data record set when that set is swapped out of the cache.
In yet another aspect, the invention provides a cache access method, the method comprising:
for a data record to be accessed, if the access hits in the cache, incrementing by 1 the prefetch hit count of the prefetch data record set in the cache that contains the record;
if the access misses in the cache and a free cache entry exists, prefetching the prefetch data record set that contains the record into that cache entry and incrementing the prefetch hit count of that prefetch data record set by 1; and
if the access misses in the cache and no free cache entry exists, performing the following:
determining whether the prefetch hit count of the prefetch data record set in a selected cache entry is below the set hit threshold and, if so, writing the accessed data records of that set to a new storage region, where they form a new prefetch data record set together with other data in that region; and
prefetching the prefetch data record set that contains the record to be accessed into the selected cache entry and incrementing the prefetch hit count of that prefetch data record set by 1.
The cache access method may further comprise:
for each prefetch data record set in the cache:
marking the first accessed data record of the set as a special record;
computing the access interval between the currently accessed data record and the previously accessed data record of the set and, if that interval exceeds a set interval threshold, marking the currently accessed data record as a special record; and
when a prefetch data record set whose prefetch hit count is below the hit threshold is swapped out of the cache, changing the prefetch entry of every data record marked as a special record to the new prefetch data record set.
The cache access method may also comprise, when a prefetch data record set whose prefetch hit count is below the hit threshold is swapped out of the cache, changing the prefetch entries of all accessed data records of the set to the new prefetch data record set.
The cache access method may further comprise:
when the number of data records in the new prefetch data record set reaches a set threshold, writing, for each prefetch data record set in the cache whose prefetch hit count is below the set hit threshold, the accessed data records of that set into the new prefetch data record set; and
then stopping writing to the new prefetch data record set and obtaining free cache space to store another new prefetch data record set.
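Taken together, the access flow just summarized can be sketched as follows (Python, illustrative only; the cache-entry operations, victim selection, and lower-level fetch are hypothetical stubs, not part of the patent):

```python
# Sketch of the cache access method: hit -> count; miss with a free entry ->
# prefetch into it; miss with no free entry -> if the victim set is "cold"
# (hits below threshold), copy its accessed records to the new region first.

def access(cache, record_id, hit_threshold, new_region):
    entry = cache.find_entry_holding(record_id)
    if entry is not None:                        # case 1: cache hit
        entry.hits += 1
        return entry.read(record_id)

    entry = cache.free_entry()                   # case 2: miss, free entry exists
    if entry is None:                            # case 3: miss, no free entry
        entry = cache.select_victim()            # e.g. by LRU
        if entry.hits < hit_threshold:           # victim's set is "cold"
            for rid in entry.accessed_ids:       # write its hit records to the new set
                new_region.append(entry.read(rid))
    entry.load(cache.lower_store.fetch_set(record_id))   # prefetch the whole set
    entry.accessed_ids = {record_id}
    entry.hits = 1                               # the triggering access counts as a hit
    return entry.read(record_id)
```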
Compared with the prior art, the method for improving cache prefetch data locality provided by the invention can effectively reduce the number of prefetches and improve the cache hit rate.
Description of drawings
Embodiments of the present invention are further illustrated below with reference to the accompanying drawings, in which:
Fig. 1 is a flow diagram of the method for improving cache prefetch data locality according to an embodiment of the present invention;
Fig. 2 is a flow diagram of the cache access method according to an embodiment of the present invention.
Embodiment
To make the purpose, technical solution, and advantages of the present invention clearer, the invention is described in more detail below through specific embodiments with reference to the accompanying drawings. It should be understood that the specific embodiments described here serve only to explain the invention and are not intended to limit it.
Improving cache prefetch data locality requires solving two problems: determining which cache prefetch data have poor locality and need improvement, and deciding how to improve the locality of those data. Between the time a prefetch data record set (for example P_i) is prefetched into the cache and the time it is swapped out, some of the data records in the set are accessed. The set of those accessed records (for example H_i: {p_{i,j1}, ..., p_{i,jm}}, with H_i ⊆ P_i) may be called the prefetch hit data of this prefetch, and the number of accessed records (here m) may be called the prefetch hit count of this prefetch. For example, after the prefetch data record set P_i has been prefetched into the cache, its prefetch hit count is expected to exceed a threshold fairly quickly. If it does not, the prefetch of P_i has contributed little or nothing; this situation is called a prefetch that did not pay off, and it indicates that the spatial locality inside this prefetch data is poor, so the prefetch data should be laid out again appropriately to improve locality.
Fig. 1 shows a flow diagram of the method for improving cache prefetch data locality according to an embodiment of the invention. The method counts the prefetch hit count of each prefetch data record set in the cache; for a prefetch data record set whose prefetch hit count is below the set hit threshold (for example P_i), when that set is swapped out of the cache, the accessed data records of the set (that is, the prefetch hit data, for example H_i) are redundantly written to a new storage region, where they form a new prefetch data record set (for example P) together with other data in that region.
The hit threshold may be set according to the actual system environment or user requirements, and may be a static or a dynamic threshold. For example, the hit threshold may be a predetermined integer, or a percentage of the number of elements in the prefetch data record set, such as 10% × |P_i|, 20% × |P_i|, or 30% × |P_i|. The prefetch hit count is the total number of accessed data records in the set; in other embodiments it may instead be the number of accesses to the prefetch data record set. The prefetch hit data redundantly written into the new prefetch data record set by the above method, together with the other data records in that set, form a batch of new prefetch data with better spatial locality. The other data records in that set may be redundant copies written by the same method or newly produced records being written; the set may contain duplicate records, or duplicates may be removed in some way. In other embodiments, several such new prefetch data record sets may exist at the same time; when there are several, the accessed data records of a prefetch data record set whose locality is to be improved are written into one of them according to some classification. The storage medium of the new prefetch data record set is not restricted: it may be in a write cache, in a storage medium of another level, or migrated from the write cache to a storage medium of another level. When a data record that has several copies is accessed, the copy in the cache is accessed preferentially; if it is not in the cache, it is brought into the cache by prefetching and then accessed.
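As a small illustration of the threshold choice described above (the helper name and defaults are assumptions, not from the patent):

```python
# Hit threshold as described above: either a predetermined integer (static)
# or a percentage of the number of records in the prefetch set (dynamic).

def hit_threshold(set_size, fraction=0.10, static_value=None):
    if static_value is not None:
        return static_value          # e.g. a fixed integer threshold
    return fraction * set_size       # e.g. 0.10 * 22 = 2.2 for |P_i| = 22
```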
After the prefetch hit data (for example H_i) have been redundantly written, the same data record may exist in several prefetch data record sets, so the prefetch entry of each redundantly written record can be chosen in several ways: it may be left unchanged (still P_i), it may be redirected to the new prefetch location (for example P), or it may be chosen according to the access pattern; different prefetch entry policies suit different environments. In a preferred embodiment of the invention, the method further comprises the following steps for each prefetch data record set in the cache: marking the first accessed data record of the set as a special record; and computing the access interval between the currently accessed data record and the previously accessed data record of the set, and marking the currently accessed record as a special record if that interval exceeds the set interval threshold. Then, for a prefetch data record set whose prefetch hit count is below the hit threshold, when the set is swapped out of the cache, the accessed data records of the set are redundantly written to the new storage region to form a new prefetch data record set together with the other data there, and at the same time the prefetch entries of the records of that set marked as special records may be changed to the new prefetch data record set. The access interval may be a time interval, an access-count interval, or a user-defined logical interval, or a combination of several interval types (a record may be marked as special when any one of the intervals is too long). The interval threshold may be set according to the type of access interval. In other embodiments, the prefetch entries of all redundantly written data records may be changed to the new prefetch data record set, or none of them may be changed.
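A rough sketch of this special-record marking under an access-count interval might look as follows (all names are hypothetical assumptions):

```python
# Sketch of special-record marking with an access-count interval: the first
# accessed record of a set is special, and so is any record whose access is
# more than interval_threshold accesses after the set's previous access.

class HitTracker:
    def __init__(self, interval_threshold):
        self.interval_threshold = interval_threshold
        self.last_access_no = None   # global access number of the set's last hit
        self.special_ids = set()     # ids whose prefetch entry may be redirected

    def on_hit(self, record_id, access_no):
        first_hit = self.last_access_no is None
        long_gap = (not first_hit and
                    access_no - self.last_access_no > self.interval_threshold)
        if first_hit or long_gap:
            self.special_ids.add(record_id)
        self.last_access_no = access_no
```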
As can be seen from the above, this method for improving cache prefetch data locality can run alongside normal cache access and cache replacement. In yet another embodiment of the invention, a cache access method that incorporates the above locality-improving method is provided. The cache access method works as follows. When a data record (say p_{i,j}) is accessed, three cases are distinguished. (1) If the access misses in the cache (p_{i,j} is not in the cache) and a free cache entry exists, say cache entry C_k, the prefetch data record set P_i: {p_{i,1}, ..., p_{i,n}} containing p_{i,j} is prefetched into C_k; after the prefetch completes, p_{i,j} is added to H_k, h_k is incremented, and p_{i,j} is marked as a special record. Here H_k is the set recording the prefetch hit data of cache entry C_k (its data structure may be, for example, a queue or a bitmap), and h_k records the prefetch hit count. If no free cache entry exists, the cache replacement step described below is performed, and P_i replaces the previous content of the selected cache entry C_k. (2) If the access hits in the cache (for example P_i has been prefetched into cache entry C_k) and h_k < T_i, then, if p_{i,j} is not yet in H_k, it is added to H_k and h_k is incremented; the interval I between the previous access to an element of P_i and the current access is computed, and if I > T_I, p_{i,j} is marked as a special record. Here T_i is the hit threshold and T_I the interval threshold; T_I may depend on the type of access interval, and both may be static or dynamic thresholds. (3) If the access hits in the cache and the prefetch hit count satisfies h_k ≥ T_i, h_k is simply incremented. Adding p_{i,j} to H_k may mean, for example, saving the logical or physical pointer of p_{i,j} in the cache into the data structure that records H_k (before adding, H_k is checked and the record is not added if it is already present).
When the content of cache entry C_k is to be replaced, changing from P_i to P_l (for example because the accessed data record belongs to P_l and P_l must be prefetched into C_k), two cases are distinguished. (1) If h_k < T_i, the data records in H_k are written into the redundant data record set P_w, the prefetch entries of the records in H_k that are marked as special records are changed to P_w, and then H_k is cleared and h_k is reset to 0. (2) If h_k ≥ T_i, H_k is simply cleared and h_k reset to 0. The storage location of P_w may be in the cache (for example in a write cache), in a storage medium of another level, or migrated from the write cache to a storage medium of another level. Besides the data records written redundantly as described above, newly produced data records may also be written into P_w; in that case a redundantly written record may have two copies (one in P_i and one in P_w). When a record with several copies is accessed, the copy in the cache is accessed preferentially; if it is not in the cache, it is brought into the cache by prefetching and then accessed. When the storage space of the redundant data record set P_w is exhausted, or the number of data records in it reaches its upper limit, all cache entries C_1~C_n are scanned. When cache entry C_k (whose cached set is P_i) is scanned, two cases are distinguished: (1) if h_k < T_i, the data records in H_k are written into P_w, the prefetch entries of the records in H_k marked as special records are changed to P_w, and finally H_k is cleared and h_k reset to 0; (2) if h_k ≥ T_i, nothing is done. Writing of new data records into P_w then stops, and free cache space is obtained to become a new redundant data record set P_w′.
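The swap-out handling just described, including the scan performed when the redundant record set reaches its record limit, might be sketched as follows (hypothetical names; prefetch_entry_of is an assumed per-record metadata map, not the patent's own structure):

```python
# Sketch of the swap-out handling: a cold set's hit records are copied into
# the redundant record set P_w and the special records are re-pointed to P_w;
# when P_w is full, all cache entries are scanned the same way and a fresh
# redundant record set is started.

def swap_out(entry, hit_threshold, p_w, prefetch_entry_of):
    if entry.hits < hit_threshold:
        for rid in entry.accessed_ids:
            p_w.append((rid, entry.read(rid)))        # redundant copy into P_w
            if rid in entry.special_ids:
                prefetch_entry_of[rid] = p_w          # redirect prefetch entry
    entry.accessed_ids.clear()
    entry.special_ids.clear()
    entry.hits = 0

def handle_pw_full(entries, hit_threshold, p_w, prefetch_entry_of, record_limit):
    if len(p_w) < record_limit:
        return p_w                                    # nothing to do yet
    for entry in entries:                             # scan all cache entries
        if entry.hits < hit_threshold:                # warm sets are left alone
            swap_out(entry, hit_threshold, p_w, prefetch_entry_of)
    return []                                         # start a new redundant set P_w'
```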
To better understand the above cache access method, its execution is illustrated with reference to Fig. 2. Assume in this example that the number of cache entries is 4, the cache replacement algorithm is least recently used (LRU), and the number j is used as the logical pointer of p_{i,j} within cache entry C_k (so p_{i,j} can be added to the set H_k by recording the number j). Let the hit threshold be T_i = 10% × |P_i|, let the interval threshold T_I be 4, let the upper limit on the number of records that may be written into the new prefetch record set be 10, and let the storage space of that new prefetch record set be located in the write cache. In the initial state the cache is empty, the prefetch hit data sets are all empty, and the prefetch hit counts are 0. Assume the system already contains the six prefetch data record sets P_1-P_6 shown in Table 1. Table 2 gives the access sequence 1-20 to be executed in this example; accesses 1-7 and 12-20 are reads and accesses 8-11 are writes. Access 1 means reading data record p_{1,2}, access 8 means writing data record p_{7,1}, and so on.
Table 1
P_1: p_{1,1} ~ p_{1,22}
P_2: p_{2,1} ~ p_{2,18}
P_3: p_{3,1} ~ p_{3,15}
P_4: p_{4,1} ~ p_{4,12}
P_5: p_{5,1} ~ p_{5,25}
P_6: p_{6,1} ~ p_{6,21}
Table 2
Sequence number   1        2        3        4        5        6        7        8        9        10
Type              Read     Read     Read     Read     Read     Read     Read     Write    Write    Write
Record            p_{1,2}  p_{2,5}  p_{3,1}  p_{4,9}  p_{2,6}  p_{2,7}  p_{1,3}  p_{7,1}  p_{7,2}  p_{7,3}

Sequence number   11       12       13       14       15       16       17       18       19       20
Type              Write    Read     Read     Read     Read     Read     Read     Read     Read     Read
Record            p_{7,4}  p_{5,10} p_{5,11} p_{5,10} p_{1,2}  p_{6,3}  p_{1,4}  p_{3,2}  p_{3,11} p_{4,12}
Continuing with reference to Fig. 2, the detailed process of executing the above accesses 1-20 is as follows:
1) Step 101 is executed: determine whether the accessed data record p_{1,2} is in the cache.
For example, the caching system can determine whether the accessed data record is in the cache by a lookup operation. Here p_{1,2} is not in the cache, so the prefetch data record set P_1 containing p_{1,2} must be prefetched into the cache. Suppose the system chooses to prefetch P_1 into cache entry C_1; execution continues at step 104.
2) Step 104 is executed: determine whether cache entry C_1 is empty.
That is, before prefetching P_1 into cache entry C_1, the caching system checks whether C_1 already holds data. Since cache entry C_1 is empty, execution continues at step 110.
3) Step 110 is executed: P_1 is prefetched into cache entry C_1.
4) Step 111 is executed: p_{1,2} is added to the prefetch hit set H_1 of C_1; then execution returns to step 101 and the next access is processed.
Specifically, the logical pointer 2 of p_{1,2} is added to the prefetch hit data set H_1 corresponding to C_1, p_{1,2} is marked as a special record, and the prefetch hit count h_1 is incremented.
Accesses 2-4 are processed in the same way as steps 1)-4). After they are processed, the prefetch data record set stored in each cache entry, the corresponding prefetch hit data set, and the corresponding prefetch hit count are as shown in Table 3 below (an asterisk marks a special record):
Table 3
Cache entry   Prefetch data record set   Prefetch hit data set   Prefetch hit count
C_1           P_1                        2*                      1
C_2           P_2                        5*                      1
C_3           P_3                        1*                      1
C_4           P_4                        9*                      1
With reference to Fig. 2 and the access sequence in Table 2, access 5 is processed next:
5) Step 101 is executed: determine whether p_{2,6} is in the cache. Here p_{2,6} is in the cache, so execution continues at step 102.
6) Step 102 is executed: determine whether the prefetch hit count h_2 is below the hit threshold. If so, execution continues at step 103; otherwise execution returns to step 101 and the next access is processed.
At this point h_2 is 1, which is less than the threshold T_2 = 10% × |P_2| = 10% × 18 = 1.8.
7) Step 103 is executed: p_{2,6} is added to the prefetch hit set H_2 of C_2; then execution returns to step 101 and the next access is processed.
Step 103 also computes the access interval I described above. Here the interval I since the previous access to C_2 is 3, which is less than the interval threshold T_I, so the logical pointer 6 of p_{2,6} is recorded in the prefetch hit data set H_2 corresponding to C_2 and the prefetch hit count h_2 is incremented.
Accesses 6 and 7 are processed in the same way as steps 5)-7), with two differences. For access 6, h_2 is already greater than the hit threshold T_2 when step 102 is executed, so step 103 is not executed. For access 7, when step 103 is executed the interval I since the previous access to C_1 is 6, which is greater than the interval threshold T_I, so in addition to adding the logical pointer 3 of p_{1,3} to H_1 and incrementing h_1, the record is also marked as a special record.
Next, accesses 8-11 are processed. For these write accesses, the caching system generates a new prefetch data record set P_7 and writes p_{7,1}~p_{7,4} into it (in this example the storage location of P_7 is assumed to be in the write cache P_w). At this point the prefetch data record set stored in each cache entry, the corresponding prefetch hit data set, and the corresponding prefetch hit count are as shown in Table 4 below (in the original table an underline marks a special record and a hyphen marks an empty field):
Table 4
(Table 4 appears only as an image in the source document and is not reproduced here.)
Access 12, reading data record p_{5,10}, is processed next:
8) Step 101 is executed: determine whether p_{5,10} is in the cache. Here p_{5,10} is not in the cache, and the system chooses to prefetch the prefetch data record set P_5 containing p_{5,10} into cache entry C_1; execution continues at step 104.
9) Step 104 is executed: determine whether cache entry C_1 is empty. If it were empty, execution would go to step 110; since it is not empty, the content of cache entry C_1 must be swapped out of the cache, and execution continues at step 105.
10) Step 105 is executed: determine whether the prefetch hit count h_1 is below the hit threshold. If so, step 106 is executed; otherwise execution continues at step 110.
Specifically, h_1 is 2 at this point, which is less than the hit threshold T_1 = 10% × |P_1| = 10% × 22 = 2.2. This indicates that the data locality of the prefetch data record set P_1 currently in C_1 is poor and needs improvement. Had h_1 not been below the hit threshold, the locality of this data would not have needed improvement; no extra processing of the data currently in cache entry C_1 would be required, and P_5 would simply replace it in C_1.
11) Step 106 is executed: the data records in H_1 are written into the set P_7.
Specifically, the logical pointers 2 and 3 are read in turn from the prefetch hit data set H_1 corresponding to cache entry C_1, the 2nd and 3rd records p_{1,2} and p_{1,3} are accordingly read from C_1 and written into P_7 (becoming p_{7,5} and p_{7,6} in P_7), and the prefetch entries of p_{1,2} and p_{1,3} are changed to P_7 (in this embodiment the redundantly written data records and newly produced records are written into the same prefetch data record set). At this point p_{1,2}, p_{1,3} and p_{7,5}, p_{7,6} are copies of each other; because the latter are in the write cache P_w while P_1 has been swapped out of the cache, p_{7,5} and p_{7,6} are accessed preferentially.
12) Step 107 is executed: the prefetch hit set H_1 is cleared and h_1 is reset to 0.
13) Step 108 is executed: determine whether the write set P_7 is full. If not, execution continues at step 110; if it is full, execution continues at step 109.
Here P_7 holds 6 data records, fewer than 10, so the upper limit has not been reached.
14) Step 110 is executed: P_5 is placed into cache entry C_1.
15) Step 111 is executed: p_{5,10} is added to the prefetch hit set H_1 of C_1; as the first accessed record of the set it is marked as a special record, and h_1 is incremented.
Accesses 13 and 14 are processed in the same way as steps 5)-7), except that for access 14, when step 103 tries to add the logical pointer 10 of p_{5,10} to H_1, that number is found to be recorded already (the same data record was accessed in access 12), so the remaining operations of step 103 are skipped. After they are processed, the prefetch data record set stored in each cache entry, the corresponding prefetch hit data set, and the corresponding prefetch hit count are as shown in Table 5 below (in the original table an underline marks a special record and a hyphen marks an empty field):
Table 5
(Table 5 appears only as an image in the source document and is not reproduced here.)
For access 15, because p_{1,2} and p_{1,3} have copies p_{7,5} and p_{7,6} in P_7 (found by a cache lookup), the access reads the data record p_{7,5} from P_7 and no other operation is performed.
Accesses 16-18 are processed in the same way as steps 8)-15). During this processing, according to the LRU cache replacement algorithm, P_3, P_4, and P_2 are swapped out of the cache in turn; p_{3,1} and p_{4,9} are written to P_7, and because p_{3,1} and p_{4,9} are marked as special records their prefetch entries are changed to P_7. When P_2 is swapped out, its prefetch hit count is greater than the threshold T_2, so its prefetch hit data are not written to P_7.
Access 19 is processed in the same way as steps 5)-7): p_{3,11} hits in cache entry C_2, the logical pointer 11 of p_{3,11} is added to the prefetch hit data set H_2 corresponding to C_2, and the prefetch hit count h_2 is incremented.
Access 20 is processed in the same way as steps 8)-15). During this processing, according to the LRU cache replacement algorithm, P_5 is swapped out of the cache; p_{5,10} and p_{5,11} are written to P_7, and because p_{5,10} is marked as a special record its prefetch entry is changed to P_7. The difference is that when step 108 is executed, the number of data records in P_7 is 10 and the upper limit has been reached, so step 109 must be executed. Specifically, cache entries C_1~C_4 are scanned, and h_1, h_3, and h_4 are found not to meet the threshold requirement, so p_{6,3}, p_{1,4}, and p_{4,12} are written to P_7 (h_2 meets the threshold requirement, so p_{3,2} and p_{3,11} are not written); because p_{6,3}, p_{1,4}, and p_{4,12} are marked as special records, their prefetch entries are changed to P_7. Finally the prefetch hit data sets H_1, H_3, and H_4 corresponding to C_1, C_3, and C_4 are cleared and h_1, h_3, and h_4 are reset to 0. After step 109 finishes, the new prefetch data record set P_7: {p_{7,1}, ..., p_{7,13}} contains 13 data records, of which four are newly written records and nine are copy records. Writing to P_7 then stops; when later accesses produce new data or require parts of data records to be written redundantly as described above, a new storage region is allocated from free cache space to store a new redundant data record set P_8, and so on.
After all accesses given in Table 2 have been processed, the prefetch data record set stored in each cache entry, the corresponding prefetch hit data set, and the corresponding prefetch hit count are as shown in Table 6 below (in the original table an underline marks a special record and a hyphen marks an empty field; P_7 was persisted to disk before being swapped out of the write cache):
Table 6
(Table 6 appears only as an image in the source document and is not reproduced here.)
The inventors also tested the above method in an index system for content-addressed storage, using backup workloads from a real environment. The test results show that the method reduced the number of index prefetches by 17.8%~56%, and improved read bandwidth by 8%~24% and write bandwidth by 2%~6%. In the same index system, a test using two weeks of data-synchronization workload showed that the number of prefetches dropped by 96%.
Although the present invention has been described through preferred embodiments, it is not limited to the embodiments described here, and also covers various changes and variations made without departing from the invention.

Claims (11)

1. A method for improving cache prefetch data locality, the method comprising:
counting the prefetch hit count of each prefetch data record set in the cache, the prefetch hit count being the total number of data records of that set that have been accessed; and
for a prefetch data record set whose prefetch hit count is below a set hit threshold, when that set is swapped out of the cache, writing the accessed data records of the set to a new storage region, where they form a new prefetch data record set together with other data in that region.
2. The method according to claim 1, further comprising:
for each prefetch data record set in the cache:
marking the first accessed data record of the set as a special record;
computing the access interval between the currently accessed data record and the previously accessed data record of the set and, if that interval exceeds a set interval threshold, marking the currently accessed data record as a special record; and
for a prefetch data record set whose prefetch hit count is below the hit threshold, when that set is swapped out of the cache, changing the prefetch entry of every data record marked as a special record to the new prefetch data record set.
3. The method according to claim 2, wherein the access interval is a time interval, an access-count interval, a user-defined logical interval, or a combination of these.
4. The method according to claim 1, further comprising, for a prefetch data record set whose prefetch hit count is below the hit threshold, changing the prefetch entries of all accessed data records of the set to the new prefetch data record set when that set is swapped out of the cache.
5. A system for improving cache prefetch data locality, the system comprising:
a device for counting the prefetch hit count of each prefetch data record set in the cache, the prefetch hit count being the total number of data records of that set that have been accessed; and
a device for, for a prefetch data record set whose prefetch hit count is below a set hit threshold, writing the accessed data records of that set to a new storage region when that set is swapped out of the cache, where they form a new prefetch data record set together with other data in that region.
6. The system according to claim 5, further comprising a marking device and a modification device,
wherein the marking device, for each prefetch data record set in the cache:
marks the first accessed data record of the set as a special record; and
computes the access interval between the currently accessed data record and the previously accessed data record of the set and, if that interval exceeds a set interval threshold, marks the currently accessed data record as a special record;
and wherein the modification device, for a prefetch data record set whose prefetch hit count is below the hit threshold, changes the prefetch entry of every data record marked as a special record to the new prefetch data record set when that set is swapped out of the cache.
7. The system according to claim 5, further comprising a modification device that, for a prefetch data record set whose prefetch hit count is below the hit threshold, changes the prefetch entries of all accessed data records of the set to the new prefetch data record set when that set is swapped out of the cache.
8. A cache access method, the method comprising:
for a data record to be accessed, if the access hits in the cache, incrementing by 1 the prefetch hit count of the prefetch data record set in the cache that contains the data record to be accessed;
if the access misses in the cache and a free cache entry exists, prefetching the prefetch data record set that contains the data record to be accessed into that cache entry and incrementing the prefetch hit count of that prefetch data record set by 1; and
if the access misses in the cache and no free cache entry exists, performing the following:
determining whether the prefetch hit count of the prefetch data record set in a selected cache entry is below a set hit threshold and, if so, writing the accessed data records of that set to a new storage region, where they form a new prefetch data record set together with other data in that region; and
prefetching the prefetch data record set that contains the data record to be accessed into the selected cache entry and incrementing the prefetch hit count of that prefetch data record set by 1.
9. The method according to claim 8, further comprising:
for each prefetch data record set in the cache:
marking the first accessed data record of the set as a special record;
computing the access interval between the currently accessed data record and the previously accessed data record of the set and, if that interval exceeds a set interval threshold, marking the currently accessed data record as a special record; and
when a prefetch data record set whose prefetch hit count is below the hit threshold is swapped out of the cache, changing the prefetch entry of every data record marked as a special record to the new prefetch data record set.
10. The method according to claim 8, further comprising, when a prefetch data record set whose prefetch hit count is below the hit threshold is swapped out of the cache, changing the prefetch entries of all accessed data records of the set to the new prefetch data record set.
11. The method according to claim 8, 9 or 10, further comprising:
when the number of data records in the new prefetch data record set reaches a set threshold, writing, for each prefetch data record set in the cache whose prefetch hit count is below the set hit threshold, the accessed data records of that set into the new prefetch data record set; and
then stopping writing to the new prefetch data record set and obtaining free cache space to store another new prefetch data record set.
CN201310298246.7A 2013-07-16 2013-07-16 Method and system for improving cache prefetch data locality, and cache access method Active CN103383666B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310298246.7A CN103383666B (en) 2013-07-16 2013-07-16 Method and system for improving cache prefetch data locality, and cache access method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310298246.7A CN103383666B (en) 2013-07-16 2013-07-16 Improve method and system and the cache access method of cache prefetching data locality

Publications (2)

Publication Number Publication Date
CN103383666A true CN103383666A (en) 2013-11-06
CN103383666B CN103383666B (en) 2016-12-28

Family

ID=49491463

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310298246.7A Active CN103383666B (en) 2013-07-16 2013-07-16 Method and system for improving cache prefetch data locality, and cache access method

Country Status (1)

Country Link
CN (1) CN103383666B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090240894A1 (en) * 2002-08-28 2009-09-24 Intel Corporation Method and aparatus for the synchronization of distributed caches
CN1499382A (en) * 2002-11-05 2004-05-26 华为技术有限公司 Method for implementing cache in high efficiency in redundancy array of inexpensive discs
CN102110073A (en) * 2011-02-01 2011-06-29 中国科学院计算技术研究所 Replacement device and method for chip shared cache and corresponding processor

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015100653A1 (en) * 2013-12-31 2015-07-09 华为技术有限公司 Data caching method, device and system
CN104063330A (en) * 2014-06-25 2014-09-24 华为技术有限公司 Data prefetching method and device
CN104063330B (en) * 2014-06-25 2017-04-26 华为技术有限公司 Data prefetching method and device
CN107463509A (en) * 2016-06-05 2017-12-12 华为技术有限公司 Buffer memory management method, cache controller and computer system
CN107463509B (en) * 2016-06-05 2020-12-15 华为技术有限公司 Cache management method, cache controller and computer system
CN107168648A (en) * 2017-05-04 2017-09-15 广东欧珀移动通信有限公司 File memory method, device and terminal
CN108287795A (en) * 2018-01-16 2018-07-17 宿州新材云计算服务有限公司 A kind of new types of processors buffer replacing method
CN108287795B (en) * 2018-01-16 2022-06-21 安徽蔻享数字科技有限公司 Processor cache replacement method
CN109491873A (en) * 2018-11-05 2019-03-19 网易无尾熊(杭州)科技有限公司 It caches monitoring method, medium, device and calculates equipment
CN116107926A (en) * 2023-02-03 2023-05-12 摩尔线程智能科技(北京)有限责任公司 Cache replacement policy management method, device, equipment, medium and program product
CN116107926B (en) * 2023-02-03 2024-01-23 摩尔线程智能科技(北京)有限责任公司 Cache replacement policy management method, device, equipment, medium and program product

Also Published As

Publication number Publication date
CN103383666B (en) 2016-12-28

Similar Documents

Publication Publication Date Title
US10303596B2 (en) Read-write control method for memory, and corresponding memory and server
CN103383666A (en) Method and system for improving cache prefetch data locality and cache assess method
CN103136121B (en) Cache management method for solid-state disc
CN105574104B (en) A kind of LogStructure storage system and its method for writing data based on ObjectStore
CN104115133B (en) For method, system and the equipment of the Data Migration for being combined non-volatile memory device
CN102841850B (en) Reduce the method and system that solid state disk write is amplified
CN107368436B (en) Flash memory cold and hot data separated storage method combined with address mapping table
CN104025059B (en) For the method and system that the space of data storage memory is regained
KR101678868B1 (en) Apparatus for flash address translation apparatus and method thereof
US10346096B1 (en) Shingled magnetic recording trim operation
CN103631536B (en) A kind of method utilizing the invalid data of SSD to optimize RAID5/6 write performance
KR101297442B1 (en) Nand flash memory including demand-based flash translation layer considering spatial locality
US9477416B2 (en) Device and method of controlling disk cache by identifying cached data using metadata
JP2007220107A (en) Apparatus and method for managing mapping information of nonvolatile memory
CN102609335B (en) Device and method for protecting metadata by copy-on-write
US8151053B2 (en) Hierarchical storage control apparatus, hierarchical storage control system, hierarchical storage control method, and program for controlling storage apparatus having hierarchical structure
US20150052310A1 (en) Cache device and control method thereof
US10185660B2 (en) System and method for automated data organization in a storage system
CN107402890A (en) A kind of data processing method and system based on Solid-state disc array and caching
US20130282977A1 (en) Cache control device, cache control method, and program thereof
CN108334457B (en) IO processing method and device
KR100987251B1 (en) Flash memory management method and apparatus for merge operation reduction in a fast algorithm base ftl
CN109478163A (en) For identifying the system and method co-pending of memory access request at cache entries
CN111008158B (en) Flash memory cache management method based on page reconstruction and data temperature identification
US10579541B2 (en) Control device, storage system and method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant