CN100452007C - Physical region descriptor prefetching method for a direct memory access processing unit - Google Patents

Physical region descriptor prefetching method for a direct memory access processing unit

Info

Publication number
CN100452007C
CN100452007C, CNB2007101110352A, CN200710111035A
Authority
CN
China
Prior art keywords
physical region descriptor
descriptor entry
physical region
entry
memory access
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CNB2007101110352A
Other languages
Chinese (zh)
Other versions
CN101059787A (en)
Inventor
高鹏
黄宇
李德建
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Via Technologies Inc
Original Assignee
Via Technologies Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Via Technologies Inc filed Critical Via Technologies Inc
Priority to CNB2007101110352A priority Critical patent/CN100452007C/en
Publication of CN101059787A publication Critical patent/CN101059787A/en
Application granted granted Critical
Publication of CN100452007C publication Critical patent/CN100452007C/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention provides a method for prefetching physical region descriptors (PRDs) in a direct memory access (DMA) processing unit. When a DMA request is received, the data-holding state of a queue is examined to decide whether to prefetch PRD entries, the queue being the one that buffers the data moved by the DMA operation. If prefetching is indicated, at least one PRD entry is read from a PRD table and stored in a cache; if not, the current PRD entry is read from the cache and the DMA operation is performed according to that entry. By changing the moment at which PRD entries are prefetched, the method avoids queue overflow and underflow and thereby improves the data throughput of the DMA processing unit.

Description

Physical region descriptor prefetching method for a direct memory access processing unit
Technical field
The present invention relates to direct memory access (Direct Memory Access, DMA) management, and in particular to a physical region descriptor (Physical Region Descriptor, PRD) prefetching method for a direct memory access processing unit.
Background technology
In computer architecture, direct memory access (Direct Memory Access, DMA) allows dedicated hardware inside a computer to access system memory independently, without involving the central processing unit (CPU). A DMA transfer can copy regions of memory between devices while the CPU performs other work, improving overall system performance.
A physical region descriptor entry (Physical Region Descriptor entry, PRD entry) is stored in a physical region descriptor table (PRD table) in system memory and defines a particular memory block, such as the block's start address and size. Typically, before performing a DMA operation, the DMA processing unit must read a PRD entry from the PRD table to obtain the start address and size of the memory block to be accessed. The DMA processing unit then performs the access on the memory block corresponding to that PRD entry, either writing data into the block or reading the required data from it.

Fig. 1 shows a schematic diagram of an existing DMA processing unit. As shown, the DMA processing unit 300 comprises an A interface 310, a B interface 320, and a cache (memory cache) 350. The A interface 310 and the B interface 320 access the A bus and the B bus, respectively. The cache 350 stores the PRD entries prefetched by the A interface over the A bus. The DMA processing unit 300 further comprises a queue A 331 and a queue B 332 (for example, FIFOs). In a DMA read (Out) operation, the A interface 310 reads data from the memory 340 over the A bus and deposits the data in queue A 331; the B interface 320 reads data from queue A 331 and writes it onto the B bus, delivering the data to a corresponding peripheral device such as a SATA or USB device. In a DMA write (In) operation, the B interface 320 reads data from the B bus and writes it into queue B 332; the A interface 310 reads data from queue B 332 and writes it onto the A bus, so that the data is written to the memory 340.

In some cases the B bus has fewer bus masters while the A bus has more. To achieve the best data throughput, the B bus should not sit idle; ideally it is always transferring data. In other words, during a DMA read operation queue A 331 should never become empty, and during a DMA write operation queue B 332 should never become full.
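As a concrete illustration of the PRD entry described above, the following sketch models an entry as a start address plus a byte count, with an end-of-table flag; the field names and the three-block example are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class PrdEntry:
    base_addr: int              # start address of the memory block
    byte_count: int             # size of the memory block in bytes
    end_of_table: bool = False  # marks the last entry of the PRD table

# A PRD table describing three scattered memory blocks, as consumed by a
# single scatter-gather DMA transfer.
prd_table = [
    PrdEntry(0x1000, 512),
    PrdEntry(0x8000, 4096),
    PrdEntry(0x2000, 1024, end_of_table=True),
]

# Total bytes the transfer would move across all described blocks.
total = sum(e.byte_count for e in prd_table)
```

Each DMA operation in the flows below corresponds to moving one such entry's worth of data.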
Typically, the DMA processing unit 300 adopts a scatter-gather mechanism to reduce data-copying steps. Scatter-gather means that in a single DMA operation the DMA processing unit 300 may transfer data to or from multiple memory blocks, each defined by its own physical region descriptor. In other words, the DMA processing unit 300 can collect several simple DMA requests and then perform them together as one DMA operation. In a scatter-gather DMA processing unit, a PRD prefetch mechanism can improve the unit's efficiency and throughput.

Fig. 2 shows a PRD prefetching method of an existing DMA processing unit. First, in step S210, the A interface 310 fetches a PRD entry from the PRD table and stores it in the cache 350 of the DMA processing unit. In step S220 it is determined whether the end of the PRD table has been reached, i.e., whether the fetched PRD entry is the last entry in the PRD table. If not, step S230 determines whether the cache 350 is full. If the cache 350 is not full (No in step S230), the flow returns to step S210 and another PRD entry is fetched from the PRD table and stored in the cache 350. If the fetched PRD entry is the last entry in the PRD table (Yes in step S220) or the cache 350 is full (Yes in step S230), step S240 is performed: a PRD entry is read from the cache 350, and a DMA operation is performed according to that entry (step S250). Then, in step S260, it is determined whether the cache 350 is empty. If the cache 350 is not empty (No in step S260), the flow returns to step S240, another PRD entry is read from the cache 350, and, in step S250, a DMA operation is performed according to it. If the cache 350 is empty (Yes in step S260), step S270 determines whether the end of the PRD table has been reached; if not, the flow returns to step S210 and a PRD entry is fetched from the PRD table; if so, the whole flow ends.
In the existing prefetch mechanism described above, once the DMA processing unit 300 is started, it prefetches PRD entries from the PRD table and stores them in its cache 350 until the cache 350 is full. Only after the cache 350 is full does the DMA processing unit 300 begin performing DMA operations according to the cached entries, and only after all entries in the cache 350 have been processed does it prefetch PRD entries from the PRD table again.
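This fill-then-drain behavior can be sketched as a software simulation; the code is an interpretation of steps S210-S270, not the hardware itself:

```python
from collections import deque

def prior_art_prefetch(prd_table, cache_size, do_dma):
    """Fig. 2 flow (S210-S270): fill the cache with PRD entries until it
    is full or the table ends, then drain it completely -- one DMA
    operation per entry -- before prefetching again."""
    cache = deque()
    i = 0  # next PRD table index to fetch
    while i < len(prd_table):
        # Fill phase (S210-S230): prefetch until cache full or table end.
        while i < len(prd_table) and len(cache) < cache_size:
            cache.append(prd_table[i])
            i += 1
        # Drain phase (S240-S260): one DMA operation per cached entry.
        while cache:
            do_dma(cache.popleft())

# With a 2-entry cache, entries are fetched in pairs and processed in order.
order = []
prior_art_prefetch(list(range(5)), cache_size=2, do_dma=order.append)
```

Note that no DMA data moves during a fill phase, which is exactly where the underflow/overflow hazard discussed next arises.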
However, in the DMA read operation of Fig. 2, queue A 331 may well become empty while the DMA processing unit 300 is prefetching PRD entries; in the DMA write operation, queue B 332 may well become full while the DMA processing unit 300 is prefetching PRD entries. Specifically, during a DMA read operation, once the DMA operation on the A bus for the last PRD entry in the cache 350 has finished, the subsequent DMA operation cannot begin until the PRD prefetch completes. If the prefetch takes a long time while the B interface 320 keeps reading from queue A 331, queue A 331 will underflow. Similarly, during a DMA write operation, once the DMA operation on the A bus for the last PRD entry in the cache 350 has finished, the subsequent DMA operation cannot begin until the PRD prefetch completes; if the prefetch takes a long time while the B interface 320 keeps writing data into queue B 332, queue B 332 will overflow. To avoid queue underflow or overflow, either of the above situations forces the B bus into an idle state, and the data throughput of the DMA processing unit 300 therefore drops.
Summary of the invention
The object of the present invention is to provide an efficient PRD entry prefetching method that increases the data throughput of a direct memory access (DMA) processing unit.
The invention provides a PRD prefetching method for a DMA processing unit. When a direct memory access request is received, the data-holding state of a queue is examined to decide whether to prefetch PRD entries, the queue being the one that buffers the data of the corresponding DMA operation. If prefetching is indicated, at least one PRD entry is read from a PRD table and stored in a cache. If not, a current PRD entry is read from the cache, and the DMA operation is performed according to that current PRD entry.
The PRD prefetching method of the invention changes the moment at which PRD entries are prefetched so as to avoid queue underflow and overflow, thereby increasing the data throughput of the DMA processing unit.
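The timing rule just summarized can be sketched as a predicate; the `threshold` parameter modeling the "nearly full"/"nearly empty" margin is an assumption for illustration, not a value fixed by the patent:

```python
def should_prefetch(is_read, queue_level, queue_capacity, threshold):
    """Invention's timing rule: during a DMA read, prefetch PRD entries
    only while queue A is full or nearly full (the B interface still has
    plenty to drain); during a DMA write, only while queue B is empty or
    nearly empty (the A interface has nothing to flush to memory)."""
    if is_read:
        return queue_level >= queue_capacity - threshold  # queue A full / nearly full
    return queue_level <= threshold                       # queue B empty / nearly empty
```

For example, with a 16-deep queue and a margin of 2, a read-side prefetch is allowed only once the queue holds 14 or more units.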
Description of drawings
Fig. 1 is a schematic diagram of a prior-art DMA system.
Fig. 2 shows a PRD prefetching method of a prior-art DMA processing unit.
Fig. 3 is a flowchart of a PRD prefetching method of a DMA processing unit according to an embodiment of the invention.
Fig. 4 is a flowchart of the PRD prefetching method of the DMA processing unit according to an embodiment of the invention for a DMA read operation.
Fig. 5 is a flowchart of the PRD prefetching method of the DMA processing unit according to an embodiment of the invention for a DMA write operation.
Embodiment
The above and other objects and features of the present invention will become apparent from the following description, taken in conjunction with the accompanying drawings, which illustrate an example.
The PRD prefetching method according to an embodiment of the invention is applicable to a DMA processing unit with an architecture similar to that of Fig. 1. The DMA processing unit comprises a first interface (A interface) and a second interface (B interface) for accessing a first bus (A bus) and a second bus (B bus), respectively. The DMA processing unit is coupled to a memory through the first bus, and comprises a first queue (queue A) and a second queue (queue B). In a DMA read operation, the first interface reads data from the memory over the first bus and writes the data into the first queue; the second interface reads data from the first queue and writes it onto the second bus. In a DMA write operation, the second interface reads data from the second bus and writes it into the second queue; the first interface reads data from the second queue and writes it onto the first bus, so that the data is written to the memory. It should be noted that the DMA processing unit has at least one cache for storing the PRD entries prefetched from the PRD table.
Fig. 3 shows the PRD prefetching method of a DMA processing unit according to an embodiment of the invention.
In step S410, the DMA request determines whether the DMA processing unit is to perform a DMA write operation or a DMA read operation, i.e., whether the received DMA request is a DMA read request or a DMA write request. For a DMA read operation, step S420 determines whether the first queue is full or nearly full; in other words, the data-holding state of the first queue is examined to decide whether to prefetch PRD entries. If the first queue is full or nearly full (Yes in step S420), PRD entries may be prefetched: in step S430 a PRD entry is read from the PRD table and stored in the cache, after which the flow returns to step S420. If the first queue is neither full nor nearly full (No in step S420), then in step S440 a PRD entry is taken from the cache and a DMA operation is performed according to it, after which the flow returns to step S420. For a DMA write operation, step S450 determines whether the second queue is empty or nearly empty, i.e., the data-holding state of the second queue is examined to decide whether to prefetch PRD entries. If the second queue is neither empty nor nearly empty (No in step S450), then in step S460 a PRD entry is taken from the cache and a DMA operation is performed according to it, after which the flow returns to step S450. If the second queue is empty or nearly empty (Yes in step S450), then in step S470 a PRD entry is read from the PRD table and stored in the cache, after which the flow returns to step S450.
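The read path of Fig. 3 can be simulated in software to show how prefetching is batched while queue A stays full; the queue and cache sizes, the representation of block sizes, and the fall-back prefetch when the cache is empty (cf. S514 in Fig. 4) are illustrative assumptions:

```python
from collections import deque

def simulate_dma_read(block_sizes, cache_cap, qa_cap, drain_per_step):
    """Fig. 3, read path: prefetch a PRD entry while queue A is full
    (S430) and the cache has room; otherwise pop an entry and let its
    DMA read refill queue A (S440). An empty cache forces a fetch."""
    cache, qa_level, fetched, done, trace = deque(), 0, 0, 0, []
    while done < len(block_sizes):
        must_fetch = not cache and fetched < len(block_sizes)
        if must_fetch or (qa_level >= qa_cap
                          and fetched < len(block_sizes)
                          and len(cache) < cache_cap):
            cache.append(block_sizes[fetched])  # S430: prefetch a PRD entry
            fetched += 1
            trace.append("prefetch")
        else:
            # S440: DMA read of the popped entry's block refills queue A.
            qa_level = min(qa_cap, qa_level + cache.popleft())
            done += 1
            trace.append("dma")
        qa_level = max(0, qa_level - drain_per_step)  # B interface drains queue A
    return trace

# With a stalled consumer (drain 0), queue A stays full after the first
# DMA, so subsequent prefetches are batched while the queue is full.
trace = simulate_dma_read([4, 4, 4, 4], cache_cap=2, qa_cap=4, drain_per_step=0)
```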
Next, the details of the DMA read operation and the DMA write operation are described in turn.
Fig. 4 shows the PRD prefetching method of the DMA processing unit according to an embodiment of the invention for a DMA read operation.
When the DMA processing unit performs a DMA read operation, in step S502 a PRD entry (fetched-entry) is read from the PRD table and stored in the cache. In step S504 it is determined whether the first queue has free space of a first predetermined size. It should be noted that step S504 determines whether the first queue is full or nearly full. The first predetermined size may be set to the maximum single-transfer burst length supported by the second interface, or flexibly to another value as required. Typically, the burst length supported by the second interface is larger than the memory block corresponding to a single PRD entry. If the first queue has free space of the first predetermined size (Yes in S504), the flow proceeds to step S510. If it does not (No in S504), step S506 determines whether the fetched-entry read in step S502 is the last entry of the PRD table. If it is (Yes in S506), the flow proceeds to step S510. If it is not (No in S506), step S508 determines whether the cache is full. If the cache is not full (No in S508), the flow returns to step S502. If the cache is full (Yes in S508), then in step S510 it is determined whether the DMA operation for the current PRD entry (current-entry) has finished, i.e., whether the first interface has finished reading the data of the memory block corresponding to the current-entry. If it has not finished (No in S510), the flow proceeds to step S518. If it has finished (Yes in S510), step S512 determines whether the current-entry is the last entry in the whole PRD table. If it is (Yes in S512), the whole flow ends.

If the current-entry is not the last entry in the whole PRD table (No in S512), step S514 determines whether the cache is empty. If the cache is empty (Yes in S514), the flow returns to step S502. If the cache is not empty (No in S514), then in step S516 a PRD entry (popped-entry) is taken from the cache. Then, in step S518, it is determined whether the first queue has free space of the first predetermined size. If it does (Yes in S518), step S520 performs a DMA operation according to the obtained popped-entry, and the flow returns to the determination of step S510. If it does not (No in S518), step S522 determines whether the cache is full. If the cache is not full (No in S522), the flow returns to step S502. If the cache is full (Yes in S522), step S524 again determines whether the first queue has free space of the first predetermined size. If it does not (No in S524), the determination of step S524 is repeated until the first queue has free space of the first predetermined size, i.e., the flow waits for the second interface to move the data in the first queue onto the second bus. Once the first queue has free space of the first predetermined size (Yes in S524), step S520 performs a DMA operation according to the obtained PRD entry: the first interface reads, over the first bus, the data of the memory block corresponding to the popped-entry, and the flow returns to the determination of step S510. At this point, the current-entry referred to in step S510 is the popped-entry taken from the cache in step S516. In particular, if no popped-entry has yet been taken from the cache when step S510 is executed, i.e., there is no current-entry yet, the DMA operation for the current-entry is regarded as finished. Likewise, if there is no current-entry yet when step S512 is executed, the current-entry is regarded as not being the last entry of the whole PRD table.
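The free-space test of steps S504/S518/S524 can be sketched as a simple predicate; treating the first predetermined size as exactly the burst length follows the default choice stated above:

```python
def queue_a_ready_for_dma(qa_level, qa_cap, burst_len):
    """Fig. 4 (S504/S518/S524): the DMA read for the next PRD entry may
    proceed only when queue A has at least one burst worth of free
    space; otherwise the A bus is better spent prefetching PRD entries.
    burst_len is interface B's maximum single-transfer length."""
    return qa_cap - qa_level >= burst_len
```

For instance, with a 16-deep queue and a burst length of 8, the DMA read is held off once the queue holds more than 8 units.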
Fig. 5 shows the PRD prefetching method of the DMA processing unit according to an embodiment of the invention for a DMA write operation.
When the DMA processing unit performs a DMA write operation, in step S602 a PRD entry (fetched-entry) is read from the PRD table and stored in the cache. In step S604 it is determined whether the second queue holds data of a second predetermined size. It should be noted that step S604 determines whether the second queue is empty or nearly empty. The second predetermined size may be set to the maximum single-transfer burst length supported by the first interface, or flexibly to another value as required. Typically, the burst length supported by the first interface is larger than the memory block corresponding to a single PRD entry. If the second queue holds data of the second predetermined size (Yes in S604), the flow proceeds to step S610. If it does not (No in S604), step S606 determines whether the fetched-entry is the last entry of the PRD table. If it is (Yes in S606), the flow proceeds to step S610. If it is not (No in S606), step S608 determines whether the cache is full. If the cache is not full (No in S608), the flow returns to step S602. If the cache is full (Yes in S608), then in step S610 it is determined whether the DMA operation for the current PRD entry (current-entry) has finished, i.e., whether the first interface has filled the memory block corresponding to the current-entry with data. If it has not finished (No in S610), the flow proceeds to step S618. If it has finished (Yes in S610), step S612 determines whether the current-entry is the last entry in the whole PRD table. If it is (Yes in S612), the PRD table has no further PRD entries to read, and the whole flow ends.

If the current entry is not the last entry in the whole PRD table (No in S612), step S614 determines whether the cache is empty. If the cache is empty (Yes in S614), the flow returns to step S602 to read another PRD entry. If the cache is not empty (No in S614), then in step S616 a PRD entry (popped-entry) is taken from the cache. In step S618 it is determined whether the second queue holds data of the second predetermined size. If it does (Yes in S618), step S620 performs a DMA operation according to the obtained popped-entry: the first interface writes the data of the second queue, over the first bus, into the memory block corresponding to the popped-entry, and the flow returns to the determination of step S610. If it does not (No in S618), step S622 determines whether the cache is full. If the cache is not full (No in S622), the flow returns to step S602. If the cache is full (Yes in S622), step S624 again determines whether the second queue holds data of the second predetermined size. If it does not (No in S624), the determination of step S624 is repeated, i.e., the flow waits for the second interface to deposit the data arriving on the second bus into the second queue. Once the second queue holds data of the second predetermined size (Yes in S624), step S620 performs a DMA operation according to the obtained PRD entry, and the flow returns to the determination of step S610.
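The mirrored test on the write side (steps S604/S618/S624) checks buffered data rather than free space; again, taking the second predetermined size to be the burst length follows the default stated above:

```python
def queue_b_ready_for_dma(qb_level, burst_len):
    """Fig. 5 (S604/S618/S624): the DMA write of the next PRD entry may
    proceed only when queue B already buffers at least one burst worth
    of data (interface A's maximum single-transfer length); with less
    buffered, the A bus prefetches PRD entries instead."""
    return qb_level >= burst_len
```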
In this embodiment, during a DMA read operation the PRD prefetch is performed only when the first queue is full or nearly full, and during a DMA write operation only when the second queue is empty or nearly empty. It can thus be seen that the PRD prefetching method of the invention changes the moment at which PRD entries are prefetched so as to avoid queue underflow and overflow, thereby increasing the data throughput of the DMA processing unit.
The above is merely a preferred embodiment of the invention and is not intended to limit its scope. Those skilled in the art may make further improvements and variations on this basis without departing from the spirit and scope of the invention; accordingly, the scope of protection of the invention shall be defined by the appended claims.

Claims (11)

1. A physical region descriptor prefetching method for a direct memory access processing unit, characterized in that said method comprises the steps of:
when a direct memory access request is received, detecting the data-holding state of a queue to determine whether to prefetch physical region descriptor entries, wherein said queue buffers the data of the corresponding direct memory access operation;
if so, reading at least one physical region descriptor entry from a physical region descriptor table, and storing said physical region descriptor entry in a cache; and
if not, reading a current physical region descriptor entry from said cache, and performing the direct memory access operation according to said current physical region descriptor entry.
2. The physical region descriptor prefetching method of the direct memory access processing unit according to claim 1, characterized in that said method further comprises the steps of:
in the step of storing said physical region descriptor entry in said cache, determining whether the physical region descriptor entry read from said physical region descriptor table is the last entry in said physical region descriptor table; and
if said physical region descriptor entry is the last entry in said physical region descriptor table, stopping reading physical region descriptor entries from said physical region descriptor table.
3. The physical region descriptor prefetching method of the direct memory access processing unit according to claim 2, characterized in that said method further comprises the steps of:
if said physical region descriptor entry is not the last entry in said physical region descriptor table, determining whether said cache is full, so as to decide whether to continue reading physical region descriptor entries from said physical region descriptor table;
wherein, if said cache is full, the step of reading a current physical region descriptor entry from said cache and performing the direct memory access operation according to said current physical region descriptor entry is performed; and if said cache is not full, at least one physical region descriptor entry is read from said physical region descriptor table again and stored in said cache.
4. The physical region descriptor prefetching method of the direct memory access processing unit according to claim 1, characterized in that said method further comprises the steps of:
after reading said current physical region descriptor entry, determining whether the direct memory access operation corresponding to said current physical region descriptor entry has finished; and
if it has not finished, detecting the data-holding state of said queue to determine whether to prefetch physical region descriptor entries; and if the data-holding state of said queue does not call for prefetching physical region descriptor entries, performing the direct memory access operation according to said current physical region descriptor entry.
5. The physical region descriptor prefetching method of the direct memory access processing unit according to claim 4, characterized in that said method further comprises the step of:
if the direct memory access operation corresponding to said current physical region descriptor entry has finished, and said current physical region descriptor entry is the last physical region descriptor entry in the physical region descriptor table, ending the direct memory access operation corresponding to said physical region descriptor table.
6. The physical region descriptor prefetching method of the direct memory access processing unit according to claim 4, characterized in that said method further comprises the step of:
if the direct memory access operation corresponding to said current physical region descriptor entry has finished and said cache is empty, reading at least one physical region descriptor entry from said physical region descriptor table and storing it in said cache.
7. The entity area description element prefetch method for a direct memory access processing unit according to claim 6, wherein the prefetch method further comprises the following steps:
If the direct memory access operation corresponding to the current entity area description element entry has finished and the cache is not empty, reading the current entity area description element entry from the cache; and
detecting the data storage status of the queue to determine whether to prefetch an entity area description element entry, wherein, if the data storage status of the queue indicates that no entity area description element entry needs to be prefetched, performing the direct memory access operation according to the current entity area description element entry that was read.
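Taken together, claims 5-7 describe how the next entry is obtained once a transfer completes: refill from the table when the cache is empty, otherwise read from the cache, and stop when the table is exhausted. A minimal Python sketch of that completion path (illustrative only; the function name and the list-based table/cache model are assumptions, not the patent's implementation):

```python
def on_dma_complete(cache, prd_table):
    """After the DMA operation for the current entry finishes:
    - if the cache is empty, read one entry from the table into it (claim 6);
    - if the cache is not empty, the next current entry is read from it (claim 7).
    Returns the next current entry, or None when the table is done (claim 5)."""
    if not cache and prd_table:
        cache.append(prd_table.pop(0))  # refill from the table (claim 6)
    if cache:
        return cache.pop(0)             # read from the cache (claim 7)
    return None                         # last entry already handled (claim 5)
```

In this toy model a real hardware cache-fill (which may fetch several entries at once) is reduced to moving a single list element.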
8. The entity area description element prefetch method for a direct memory access processing unit according to claim 4, wherein the prefetch method further comprises the following steps:
If the data storage status of the queue allows an entity area description element entry to be prefetched, determining whether the cache is full; and
if the cache is not full, reading at least one entity area description element entry from the entity area description element table and storing it into the cache; if the cache is full, detecting the data storage status of the queue again until the data storage status of the queue no longer allows an entity area description element entry to be prefetched.
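Claim 8's prefetch decision can be sketched as a small loop. This is a hypothetical Python model, not the patent's circuit: `queue_allows_prefetch` stands in for the queue-status check, and the cache capacity is an invented parameter.

```python
def prefetch(cache, cache_capacity, prd_table, queue_allows_prefetch):
    """While the queue status allows prefetching, move entries from the
    PRD table into the cache; stop when the cache is full (hardware would
    keep re-polling the queue status, which a software sketch cannot spin on)."""
    while queue_allows_prefetch() and prd_table:
        if len(cache) < cache_capacity:
            cache.append(prd_table.pop(0))  # read one entry into the cache
        else:
            break  # cache full: hardware would re-check the queue status here
    return cache
```

A later call with fresh queue status resumes fetching from where the table was left off.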
9. The entity area description element prefetch method for a direct memory access processing unit according to claim 1, wherein the prefetch method further comprises: before detecting the data storage status of the queue, determining whether the direct memory access request is a direct memory read request or a direct memory write request.
10. The entity area description element prefetch method for a direct memory access processing unit according to claim 9, wherein, if the direct memory access request is a direct memory read request, the step of detecting the data storage status of a queue comprises detecting whether the queue has free space of a predetermined size, and wherein, if the queue has free space of the predetermined size, the direct memory read operation can be performed.
11. The entity area description element prefetch method for a direct memory access processing unit according to claim 9, wherein, if the direct memory access request is a direct memory write request, the step of detecting the data storage status of a queue comprises detecting whether the queue contains data of a predetermined size, and wherein, if the queue contains data of the predetermined size, the direct memory write operation can be performed.
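Claims 9-11 make the queue-status check direction-dependent: a read needs free space to deposit incoming data, a write needs buffered data to drain. A hedged Python sketch of that check (the capacity and threshold values are invented for illustration; the claims only say "a predetermined size"):

```python
QUEUE_CAPACITY = 8      # hypothetical queue depth
PREDETERMINED_SIZE = 4  # stands in for the claims' "predetermined size"

def queue_ready(queue_fill, is_read_request):
    """Direction-dependent queue-status check:
    - read request (claim 10): the queue needs a predetermined amount of
      free space before a direct memory read can proceed;
    - write request (claim 11): the queue needs a predetermined amount of
      buffered data before a direct memory write can proceed."""
    if is_read_request:
        return QUEUE_CAPACITY - queue_fill >= PREDETERMINED_SIZE
    return queue_fill >= PREDETERMINED_SIZE
```

The same predicate can serve as the `queue_allows_prefetch` input to the prefetch decision of claim 8.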
CNB2007101110352A 2007-06-13 2007-06-13 Entity area description element pre-access method of direct EMS memory for process unit access Active CN100452007C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2007101110352A CN100452007C (en) 2007-06-13 2007-06-13 Entity area description element pre-access method of direct EMS memory for process unit access

Publications (2)

Publication Number Publication Date
CN101059787A CN101059787A (en) 2007-10-24
CN100452007C true CN100452007C (en) 2009-01-14

Family

ID=38865896

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2007101110352A Active CN100452007C (en) 2007-06-13 2007-06-13 Entity area description element pre-access method of direct EMS memory for process unit access

Country Status (1)

Country Link
CN (1) CN100452007C (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103197973A (en) * 2012-01-05 2013-07-10 中兴通讯股份有限公司 Mobile terminal and management method thereof
TWI636363B (en) * 2017-08-08 2018-09-21 慧榮科技股份有限公司 Method for performing dynamic resource management in a memory device, and associated memory device and controller thereof
CN112328510B (en) * 2020-10-29 2022-11-29 上海兆芯集成电路有限公司 Advanced host controller and control method thereof

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5870627A (en) * 1995-12-20 1999-02-09 Cirrus Logic, Inc. System for managing direct memory access transfer in a multi-channel system using circular descriptor queue, descriptor FIFO, and receive status queue
US6272564B1 (en) * 1997-05-01 2001-08-07 International Business Machines Corporation Efficient data transfer mechanism for input/output devices
US20060173970A1 (en) * 2005-02-03 2006-08-03 Level 5 Networks, Inc. Including descriptor queue empty events in completion events
WO2006112270A1 (en) * 2005-04-13 2006-10-26 Sony Corporation Information processing device and information processing method
CN1900922A (en) * 2006-07-27 2007-01-24 威盛电子股份有限公司 Direct internal storage eccess operation method of microcomputer system

Also Published As

Publication number Publication date
CN101059787A (en) 2007-10-24

Similar Documents

Publication Publication Date Title
US10956328B2 (en) Selective downstream cache processing for data access
US6170030B1 (en) Method and apparatus for restreaming data that has been queued in a bus bridging device
TW542958B (en) A method and apparatus for pipelining ordered input/output transactions to coherent memory in a distributed memory, cache coherent, multi-processor system
CN103198025A (en) Method and system form near neighbor data cache sharing
CN100419715C (en) Embedded processor system and its data operating method
US8489851B2 (en) Processing of read requests in a memory controller using pre-fetch mechanism
CN102378971A (en) Method for reading data and memory controller
US7844777B2 (en) Cache for a host controller to store command header information
WO2013130090A1 (en) Data processing apparatus having first and second protocol domains, and method for the data processing apparatus
CN100452007C (en) Entity area description element pre-access method of direct EMS memory for process unit access
US6745308B2 (en) Method and system for bypassing memory controller components
US20080307169A1 (en) Method, Apparatus, System and Program Product Supporting Improved Access Latency for a Sectored Directory
US8464005B2 (en) Accessing common registers in a multi-core processor
US7328312B2 (en) Method and bus prefetching mechanism for implementing enhanced buffer control
US20080276045A1 (en) Apparatus and Method for Dynamic Cache Management
US20080320176A1 (en) Prd (physical region descriptor) pre-fetch methods for dma (direct memory access) units
US8719542B2 (en) Data transfer apparatus, data transfer method and processor
US20090089559A1 (en) Method of managing data movement and cell broadband engine processor using the same
JP2011065359A (en) Memory system
JP2010079536A (en) Memory access control circuit and memory access control method
CN100345103C (en) Method of prefetching data/instructions related to externally triggered events
CN101097555B (en) Method and system for processing data on chip
CN117389915B (en) Cache system, read command scheduling method, system on chip and electronic equipment
US6473834B1 (en) Method and apparatus for prevent stalling of cache reads during return of multiple data words
JP5428653B2 (en) Memory access processing apparatus and method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant