US20020016873A1 - Arbitrating and servicing polychronous data requests in direct memory access - Google Patents

Arbitrating and servicing polychronous data requests in direct memory access

Info

Publication number
US20020016873A1
US20020016873A1 US09/875,512 US87551201A US2002016873A1
Authority
US
United States
Prior art keywords
data
processing module
devices
memory
channel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US09/875,512
Other versions
US6795875B2 (en)
Inventor
Donald Gray
Agha Ahsan
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Microsoft Technology Licensing LLC
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US09/628,473 (US6816923B1)
Assigned to WEBTV NETWORKS, INC. (MICROSOFT): assignment of assignors' interest (see document for details). Assignors: AHSAN, AGHA ZAIGHAM; GRAY, DONALD M. III
Priority to US09/875,512 (US6795875B2)
Application filed by Individual
Publication of US20020016873A1
Assigned to MICROSOFT CORPORATION: merger (see document for details). Assignor: WEBTV NETWORKS, INC.
Priority to US10/945,052 (US6976098B2)
Publication of US6795875B2
Application granted
Priority to US11/126,111 (US7389365B2)
Priority to US11/125,563 (US7089336B2)
Assigned to MICROSOFT TECHNOLOGY LICENSING, LLC: assignment of assignors' interest (see document for details). Assignor: MICROSOFT CORPORATION
Adjusted expiration
Status: Expired - Lifetime

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/14 Handling requests for interconnection or transfer
    • G06F13/20 Handling requests for interconnection or transfer for access to input/output bus
    • G06F13/28 Handling requests for interconnection or transfer for access to input/output bus using burst mode transfer, e.g. direct memory access DMA, cycle steal

Definitions

  • FIG. 2 is a block diagram that illustrates a DMA engine, such as DMA engine 118, for servicing and managing the memory or data requirements of system devices.
  • Each device can be a hardware device, a software module, or a combination thereof.
  • the devices 220 interface with the DMA engine 118 through a devices interface 250.
  • the devices interface 250 allows the DMA engine 118 to service the data requirements of the devices 220 while providing sufficient response time and bandwidth for the devices 220.
  • the devices interface 250 further provides arbitration functionality to the devices 220 such that the DMA engine 118 services the data requests of eligible devices included in the devices 220 in any given cycle.
  • the devices interface 250 determines which devices are eligible to make a service request to the DMA engine 118 in a given cycle or window.
  • the data requests refer to reading or writing data to the DMA engine 118.
  • the devices interface 250 makes a determination as to eligibility on a per device basis and does not consider the channels that may be associated with each device.
  • the memory interface 270 determines whether to make a data request to memory 116 on a per channel basis.
  • the memory interface 270 determines whether a particular channel should be serviced and provides arbitration functionality between the DMA engine 118 and the memory 116.
  • the memory interface 270 evaluates each channel in a repetitive fashion. In this manner, each channel is effectively guaranteed to be serviced within a particular time period.
  • a data request refers to the transfer of data from the main memory to the DMA engine or from the DMA engine to the main memory.
  • when a device makes a data request, it does not imply that data is transferred to or from the main memory. Also, when a data request is serviced by the main memory, it does not imply that a device has received or transferred data to the DMA engine, even though these actions can occur at the same time.
  • the memory interface 270 may be viewed as a state machine that produces an output for a given input.
  • the output is whether the channel being evaluated should be serviced, and the input includes factors that determine whether the channel is critical. Those factors include, but are not limited to, the amount of data currently available to the channel in the DMA engine, how long it takes the main memory to service the data request of the channel, how long before the channel is evaluated again, and the like. After one channel has been evaluated, the state machine advances to the next channel.
  • After a particular sequence of channels has been evaluated, the state machine begins the evaluation process again at the beginning of the sequence. It is possible for a sequence to include a single channel more than once. While the devices interface 250 and the memory interface 270 are illustrated as being separate from the DMA engine 118, it is understood that the devices interface 250 and the memory interface 270 may be integral modules of the DMA engine 118. In addition, the devices interface 250 and the memory interface 270 may comprise both hardware and software components.
  • FIG. 3 is a more detailed diagram illustrating the interaction between the devices 220, the memory 116, and the DMA engine 118.
  • the exemplary system illustrated in FIG. 3 has devices 220 including device 221, device 222, device 223, and device 224. It is understood that the actual number of devices in a particular system is not limited to the illustrated devices but can vary depending on the configuration of the system.
  • Each of the devices 221, 222, 223, and 224 has one or more channels over which data may be transferred.
  • Exemplary devices include, but are not limited to, audio devices, Universal Serial Bus (USB) devices, resampler devices, MPEG devices, any of the devices described above with respect to FIG. 1, and the like.
  • the DMA engine 118 includes a data reservoir 202 that includes device buffers 204, 206, 208, and 209.
  • Each device buffer corresponds to a device included in the devices 220. More specifically, each channel of each device is allocated a portion of the data reservoir 202. In this manner, the buffer requirements of the devices 220 are consolidated into the data reservoir 202. More particularly, the data reservoir 202 replaces the small or medium sized buffers associated with the individual devices with a single large buffer. Not only does this arrangement conserve memory, but the DMA control logic that is usually implemented for each device may be instantiated a single time in the DMA engine 118.
  • in one configuration of the DMA engine 118, 56 independently configurable channels are available.
  • an audio unit or device may use 4 read channels and 4 write channels.
  • An MPEG unit or device may consume 5 channels consisting of 2 read channels, 1 control stream read channel, and 2 write data channels.
  • a USB unit or device may use 1 read data channel and 1 write data channel.
  • the DMA engine 118 can support more or fewer channels. While FIG. 3 represents the data reservoir 202 as maintaining a device buffer for each device, the data reservoir 202 may actually maintain a portion of the data reservoir 202 for each channel of each device.
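  • As a concrete illustration of how such a consolidated reservoir might be organized, the following C sketch gives each channel a slice of one shared buffer. All names, fields, and sizes here are assumptions made for this example; the patent does not specify an implementation.

```c
#include <stdint.h>

/* Hypothetical per-channel descriptor: each channel owns a slice of the
 * single shared data reservoir instead of a private buffer in its device. */
typedef struct {
    uint8_t  enabled;        /* channel currently in use */
    uint8_t  dir_to_device;  /* 1: memory -> device, 0: device -> memory */
    uint32_t base;           /* offset of this channel's slice in the reservoir */
    uint32_t size;           /* size of the slice in bytes */
    uint32_t rd;             /* read pointer (data consumed by the device) */
    uint32_t wr;             /* write pointer (data filled from main memory) */
    uint32_t bytes_per_us;   /* rate at which the device drains the slice */
} dma_channel;

#define NUM_CHANNELS    56          /* per the example in the text */
#define RESERVOIR_BYTES (64 * 1024) /* assumed total size */

static uint8_t     reservoir[RESERVOIR_BYTES]; /* one consolidated buffer */
static dma_channel channels[NUM_CHANNELS];

/* Bytes currently buffered for a channel (the slice is circular). */
static uint32_t channel_bytes_avail(const dma_channel *c)
{
    return (c->wr >= c->rd) ? (c->wr - c->rd) : (c->size - (c->rd - c->wr));
}
```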
  • a data request is sent to the DMA engine 118 through the device interface 250.
  • the device interface 250, rather than performing arbitration on a per channel basis, arbitrates the data requests it receives on a per device or unit basis. If a device needs to make a data request for more than one channel, the device is responsible for making a data request for the higher priority channel because a device can usually only make a single request. From the perspective of the DMA engine 118, the bandwidth requirement of each device is determined by the device's channels, and the DMA engine 118 uses the latency of the urgent channel as the device latency when considering the device request.
  • the device interface 250 provides arbitration functionality that determines which devices or data requests are eligible to be serviced by the DMA engine 118. Once the eligible devices are identified, a basic arbitration scheme may be used to determine which data request should be granted. Determining which devices are eligible, however, includes scheduling the devices such that latencies can be effectively guaranteed. In addition, scheduling the devices in this manner prevents a particular device from consuming the available bandwidth until other devices have been serviced. Scheduling the devices will be discussed further with reference to FIG. 6.
  • the devices interface 250 provides a calculated latency and bandwidth tradeoff.
  • a device having both a high priority and a low bandwidth may be able to withstand a larger latency than a device having a lower priority and a higher bandwidth.
  • Proper scheduling ensures that high priority devices will have an adjustable, guaranteed response time while reducing the buffering requirements for the high bandwidth device.
  • audio devices are typically considered to be high priority devices and an MPEG device is a low priority device with high bandwidth. Because the MPEG device will be serviced in a programmable response time, the buffer requirement of the MPEG device is reduced even though other devices have to be serviced.
  • a key aspect of the devices interface 250 is that each device is guaranteed to be serviced in a defined and programmable response time.
  • the devices are preferably managed by the DMA engine on a per device basis rather than a per channel basis because many of the devices may have low bandwidth and it is more efficient to consider the bandwidth of all the channels of a device.
  • the memory interface 270 uses a list structure to manage the memory or data requirements of the individual channels.
  • the entries in the list structure are channel identifiers that identify the channels of the devices 220.
  • the list, which is described in more detail with reference to FIG. 4, may be viewed as a circular list that is advanced to the next entry each time an entry or channel has been evaluated or serviced.
  • Each channel represented by an entry in the list is evaluated for service on a regular basis, and each channel is assured of being serviced in a programmable response time.
  • One reason the response time is programmable is because each channel can be included in the list structure more than once. This enables those channels that need more frequent servicing to be accommodated while still ensuring that the other channels will be evaluated or serviced within a known response time.
  • the DMA engine 118 uses the data reservoir 202 as a memory buffer for the devices 220. As the memory interface 270 rotates through its circular list and evaluates the channels represented by the entries in that list, the data remaining in the data reservoir 202 for each channel is evaluated. More specifically, the DMA engine 118 evaluates the portion of the data reservoir 202 that corresponds to the channel in the circular list that is currently being examined.
  • the criteria for evaluating each portion of the data reservoir 202 include, but are not limited to, how many bytes are left in the portion of the data reservoir 202, a buffer time that corresponds to the rate at which the remaining data is being used by the device as well as how long those bytes will last, the latency of the memory system experienced while accessing the data from the memory 116, and an entry time representing when the channel will be evaluated again. These factors determine whether the channel being examined is critical or requires service. If the channel requires service, a data request is made to the main memory. If the channel is not critical, then the channel can wait until it is evaluated again by the memory interface of the DMA engine.
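  • One plausible way to express this decision in code, assuming the straightforward reading that a channel is critical when its buffered data would run out before a request deferred to the next evaluation could complete (the text lists the factors but not an exact formula):

```c
#include <stdint.h>

/* Snapshot of the factors listed above; units (bytes, microseconds) and
 * names are assumptions made for this sketch. */
typedef struct {
    uint32_t bytes_avail;     /* data left in the channel's reservoir slice */
    uint32_t bytes_per_us;    /* rate at which the device drains the slice */
    uint32_t mem_latency_us;  /* anticipated memory response time */
    uint32_t reeval_us;       /* time until this channel is evaluated again */
} channel_state;

/* A channel is treated as critical when deferring its request to the next
 * evaluation would let the slice run dry: the remaining buffer time must
 * cover the wait until the next evaluation plus the memory latency of a
 * request issued at that point. */
static int channel_is_critical(const channel_state *s)
{
    if (s->bytes_per_us == 0)
        return 0;  /* an idle channel never becomes critical */
    uint32_t buffer_time_us = s->bytes_avail / s->bytes_per_us;
    return buffer_time_us <= s->reeval_us + s->mem_latency_us;
}
```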
  • One benefit of examining each channel independently of the other channels is that the data can be managed in memory rather than in registers, which results in improved performance.
  • FIG. 4 is a block diagram that represents the arbitration functionality between the DMA engine and the memory that is provided by the memory interface 270, which is included in the DMA engine 118.
  • FIG. 4 illustrates the memory interface 270, which includes, in this example, a main list 271 and a sub list 272. Each entry in the main list 271 corresponds to a channel.
  • as noted above, the DMA engine supports 56 channels, which are represented in the main list as entries or channel identifiers having the values of 0 to 55.
  • the channel identifiers are represented as channels 273, 275, 276, 277, 278, and 279.
  • the length of the main list 271 can vary and only a few entries are illustrated in FIG. 4.
  • Each channel identifier can be listed multiple times on the main list 271, but it is preferable that multiple entries for a single channel be evenly spaced on the main list 271. This allows a wide range of programmed response times to be implemented without requiring significant storage or memory. Also, this ensures that the entry time, or the time until the channel is to be evaluated again, is known.
  • the main list 271 also supports identifier numbers higher than the number of channels supported by the DMA engine.
  • 8 additional channel identifiers are supported and are represented by the numbers 56 through 63. Seven of these channel identifiers indicate a jump or a call from the main list 271 to a sub list such as the sub list 272.
  • the sub-list call 274 is an example of these identifiers, and sub-list call 274 points to the sub list 272.
  • the sub list 272 contains channel entries similar to the entries on the main list 271, and each time a call to the sub-list is made, one entry in the sub-list is evaluated.
  • after the sub-list entry has been serviced, the next entry in the main list 271 is evaluated and serviced as indicated by arrow 290.
  • the next time a call to the sub-list is made from the main list 271, the successive entry in the sub list 272 is evaluated.
  • sub list 272 may be used to hold channels that can withstand longer latencies.
  • the main list 271 may be significantly shorter when sub lists are employed. Otherwise, the main list 271 would have to contain space for the entries on the sub list each time a jump to the sub list occurs. Thus, the use of sub lists conserves memory.
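  • The main-list/sub-list traversal described above might be sketched as follows; the list contents are invented for the example, and only one of the seven special identifiers is modeled:

```c
#include <stdint.h>

#define NUM_CHANNELS  56  /* identifiers 0..55 name real channels */
#define SUB_LIST_CALL 56  /* 56..63 are special; this one calls the sub list */

/* Illustrative list contents: channel 0 appears three times, evenly spaced,
 * so it is evaluated more often; identifier 56 calls into the sub list. */
static const uint8_t main_list[] = { 0, 5, 56, 0, 9, 56, 0, 12, 56 };
static const uint8_t sub_list[]  = { 40, 41, 42, 43 };

static unsigned main_pos, sub_pos;

/* Advance the circular main list and return the next channel to evaluate.
 * Each sub-list call consumes exactly one sub-list entry, so the channels
 * on the sub list rotate through across successive calls and can tolerate
 * the longer resulting latencies. */
static uint8_t next_channel(void)
{
    uint8_t id = main_list[main_pos];
    main_pos = (main_pos + 1) % (sizeof main_list / sizeof main_list[0]);
    if (id == SUB_LIST_CALL) {
        id = sub_list[sub_pos];
        sub_pos = (sub_pos + 1) % (sizeof sub_list / sizeof sub_list[0]);
    }
    return id;
}
```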
  • channel 273 is an identifier for one of the channels of the device 221.
  • the DMA engine 118 maintains the device buffer 204 for the channel 273.
  • the channel 273 is evaluated to determine whether a data request should be made to memory 116.
  • the channel 273 is first checked to determine basic information such as whether the channel is enabled and which way the data is flowing, either to or from the memory 116.
  • full configuration data of the channel 273 is accessed from a memory channel control to determine the bandwidth requirement, the time until the channel 273 will next have an opportunity for service, the data format, the access style, and the like.
  • the available data for the channel in the device buffer 204 is determined by accessing, for example, memory pointers.
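  • Putting these steps together, one evaluation pass might look like the sketch below, which composes the hypothetical helpers from the earlier sketches (it is not standalone); the lookup and queue functions are assumed hooks and are only declared here:

```c
#include <stdint.h>

/* Assumed hooks, declared but not defined in this sketch. */
static uint32_t worst_case_latency_us(uint8_t channel_id);
static uint32_t time_to_next_entry_us(uint8_t channel_id);
static void     enqueue_critical_request(uint8_t channel_id);
static void     enqueue_noncritical_request(uint8_t channel_id);

/* One evaluation step: pick the next channel from the lists, perform the
 * basic enable/direction checks, gather the channel's state, and decide
 * whether a request to main memory must be queued now. */
static void evaluate_one_channel(void)
{
    uint8_t id = next_channel();        /* from the list-traversal sketch */
    dma_channel *c = &channels[id];     /* from the reservoir sketch */

    if (!c->enabled)                    /* basic checks come first */
        return;

    /* The full configuration (bandwidth, data format, access style, ...)
     * would be fetched from the memory channel control at this point. */
    channel_state s = {
        .bytes_avail    = channel_bytes_avail(c),
        .bytes_per_us   = c->bytes_per_us,
        .mem_latency_us = worst_case_latency_us(id),
        .reeval_us      = time_to_next_entry_us(id), /* known: entries are evenly spaced */
    };

    if (channel_is_critical(&s))
        enqueue_critical_request(id);    /* must be serviced promptly */
    else
        enqueue_noncritical_request(id); /* can wait for an idle cycle */
}
```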
  • the critical requests, which are stored in the critical request queue, are then processed or serviced.
  • the critical request queue is preferably a first in first out (FIFO) queue that may be reordered on occasion. In one example, the first four data requests in the queue are examined and serviced in an optimal order.
  • the critical queue stores, in this example, the channel identifier and control information including, but not limited to, current memory page address, first memory sub-page address, current memory length, transaction size, data format, data access style, and the like.
  • the non-critical request queue is not essential to the operation of the invention, but is used to hold the most pressing non-critical data requests. This queue is able to improve memory efficiency by making use of available cycles. For example, if the critical request queue is empty, then data requests in the non-critical queue may be serviced. Data requests in the non-critical queue may remain indefinitely if there is a large volume of other system traffic. If a request in the non-critical queue becomes critical, it is moved to the critical queue for servicing.
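  • A minimal sketch of the two request queues and their servicing order, under the same illustrative assumptions; the request record and queue depth are invented, and issue_to_memory stands in for the actual memory interface:

```c
#include <stdint.h>

#define QDEPTH 16  /* assumed queue depth (power of two) */

/* Hypothetical request record; the text lists the channel identifier plus
 * control information (page address, length, transaction size, format, ...). */
typedef struct { uint8_t channel; /* ... control fields elided ... */ } dma_request;

typedef struct { dma_request q[QDEPTH]; unsigned head, tail; } fifo;

static fifo critical_q, noncritical_q;

static void issue_to_memory(dma_request r); /* assumed memory-interface hook */

static int  fifo_empty(const fifo *f)         { return f->head == f->tail; }
static void fifo_push(fifo *f, dma_request r) { f->q[f->tail++ % QDEPTH] = r; } /* no overflow check in this sketch */
static dma_request fifo_pop(fifo *f)          { return f->q[f->head++ % QDEPTH]; }

/* Issue at most one request to main memory per opportunity: critical
 * requests always win, and idle cycles drain the most pressing
 * non-critical work. A non-critical request that later becomes critical
 * would be moved to the critical queue rather than waiting here. */
static void service_queues(void)
{
    if (!fifo_empty(&critical_q))
        issue_to_memory(fifo_pop(&critical_q));
    else if (!fifo_empty(&noncritical_q))
        issue_to_memory(fifo_pop(&noncritical_q));
}
```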
  • the main list 271 is embodied as a circular list, and because the worst case situations are considered, it is possible to guarantee that a particular channel will be serviced within a certain time period or frame.
  • the advantage of this system is that the data requests to memory from the DMA engine are more suited to the characteristics of the high performance memory.
  • the DMA engine preferably makes large requests, accommodates large bandwidth, and is capable of experiencing significant latency without having an impact on the devices.
  • FIG. 6 illustrates the arbitration functionality provided by the device interface 250.
  • FIG. 6 illustrates device 221, which has channels 301, 302, and 303, and device 222, which has channels 304, 305, and 306.
  • the DMA engine 118 requires that the device 221 send a data request 307 whenever any one of the channels 301, 302, or 303 of the device 221 needs servicing.
  • the device 222 sends a data request 308 whenever one or more of the channels 304, 305, or 306 requires servicing. Because a request can represent one of several channels, the arbitration performed by the devices interface 250 is per device rather than per channel. Each device therefore has the responsibility of indicating which channel is most urgent or critical, and the latency that the device can experience is determined from the urgent channel.
  • the device interface 250 has an arbitration mechanism that is used to determine which devices are eligible to make requests to the DMA engine 118.
  • the arbitration mechanism includes an arbitration count 251 that is represented by four bits, but other representations are equally valid. Eligible devices are determined, for example, by the following comparison logic: ((arbitration count XOR devVal) & devMask), where devVal is the device value and devMask is a defined value.
  • the device 221 may only be eligible every time the two least significant bits of the arbitration count 251 are zero. In this situation, the device 221 would be an eligible device for only one out of four cycles or arbitration counts. In a similar situation, the device 222 may only be eligible to make a data request every time the two least significant bits of the arbitration count 251 are both ones. In this situation, the device 222 is only eligible for one out of every four cycles. Even though the device 221 and the device 222 are only eligible for one out of every four cycles, they are eligible to make a data request on different cycles. In this manner, the requests of the devices can be scheduled in an efficient manner.
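  • The quoted comparison logic can be exercised directly. The sketch below assumes a device is eligible when the masked XOR result is zero (the text does not spell out the exact condition), with devVal and devMask chosen to reproduce the two examples:

```c
#include <stdint.h>
#include <stdio.h>

/* Per-device arbitration parameters for the comparison logic quoted above:
 * here a device is assumed eligible when ((count XOR devVal) & devMask)
 * equals zero. */
typedef struct { uint8_t devVal, devMask; } dev_arb;

static int eligible(uint8_t arb_count, dev_arb d)
{
    return ((arb_count ^ d.devVal) & d.devMask) == 0;
}

int main(void)
{
    dev_arb dev221 = { 0x0, 0x3 };  /* eligible when the two LSBs are 00 */
    dev_arb dev222 = { 0x3, 0x3 };  /* eligible when the two LSBs are 11 */

    /* Walk the 4-bit arbitration count: each device is eligible on one of
     * every four counts, and never on the same count as the other. */
    for (uint8_t count = 0; count < 16; count++)
        printf("count=%2u dev221=%d dev222=%d\n",
               count, eligible(count, dev221), eligible(count, dev222));
    return 0;
}
```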

Abstract

Systems for servicing the data and memory requirements of system devices. A DMA engine that includes a data reservoir is provided that manages and arbitrates the data requests from the system devices. An arbitration unit is provided that only allows eligible devices to make a data request in any given cycle to ensure that all devices will be serviced within a programmable time period. The data reservoir contains the data buffers for each channel of each device. A memory interface ensures that sufficient data for each channel is present in the data reservoir by making requests to a system's memory based on an analysis of each channel. Based on this analysis, a request is either made to the system's main memory, or the channel waits until it is evaluated again in the future. Each channel is thereby guaranteed a response time.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • The present application is a continuation-in-part from commonly-owned co-pending U.S. patent application Ser. No. 09/628,473, filed Jul. 31, 2000, and entitled “Arbitrating and Servicing Polychronous Data Requests In Direct Memory Access”, which application is incorporated herein by reference in its entirety.[0001]
  • BACKGROUND OF THE INVENTION
  • 1. The Field of the Invention [0002]
  • The present invention relates to systems and methods for transferring data to and from memory in a computer system. More particularly, the present invention relates to systems and methods for servicing the data and memory requirements of system devices by arbitrating the data requests of those devices. [0003]
  • 2. The Prior State of the Art [0004]
  • An important operational aspect of a computer or of a computer system is the need to transfer data to and from the memory of the computer. However, if the computer's processor is used to perform the task of transferring data to and from the computer's memory, then the processor is unable to perform other functions. When a computer is supporting high speed devices that have significant memory needs, the processor bears a heavy load if the processor is required to copy data word by word to and from the computer's memory system for those devices. As a result, using the processor to transfer data in this manner can consume precious processing time. [0005]
  • A solution to this problem is Direct Memory Access (DMA). A DMA controller essentially relieves the processor of having to transfer data to and from memory by permitting a device to transfer data to or from the computer's memory without the use of the computer's processor. A significant advantage of DMA is that large amounts of data may be transferred before generating an interrupt to the computer to signal that the task is completed. Because the DMA controller is transferring data, the processor is therefore free to perform other tasks. [0006]
  • As computer systems become more sophisticated, however, it is becoming increasingly evident that there is a fundamental problem between the devices that take advantage of DMA and the memory systems of those computers. More specifically, the problem faced by current DMA modules is the ability to adequately service the growing number of high speed devices as well as their varying data requirements. [0007]
  • High performance memory systems preferably provide high bandwidth and prefer large data requests. This is in direct contrast to many devices, which may request small amounts of data, have low bandwidth, and require small latencies. This results in system inefficiencies as traditional devices individually communicate with the memory system in an effort to bridge this gap. It is possible that many different devices may be simultaneously making small data requests to a memory system that prefers to handle large memory requests. As a result, the performance of the memory system is decreased. [0008]
  • This situation makes it difficult for low bandwidth devices, which may have high priority, to effectively interact with high bandwidth devices that may have lower priority. For example, an audio device may support several different channels that receive data from memory. The audio device typically makes a data request to memory for data every few microseconds for those channels. Because devices such as audio devices recognize that they may experience significant latency from the memory system before their request is serviced, the audio device may implement an excessively large buffer to account for that latency. [0009]
  • This is not an optimum solution for several reasons. For instance, many devices maintain a large buffer because they do not have a guarantee that their data requests will be serviced within a particular time period. Other devices maintain an excessively large buffer because it is crucial that the data be delivered in a timely manner even though the devices may have low bandwidth requirements. For example, if an audio device does not receive its data in a timely manner, the result is instantly noticed by a user. Additionally, each device must implement DMA control logic, which can be quite complex for some devices. In other words, the DMA control logic is effectively repeated for each device. [0010]
  • Current devices often interact with DMA systems independently of the other system devices and each device in the system is able to make a data request to the DMA at any time. As a result, it is difficult to determine which devices need to be serviced first. The arbitration performed by systems employing isochronous arbitration often defines fixed windows in which all devices that may require servicing are given a portion. These fixed windows are large from the perspective of high bandwidth devices and small from the perspective of low bandwidth devices. Thus, high bandwidth devices are required to buffer more data than they really need and low bandwidth devices often do not need to use their allocated portion of the window. This results in inefficiencies because all of the available bandwidth may not be used and additional memory is required for the buffers of high bandwidth devices. In essence, current systems do not adequately allow high priority devices to efficiently coexist with high bandwidth devices. [0011]
  • SUMMARY OF THE INVENTION
  • The present invention provides a DMA engine that manages the data requirements and requests of system devices. The DMA engine includes a data reservoir that effectively consolidates the separate memory buffers of the devices. In addition to consolidating memory, the DMA engine provides centralized addressing as well. The data reservoir is divided into smaller portions that correspond to each device. The DMA engine also provides a scalable bandwidth and latency to the system devices. An overall feature of the present invention is the ability to guarantee that a particular device will be serviced in a programmable response time. This guarantee enables the buffer sizes to be reduced, which conserves memory, as well as permits the available bandwidth to be efficiently utilized. [0012]
  • Because the DMA engine maintains the data reservoir, the DMA engine is responsible for providing each device with the data that the device requests. At the same time, the DMA engine is also responsible for monitoring the remaining data in the data reservoir such that a data request can be made to the system's memory when more data is required for a particular portion of the data reservoir. To accomplish these tasks, the DMA engine provides arbitration functionality to the devices as well as to the memory. [0013]
  • The arbitration functionality provided to the devices determines which devices are eligible to make a data request in a particular cycle. Each device may have multiple data channels, but the device is treated as a unit from the perspective of the DMA engine. By only allowing some of the devices to be eligible during a particular cycle, all devices are ensured of being serviced within a particular time period and high bandwidth devices are not permitted to consume more bandwidth than they were allocated. [0014]
  • The arbitration functionality provided between the DMA engine and the memory occurs on a per channel basis rather than a per device basis. Each channel is evaluated in turn to determine whether a data request should be made to memory or whether the channel can wait until it is evaluated again in the future. Because the number of channels is known and because the time needed to service a particular channel is known, each channel is assured of being serviced within a particular time period. This guarantee ensures that the data reservoir will have the data required by the system devices. [0015]
  • The arbitration interface between the system memory and the DMA engine addresses the data needs of each channel in a successive fashion by using a list that contains at least one entry for each channel. The DMA engine repeatedly cycles through the entries in the list to evaluate the data or memory requirements of each channel. In addition, the order in which the channels are evaluated can be programmed such that high bandwidth devices are serviced more frequently, while low bandwidth devices are serviced within a programmable time period. Thus, data requests to or from memory are for larger blocks of data that can withstand some latency. [0016]
  • Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the invention. The features and advantages of the invention may be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. These and other features of the present invention will become more fully apparent from the following description and appended claims, or may be learned by the practice of the invention as set forth hereinafter. [0017]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • In order that the manner in which the above-recited and other advantages and features of the invention are obtained, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments of the invention and are not therefore to be considered to be limiting of its scope, the invention will be described and explained with additional specificity and detail through the use of the accompanying drawings in which: [0018]
  • FIG. 1 illustrates an exemplary system that provides a suitable operating environment for the present invention; [0019]
  • FIG. 2 is a block diagram illustrating a DMA engine that services the data and memory requirements of system devices; [0020]
  • FIG. 3 is a more detailed block diagram of the DMA engine shown in FIG. 2; [0021]
  • FIG. 4 is a block diagram illustrating the memory interface that provides arbitration functionality between the DMA engine and a system's memory; [0022]
  • FIG. 5 is a block diagram illustrating a main list and a sub list and is used to show calls to channels on the main list as well as the sub list; and [0023]
  • FIG. 6 is a block diagram illustrating the devices interface that provides arbitration functionality between the DMA engine and the system devices. [0024]
  • DETAILED DESCRIPTION OF THE INVENTION
  • The present invention relates to systems for servicing and managing the data requests and memory requirements of devices operating within a computer system. A Direct Memory Access (DMA) engine acts as an intermediary between the memory system and the devices by consolidating the buffer requirements of the devices, providing scalable bandwidth and latency to both the devices and the memory system, minimizing the buffering requirements of the devices through guaranteed scheduling, and efficiently using idle time periods. [0025]
  • An overall feature of the DMA engine is the ability to support the data requirements of the devices in a particular system while ensuring sufficient response time and bandwidth for each device. The DMA engine includes a centralized data reservoir or buffer that replaces the buffers of the individual devices. In addition to reducing or eliminating the need for buffers in the various devices, the consolidated data reservoir of the DMA engine also provides centralized addressing. Also, by centralizing the buffer requirements into the data reservoir, the DMA engine is able to implement the DMA control logic a single time, whereas each device previously required separate DMA control logic. [0026]
  • Another feature of the DMA engine is related to the latency that devices often experience when interacting with memory. The DMA engine ensures that a request from a particular device for data will be handled within a predetermined time period in part by maintaining the data reservoir that holds each device's data. The data reservoir is maintained on a per channel basis by evaluating factors such as the bandwidth requirements of each channel associated with each device, the anticipated response time of the memory system to service the request of each channel, how long the viable data remaining in the data reservoir will last for each channel, and the like. This information is used to determine whether the channel being evaluated should be serviced immediately or whether the channel can wait until it is evaluated again before it is serviced. In this manner, the DMA engine ensures that each device or channel will have sufficient data stored in the data reservoir. [0027]
  • The DMA engine further ensures that the data requirements of all devices will be met within a certain time period by providing an interface to the DMA engine for both the devices and the memory. The DMA engine interface with the memory is adapted to the characteristics of a high performance memory system, while the DMA engine interface with the devices is adapted to the requirements of the devices. The DMA engine is therefore capable of accessing relatively large blocks of data from the memory while providing relatively smaller blocks of data to the devices from the data reservoir. Effectively, the DMA engine permits high priority devices, which may have low bandwidth requirements, to efficiently coexist with high bandwidth devices that may have lower priority. [0028]
  • The present invention extends to both methods and systems for servicing the memory requirements of multiple devices. The embodiments of the present invention may comprise a special purpose or general purpose computer including various computer hardware, as discussed in greater detail below. [0029]
  • Embodiments within the scope of the present invention also include computer-readable media for carrying or having computer-executable instructions or data structures stored thereon. Such computer-readable media can be any available media which can be accessed by a general purpose or special purpose computer. One example of a special purpose computer is a set top box. Exemplary set top boxes include, but are not limited to, analog and digital devices such as satellite receivers, digital recording devices, cable boxes, video game consoles, Internet access boxes, and the like or any combination thereof. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to carry or store desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer. [0030]
  • When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or a combination of hardwired or wireless) to a computer, the computer properly views the connection as a computer-readable medium. Thus, any such connection is properly termed a computer-readable medium. Combinations of the above should also be included within the scope of computer-readable media. Computer-executable instructions comprise, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. [0031]
  • FIG. 1 and the following discussion are intended to provide a brief, general description of a suitable computing environment in which the invention may be implemented. Although not required, the invention will be described in the general context of computer-executable instructions, such as program modules, being executed by computers in network environments. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Computer-executable instructions, associated data structures, and program modules represent examples of the program code means for executing steps of the methods disclosed herein. The particular sequence of such executable instructions or associated data structures represent examples of corresponding acts for implementing the functions described in such steps. [0032]
  • Those skilled in the art will appreciate that the invention may be practiced in network computing environments with many types of computer system configurations, including personal computers, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, and the like. The invention may also be practiced in distributed computing environments where tasks are performed by local and remote processing devices that are linked (either by hardwired links, wireless links, or by a combination of hardwired or wireless links) through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices. [0033]
  • FIG. 1 illustrates a [0034] management system 100 that represents just one of many suitable operating environments in which the principles of the present invention may operate. The management system 100 consists of an ASIC 110 that includes a number of components that communicate over a control bus 111 and a memory bus 112. The control bus 111 carries relatively low bandwidth control information that controls the operation of each of the components of the ASIC 110. The memory bus 112 carries higher bandwidth information between each of the components of the ASIC 110 and memory. A bus management unit 113 manages the communication over the control bus 111 and also interfaces with a processor 114 and a PCI bus 115.
  • The [0035] processor 114 oversees the general video processing by dispatching instructions over the control bus 111 instructing the various components of the ASIC 110 to perform their specialized tasks. The processor 114 also monitors the progress of such tasks thus controlling the various components of ASIC 110 in a coordinated fashion.
  • Of course, memory is required to perform such coordinated operations. Accordingly, the [0036] ASIC 110 has access to one or more memory subsystems 116 that provides volatile memory that is shared between the components of the ASIC 110. The memory subsystems 116 may be any memory subsystem that allows for rapid access to stored information.
  • A [0037] memory unit 117 communicates directly with the memory subsystems 116. The Direct Memory Access unit (hereinafter “DMA unit” or “DMA engine”) 118 acts as a buffering interface to support memory access for the remaining devices in the ASIC 110. Each of these remaining devices will now be described.
  • A Universal [0038] Serial Bus interface 119 runs a universal serial bus and may be any conventional USB interface adapted to interface with the control bus 111 and the memory bus 112.
  • A [0039] device unit 121 includes a number of interfaces for a number of miscellaneous devices. For example, the device unit 121 contains a bi-directional interface for an I2C bus 122 for communication with external components, a bi-directional interface for a smart card 123, a bi-directional Infra Red (IR) serial interface 124, and a bi-directional ISA/IDE bus 125 that interfaces with a Read Only Memory 126 and a hard disk drive 127.
  • A [0040] graphics unit 128 comprises a 3-D graphics rendering engine that may be, for example, an eight-million-polygon, DirectX 7-compatible 3-D graphics unit.
  • An [0041] audio unit 129 drives a PC audio interface 130 such as SPDIF.
  • A [0042] video unit 132 receives video data from the memory bus 112 and converts the video data into a digital display. The video unit 132 provides the digital display data to the digital video encoder 133 which converts the digital display data into the desired format (e.g., NTSC or HDTV) and provides the digital video through a Digital to Analog Converter (DAC) and filter 134 to a composite, S-Video or component output. The digital video encoder 133 may also output the video to a digital video interface (DVI) 135 using a DVI converter 136.
  • An [0043] MPEG decoder 138 is provided to decode MPEG streams. The MPEG decoder also performs subsampled decoding by reducing the frame size of the resulting decoded frame.
  • A [0044] resampler 139 performs resizing of the frame as needed to conform to the display format in force at the appropriate display device. The resampler also performs conversion of interlaced video to progressive video, and vice versa, as needed to conform to the appropriate display format.
  • A [0045] transcoder 140 receives MPEG-compressed frames and further compresses them, thus reducing the storage and bandwidth requirements of the transcoded MPEG stream.
  • An [0046] error corrector 141 reduces errors introduced during transmission of an MPEG stream to the video management system 100.
  • An encryption/[0047] decryption unit 142 performs encryption and decryption as appropriate.
  • While FIG. 1 and the corresponding discussion above provide a general description of a suitable environment in which the invention may be implemented, it will be appreciated that the features of the present invention disclosed herein may be practiced in association with a variety of different system configurations. For example, there are many types of devices that may be adapted to interface with the [0048] DMA engine 118 in accordance with the principles of the present invention, not just those devices described above with respect to FIG. 1.
  • As used herein, “data request” refers to either a read or a write operation. A data request can refer either to the interaction between the DMA engine and the system devices or to the interaction between the DMA engine and the main memory of the system. The present invention is primarily discussed in terms of memory reads, but it is understood to apply to memory writes as well. The memory or data requirements of a particular device can be evaluated from the perspective of either the DMA engine or the main memory of a system. [0049]
  • FIG. 2 is a block diagram that illustrates a DMA engine such as [0050] DMA engine 118 for servicing and managing the memory or data requirements of system devices. Each device can be a hardware device, a software module, or a combination thereof. The devices 220 interface with the DMA engine 118 through a devices interface 250. The devices interface 250 allows the DMA engine 118 to service the data requirements of the devices 220 while providing sufficient response time and bandwidth for the devices 220. The devices interface 250 further provides arbitration functionality to the devices 220 such that the DMA engine 118 services the data requests of eligible devices included in the devices 220 in any given cycle. In other words, the devices interface 250 determines which devices are eligible to make a service request to the DMA engine 118 in a given cycle or window. In this context, the data requests refer to reading or writing data to the DMA engine 118.
  • As described, the [0051] devices interface 250 makes a determination as to eligibility on a per device basis and does not consider the channels that may be associated with each device. The memory interface 270, however, determines whether to make a data request to memory 116 on a per channel basis. The memory interface 270 determines whether a particular channel should be serviced and provides arbitration functionality between the DMA engine 118 and the memory 116. The memory interface 270 evaluates each channel in a repetitive fashion. In this manner, each channel is effectively guaranteed to be serviced within a particular time period. In this context, a data request refers to the transfer of data from the main memory to the DMA engine or from the DMA engine to the main memory. Thus, when a device makes a data request, it does not imply that data is transferred to or from the main memory. Also, when a data request is serviced by the main memory, it does not imply that a device has received or transferred data to the DMA engine, even though these actions can occur at the same time.
  • In one example, the [0052] memory interface 270 may be viewed as a state machine that produces an output for a given input. The output is whether the channel being evaluated should be serviced and the input includes factors that determine whether the channel is critical. Those factors include, but are not limited to, the amount of data currently available to the channel in the DMA engine, how long it takes the main memory to service the data request of the channel, how long before the channel is evaluated again, and the like. After one channel has been evaluated, the state machine advances to the next channel.
  • After a particular sequence of channels has been evaluated, the state machine begins the evaluation process again at the beginning of the sequence. It is possible for a sequence to include a single channel more than once. While the [0053] devices interface 250 and the memory interface 270 are illustrated as being separate from the DMA engine 118, it is understood that the devices interface 250 and the memory interface 270 may be integral modules of the DMA engine 118. In addition, the devices interface 250 and the memory interface 270 may comprise both hardware and software components.
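  • By way of illustration only, the state machine behavior described above might be sketched as follows. This is a minimal sketch, not the patented implementation: the function and type names are assumptions, the sequence is an array of channel identifiers (a channel may appear more than once), and the criticality rule is deferred to a later sketch.

    /* Minimal sketch of the memory interface as a state machine: one
     * channel from a fixed sequence is evaluated per step, the output
     * is a service/no-service decision, and the position wraps so the
     * sequence repeats.  All names here are illustrative. */
    #include <stdbool.h>

    typedef bool (*critical_fn)(unsigned chan);  /* decision rule (sketched later) */
    typedef void (*service_fn)(unsigned chan);   /* post a request to main memory  */

    void memory_interface_step(const unsigned sequence[], unsigned seq_len,
                               unsigned *pos, critical_fn is_critical,
                               service_fn post_request)
    {
        unsigned chan = sequence[*pos];  /* input: the channel under evaluation      */
        if (is_critical(chan))           /* output: should this channel be serviced? */
            post_request(chan);
        *pos = (*pos + 1) % seq_len;     /* advance; wrap to restart the sequence    */
    }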
  • FIG. 3 is a more detailed diagram illustrating the interaction between the [0054] devices 220, the memory 116 and the DMA engine 118. The exemplary system illustrated in FIG. 3 has devices 220 including device 221, device 222, device 223, and device 224. It is understood that the actual number of devices in a particular system is not limited to the illustrated devices but can vary depending on the configuration of the system. Each of the devices 221, 222, 223, and 224 has one or more channels over which data may be transferred. Exemplary devices include, but are not limited to, audio devices, universal serial bus (USB) devices, resampler devices, MPEG devices, any of the devices described above with respect to FIG. 1, and the like.
  • The [0055] DMA engine 118 includes a data reservoir 202 that includes device buffers 204, 206, 208, and 209. Each device buffer corresponds to a device included in the devices 220. More specifically, each channel of each device is allocated a portion of the data reservoir 202. In this manner, the buffer requirements of the devices 220 are consolidated into the data reservoir 202. More particularly, the data reservoir 202 replaces the small or medium sized buffers associated with the individual devices with a single large buffer. Not only does this arrangement conserve memory, but the DMA control logic that is usually implemented for each device may be instantiated a single time in the DMA engine 118.
  • In one example of the [0056] DMA engine 118, 56 independently configurable channels are available. In this example, there are 28 read channels and 28 write channels, and each device in the devices 220 may use more than one channel as previously stated. For example, an audio unit or device may use 4 read channels and 4 write channels. An MPEG unit or device may consume 5 channels consisting of 2 read channels, 1 control stream read channel, and 2 write data channels. A USB unit or device may use 1 read data channel and 1 write data channel. In other examples, the DMA engine 118 can support more or fewer channels. While FIG. 3 represents the data reservoir 202 as maintaining a device buffer for each device, the data reservoir 202 may actually maintain a portion of the data reservoir 202 for each channel of each device.
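  • The channel budget in this example might be tabulated as in the following sketch. Only the per-device channel counts are taken from the text; the structure, field names, and the reservoir offsets and sizes are assumptions for illustration.

    /* Illustrative per-channel configuration: each of the 56 channels
     * has a direction and owns a slice of the shared data reservoir.
     * Offsets and lengths are invented values. */
    enum chan_dir { CHAN_READ, CHAN_WRITE };

    struct chan_cfg {
        const char   *device;   /* owning unit                           */
        enum chan_dir dir;
        unsigned      offset;   /* this channel's slice of the reservoir */
        unsigned      length;   /* slice size in bytes                   */
    };

    /* Audio: 4 read + 4 write; MPEG: 2 data reads + 1 control-stream
     * read + 2 writes; USB: 1 read + 1 write.  Two of those 15 entries
     * are shown; the rest follow the same pattern. */
    static const struct chan_cfg example_cfg[] = {
        { "audio", CHAN_READ,  0x0000, 0x0400 },
        { "usb",   CHAN_WRITE, 0x3c00, 0x0200 },
    };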
  • Whenever a device included in the [0057] devices 220 requires service for any of the channels of the device, a data request is sent to the DMA engine 118 through the device interface 250. The device interface 250, rather than performing arbitration on a per channel basis, arbitrates the data requests it receives on a per device or unit basis. If a device needs to make a data request for more than one channel, the device is responsible for making a data request for the higher priority channel because a device can usually only make a single request. From the perspective of the DMA engine 118, the bandwidth requirement of each device is determined by the device's channels, and the DMA engine 118 uses the latency of the urgent channel as the device latency when considering the device request.
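  • On the device side, the nomination of the higher priority channel might look like the following sketch; the urgency metric (soonest buffer exhaustion) and all names are assumptions.

    /* Device-side sketch: a device can usually post only one request,
     * so it nominates its most urgent channel -- here, the channel
     * whose buffered data runs out soonest at its current drain rate. */
    struct dev_channel {
        unsigned bytes_buffered;  /* data currently held for this channel */
        unsigned drain_rate;      /* bytes consumed per time unit         */
    };

    int most_urgent_channel(const struct dev_channel ch[], int nchans)
    {
        int urgent = -1;
        unsigned soonest = ~0u;
        for (int i = 0; i < nchans; i++) {
            unsigned t = ch[i].drain_rate
                       ? ch[i].bytes_buffered / ch[i].drain_rate
                       : ~0u;                 /* idle channel: never urgent */
            if (t < soonest) { soonest = t; urgent = i; }
        }
        return urgent;  /* -1 only if every channel is idle */
    }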
  • The [0058] device interface 250 provides arbitration functionality that determines which devices or data requests are eligible to be serviced by the DMA engine 118. Once the eligible devices are identified, a basic arbitration scheme may be used to determine which data request should be granted. Determining which devices are eligible, however, includes scheduling the devices such that latencies can be effectively guaranteed. In addition, scheduling the devices in this manner prevents a particular device from consuming the available bandwidth until other devices have been serviced. Scheduling the devices will be discussed further with reference to FIG. 6.
  • In essence, the [0059] devices interface 250 provides a calculated latency and bandwidth tradeoff. A device having both a high priority and a low bandwidth may be able to withstand a larger latency than a device having a lower priority and a higher bandwidth. Proper scheduling ensures that high priority devices will have an adjustable, guaranteed response time while reducing the buffering requirements for the high bandwidth devices. For example, audio devices are typically considered high priority devices, while an MPEG device is a low priority device with high bandwidth. Because the MPEG device will be serviced in a programmable response time, the buffer requirement of the MPEG device is reduced even though other devices have to be serviced. A key aspect of the devices interface 250 is that each device is guaranteed to be serviced in a defined and programmable response time.
  • The devices are preferably managed by the DMA engine on a per device basis rather than a per channel basis because many of the devices may have low bandwidth and it is more efficient to consider the bandwidth of all the channels of a device. The [0060] memory interface 270, however, uses a list structure to manage the memory or data requirements of the individual channels. The entries in the list structure are channel identifiers that identify the channels of the devices 220.
  • The list, which is described in more detail with reference to FIG. 4, may be viewed as a circular list that is advanced to the next entry each time an entry or channel has been evaluated or serviced. Each channel represented by an entry in the list is evaluated for service on a regular basis, and each channel is assured of being serviced in a programmable response time. One reason the response time is programmable is that each channel can be included in the list structure more than once. This enables those channels that need more frequent servicing to be accommodated while still ensuring that the other channels will be evaluated or serviced within a known response time. [0061]
  • The [0062] DMA engine 118 uses the data reservoir 202 as a memory buffer for the devices 220. As the memory interface 270 rotates through the circular list maintained by the memory interface 270 and evaluates the channels represented by the entries in the circular list, the data remaining in the data reservoir 202 for each channel is evaluated. More specifically, the DMA engine 118 evaluates the portion of the data reservoir 202 that corresponds to the channel in the circular list of the memory interface 270 that is being examined.
  • The criteria for evaluating each portion of the [0063] data reservoir 202 include, but are not limited to: how many bytes are left in the portion of the data reservoir 202; a buffer time that reflects the rate at which the remaining data is being used by the device and therefore how long those bytes will last; the latency of the memory system experienced while accessing the data from the memory 116; and an entry time representing when the channel will be evaluated again. These factors determine whether the channel being examined is critical or requires service. If the channel requires service, a data request is made to the main memory. If the channel is not critical, then the channel can wait until it is evaluated again by the memory interface of the DMA engine. One benefit of examining each channel independently of the other channels is that the data can be managed in memory rather than in registers, which results in improved performance.
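  • Folded into code, a criticality test built from these criteria might read as follows; the field names, units, and the explicit overhead allowance are assumptions.

    /* Sketch of the criticality rule deferred from the earlier state
     * machine sketch: a channel is critical if its remaining buffered
     * data will be exhausted before the channel can come around again
     * and be refilled from main memory. */
    #include <stdbool.h>

    struct chan_status {
        unsigned bytes_left;      /* data left in this channel's reservoir slice */
        unsigned drain_rate;      /* bytes the device consumes per time unit     */
        unsigned memory_latency;  /* worst-case time for memory to service it    */
        unsigned entry_time;      /* time until this channel is evaluated again  */
    };

    bool channel_status_critical(const struct chan_status *s, unsigned overhead)
    {
        /* Buffer time: how long the remaining bytes last at this rate. */
        unsigned buffer_time =
            s->drain_rate ? s->bytes_left / s->drain_rate : ~0u;
        return buffer_time < s->entry_time + s->memory_latency + overhead;
    }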
  • FIG. 4 is a block diagram that represents the arbitration functionality between the DMA engine and the memory that is provided by the [0064] memory interface 270, which is included in the DMA engine 118. FIG. 4 illustrates the memory interface 270, which includes, in this example, a main list 271 and a sub list 272. Each entry in the main list 271 corresponds to a channel. In a previous example, the DMA engine supported 56 channels, which are represented in the main list as entries or channel identifiers having the values of 0 to 55. The channel identifiers are represented as channels 273, 275, 276, 277, 278, and 279. It is understood that the length of the main list 271 can vary and only a few entries are illustrated in FIG. 4. Each channel identifier can be listed multiple times on the main list 271, but it is preferable that multiple entries for a single channel be evenly spaced on the main list 271. This allows a wide range of programmed response times to be implemented without requiring significant storage or memory. Also, this ensures that the entry time, or the time until the channel is to be evaluated again, is known.
  • The [0065] main list 271 also supports identifier numbers higher than the number of channels supported by the DMA engine. In this example, 8 additional channel identifiers are supported and are represented by the numbers 56 through 63. Seven of these channel identifiers indicate a jump or a call from the main list 271 to a sub list such as the sub list 272. The sub-list call 274 is an example of such an identifier and points to the sub list 272. The sub list 272 contains channel entries similar to the entries on the main list 271, and each time a call to the sub-list is made, one entry in the sub-list is evaluated. After one entry on the sub-list has been serviced, the next entry in the main list 271 is evaluated and serviced as indicated by arrow 290. The next time a call to the sub-list is made from the main list 271, the successive entry in the sub list 272 is evaluated.
  • This provides the significant advantage of using smaller tables to replace a single larger table. In FIG. 5, for example, if a [0066] main list 271 had channels M0, M1 and M2 and the sub-list 272 had channels S0, S1, S2, S3, and S4, then the calling order of the entries in both lists would be M0, M1, M2, S0, M0, M1, M2, S1, M0, M1, M2, S2, M0, M1, M2, S3, M0, M1, M2, and S4. If a single list were used to implement this example, 20 entries would be needed in the list. By using a main list and a sub-list, however, only nine entries are needed in this example: a four entry main list and a five entry sub-list.
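  • The two-level walk just described is compact to express in code. In the sketch below, the SUBLIST_CALL sentinel and the particular channel numbers standing in for M0-M2 and S0-S4 are invented; running it reproduces the calling order worked out above.

    #include <stdio.h>

    #define SUBLIST_CALL 56u  /* identifiers >= 56 are not real channels */

    /* Return the next channel to evaluate.  The main list is circular;
     * a call entry consumes exactly one sub-list entry, and the
     * sub-list resumes where it left off on the next call. */
    unsigned next_entry(const unsigned main_list[], unsigned main_len,
                        const unsigned sub_list[], unsigned sub_len,
                        unsigned *mpos, unsigned *spos)
    {
        unsigned id = main_list[*mpos];
        *mpos = (*mpos + 1) % main_len;
        if (id == SUBLIST_CALL) {
            id = sub_list[*spos];
            *spos = (*spos + 1) % sub_len;
        }
        return id;
    }

    int main(void)
    {
        /* M0..M2 as channels 0..2, S0..S4 as channels 10..14. */
        const unsigned m[] = { 0, 1, 2, SUBLIST_CALL };
        const unsigned s[] = { 10, 11, 12, 13, 14 };
        unsigned mp = 0, sp = 0;
        for (int i = 0; i < 20; i++)   /* one full combined rotation */
            printf("%u ", next_entry(m, 4, s, 5, &mp, &sp));
        printf("\n");  /* prints 0 1 2 10 0 1 2 11 ... 0 1 2 14 */
        return 0;
    }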
  • As illustrated in the previous example, only one entry on the sub-list is evaluated each time a call is made to that sub-list. Thus, another significant advantage of the [0067] sub list 272 is that the sub list 272 may be used to hold channels that can withstand longer latencies. Another advantage of the sub list 272 is that the main list 271 may be significantly shorter when sub lists are employed. Otherwise, the main list 271 would have to contain space for the entries on the sub list each time a jump to the sub list occurs. Thus, the use of sub lists conserves memory.
  • With reference to both FIGS. 3 and 4, assume that [0068] channel 273 is an identifier for one of the channels of the device 221. Also assume that the DMA engine 118 maintains the device buffer 204 for the channel 273. When the main list 271 reaches the channel 273, the channel 273 is evaluated to determine whether a data request should be made to memory 116. In the evaluation, the channel 273 is first checked to determine basic information such as whether the channel is enabled and which way the data is flowing, either to or from the memory 116. Next, full configuration data of the channel 273 is accessed from a memory channel control to determine the bandwidth requirement, the time until the channel 273 will next have an opportunity for service, the data format, the access style, and the like.
  • Next, the available data for the channel in the [0069] device buffer 204 is determined by accessing, for example, memory pointers. The amount of available data, in conjunction with how fast the available data is being used, determines how much time is represented by the available data. This value is compared against the response time, which includes how long until the channel will next be examined, as well as an allowance for system overhead. If the comparison indicates that the time remaining to the channel 273 is less than the response time, then the channel 273 is considered critical and a data request for service is posted by the DMA engine 118. If the channel 273 is critical, the data request is placed in a critical request queue for servicing. If the channel 273 is not critical, the data request may be placed in a non-critical request queue.
  • The critical requests, which are stored in the critical request queue, are then processed or serviced. The critical request queue is preferably a first in, first out (FIFO) queue that may be reordered on occasion. In one example, the first four data requests in the queue are examined and serviced in an optimal order. The critical queue stores, in this example, the channel identifier and control information including, but not limited to, current memory page address, first memory sub-page address, current memory length, transaction size, data format, data access style, and the like. [0070]
  • The non-critical request queue is not essential to the operation of the invention, but is used to hold the most pressing non-critical data requests. This queue is able to improve memory efficiency by making use of available cycles. For example, if the critical request queue is empty, then data requests in the non-critical queue may be serviced. Data requests in the non-critical queue may remain indefinitely if there is a large volume of other system traffic. If a request in the non-critical queue becomes critical, it is moved to the critical queue for servicing. [0071]
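  • One possible realization of the two queues is sketched below; the ring-buffer capacity, the promotion policy (checking only the front non-critical entry), and the hook names request_now_critical and service_from_memory are all assumptions.

    #include <stdbool.h>

    #define QCAP 16
    struct req_queue {                    /* small FIFO ring of channel ids */
        unsigned buf[QCAP], head, len;
    };

    static bool queue_empty(const struct req_queue *q) { return q->len == 0; }

    static void queue_push(struct req_queue *q, unsigned chan)
    {
        if (q->len < QCAP)                /* a full queue silently drops here */
            q->buf[(q->head + q->len++) % QCAP] = chan;
    }

    static unsigned queue_pop(struct req_queue *q)  /* caller checks empty */
    {
        unsigned chan = q->buf[q->head];
        q->head = (q->head + 1) % QCAP;
        q->len--;
        return chan;
    }

    bool request_now_critical(unsigned chan);  /* hooks into channel state; */
    void service_from_memory(unsigned chan);   /* names are assumptions     */

    /* One servicing step: promote the front non-critical request if it
     * has become critical, serve criticals first, and spend any idle
     * cycle on a non-critical request.  Under heavy traffic the
     * non-critical queue may simply wait, as described above. */
    void service_step(struct req_queue *critical, struct req_queue *non_critical)
    {
        if (!queue_empty(non_critical) &&
            request_now_critical(non_critical->buf[non_critical->head]))
            queue_push(critical, queue_pop(non_critical));
        if (!queue_empty(critical))
            service_from_memory(queue_pop(critical));
        else if (!queue_empty(non_critical))
            service_from_memory(queue_pop(non_critical));
    }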
  • When determining the response time for a particular channel, it is often necessary to compute the worst case scenario for that channel. This is often dependent on several factors, including, but not limited to, the response time of the memory system, the transaction size, and the like. Determining whether a particular channel should be serviced involves an analysis of several factors, including, but not limited to: the time until the channel will be checked again; the number of requests in the critical queue before a request is posted; the worst case latency from when a request is posted until it is granted by the memory; and the worst case latency from when a request is granted until its servicing is complete. Some of these factors are design constants while others are dependent on the channel. [0072]
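  • Summed as a bound, those factors give a worst-case response time along the lines of the sketch below; the decomposition and parameter names are assumptions, not figures from the patent.

    /* Hedged worst-case accounting: the channel must hold enough data
     * to survive its recheck interval plus the queue ahead of it plus
     * the worst-case grant and completion latencies. */
    unsigned worst_case_response(unsigned recheck_time,     /* until next evaluation     */
                                 unsigned queued_ahead,     /* requests ahead in queue   */
                                 unsigned per_req_service,  /* worst service per request */
                                 unsigned grant_latency,    /* post -> grant, worst case */
                                 unsigned service_latency)  /* grant -> done, worst case */
    {
        return recheck_time
             + queued_ahead * per_req_service  /* drain requests posted earlier */
             + grant_latency
             + service_latency;
    }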
  • Because the [0073] main list 271 is embodied as a circular list, and because the worst case situations are considered, it is possible to guarantee that a particular channel will be serviced within a certain time period or frame. The advantage of this system is that the data requests to memory from the DMA engine are better suited to the characteristics of the high performance memory. Thus, the DMA engine preferably makes large requests, accommodates large bandwidth, and is capable of tolerating significant latency without having an impact on the devices.
  • FIG. 6 illustrates the arbitration functionality provided by the [0074] device interface 250. FIG. 6 illustrates device 221, which has channels 301, 302, and 303, and device 222, which has channels 304, 305, and 306. In this example, the DMA engine 118 requires that the device 221 send a data request 307 whenever any one of the channels 301, 302, or 303 of the device 221 needs servicing. Similarly, the device 222 sends a data request 308 whenever one or more of the channels 304, 305 or 306 requires servicing. Because a request can represent one of several channels, the arbitration performed by the devices interface 250 is per device rather than per channel. Each device therefore has the responsibility of indicating which channel is most urgent or critical, and the latency that the device can experience is determined from the urgent channel.
  • The [0075] device interface 250 has an arbitration mechanism that is used to determine which devices are eligible to make requests to the DMA engine 118. In other words, a data request can only be made to the DMA engine when a device is eligible to make a request. In this example, the arbitration mechanism includes an arbitration count 251 that is represented by four bits, but other representations are equally valid. Eligible devices are determined, for example, by the following comparison logic: ((arbitration count XOR devVal) & devMask), where devVal is the device value and devMask is a defined value.
  • Whenever this logic comparison is true for a particular device, that device is eligible to make a data request for data from the data reservoir of the DMA engine. Using this comparison logic, the eligibility of a particular device can be programmed. More specifically, a particular device can be eligible to make a request every cycle, every other cycle, every fourth cycle, every eighth cycle or every sixteenth cycle. This logic also allows the eligibility of the devices to be staggered or scheduled such that any one device does not consume the available bandwidth. As used herein, “cycle” can refer to a defined time window, a certain number of clock cycles, or any other period in which data requests from eligible devices can be made or serviced. [0076]
  • For example, the [0077] device 221 may only be eligible every time the two least significant bits of the arbitration count 251 are zero. In this situation, the device 221 would be an eligible device for only one out of four cycles or arbitration counts. Similarly, the device 222 may only be eligible to make a data request every time the two least significant bits of the arbitration count 251 are both ones. In this situation, the device 222 is only eligible for one out of every four cycles. Even though the device 221 and the device 222 are each only eligible for one out of every four cycles, they are eligible to make a data request on different cycles. In this manner, the requests of the devices can be scheduled in an efficient manner.
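  • Reading “true” in the comparison above as the masked XOR evaluating to zero (the reading consistent with both worked examples), the eligibility logic and the staggering of devices 221 and 222 can be sketched as follows; the devVal and devMask values are those implied by the examples.

    #include <stdbool.h>
    #include <stdio.h>

    bool device_eligible(unsigned arb_count, unsigned devVal, unsigned devMask)
    {
        return ((arb_count ^ devVal) & devMask) == 0;
    }

    int main(void)
    {
        /* 4-bit arbitration count: each device is eligible one cycle
         * in four, but on different cycles, so requests are staggered. */
        for (unsigned count = 0; count < 16; count++) {
            bool d221 = device_eligible(count, 0x0, 0x3); /* low bits == 00 */
            bool d222 = device_eligible(count, 0x3, 0x3); /* low bits == 11 */
            printf("count %2u: dev221 %d  dev222 %d\n", count, d221, d222);
        }
        return 0;
    }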
  • The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.[0078]

Claims (20)

What is claimed and desired to be secured by United States Letters Patent is:
1. An audio/video processing system comprising:
an audio processing module disposed to process audio data and to provide first requests over a first data channel;
a second processing module disposed to process data and to provide second requests over a second data channel; and
a direct memory access (DMA) engine receiving the first and second requests over the first and second data channels, the DMA engine including a data reservoir maintaining a memory buffer for the audio processing module and the second processing module, and including an arbitration mechanism disposed to arbitrate the first and second data requests.
2. The invention as in claim 1, wherein the second processing module comprises a digital video processing module disposed to process video data and to provide video requests over the second data channel.
3. The invention as in claim 1, wherein the second processing module comprises a Universal Serial Bus.
4. The invention as in claim 1, wherein the second processing module comprises a device unit that interfaces with one or more external components.
5. The invention as in claim 1, wherein the second processing module comprises a graphics rendering engine.
6. The invention as recited in claim 1, wherein the second processing module comprises a video decoder.
7. The invention as recited in claim 6, wherein the video decoder comprises an MPEG decoder.
8. The invention as recited in claim 1, wherein the second processing module comprises a resampler.
9. The invention as recited in claim 1, wherein the second processing module comprises a transcoder.
10. The invention as recited in claim 1, wherein the second processing module comprises an error corrector.
11. The invention as recited in claim 1, wherein the second processing module comprises an encryption unit.
12. The invention as recited in claim 1, wherein the second processing module comprises a decryption unit.
13. The invention as in claim 1 wherein the arbitration mechanism comprises:
a devices interface operably connected with the data reservoir disposed to arbitrate the first and second data requests generated by the audio processing module and the second processing module; and
a memory interface operably connected with the DMA module disposed to arbitrate reservoir data requests generated by the DMA module for data from the main memory to replenish the data reservoir.
14. The invention as in claim 13, wherein the data reservoir comprises a plurality of device buffers.
15. The invention as in claim 14, wherein each of the plurality of device buffers comprises at least one channel buffer for each channel associated with a respective one of the audio processing module and the second processing module.
16. The invention as in claim 13, wherein the devices interface further comprises an arbitration mechanism used to select eligible devices from one of the audio processing module and the second processing module, wherein the eligible device makes the device data requests.
17. The invention as in claim 13, wherein the DMA engine guarantees that device data requests of the digital video processing module and the audio processing module are serviced within a programmable response time.
18. The invention as in claim 13, wherein the memory interface further comprises a circular list having a plurality of entries, each entry representing one of the channels of the audio processing module and the second processing module, wherein the channels are evaluated to determine if the channels are critical.
19. The invention as in claim 18, wherein the circular list is linked to one or more sub-lists, the one or more sub-lists having additional entries representing one of the channels of the audio processing module and the second processing module.
20. The invention as in claim 18, wherein the DMA engine makes the reservoir data request for the channels that are critical.
US09/875,512 2000-07-31 2001-06-01 Arbitrating and servicing polychronous data requests in direct memory access Expired - Lifetime US6795875B2 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
US09/875,512 US6795875B2 (en) 2000-07-31 2001-06-01 Arbitrating and servicing polychronous data requests in direct memory access
US10/945,052 US6976098B2 (en) 2000-07-31 2004-09-20 Arbitrating and servicing polychronous data requests in direct memory access
US11/125,563 US7089336B2 (en) 2000-07-31 2005-05-10 Arbitrating and servicing polychronous data requests in Direct Memory Access
US11/126,111 US7389365B2 (en) 2000-07-31 2005-05-10 Arbitrating and servicing polychronous data requests in direct memory access

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US09/628,473 US6816923B1 (en) 2000-07-31 2000-07-31 Arbitrating and servicing polychronous data requests in direct memory access
US09/875,512 US6795875B2 (en) 2000-07-31 2001-06-01 Arbitrating and servicing polychronous data requests in direct memory access

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US09/628,473 Continuation-In-Part US6816923B1 (en) 2000-07-31 2000-07-31 Arbitrating and servicing polychronous data requests in direct memory access

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US10/945,052 Continuation US6976098B2 (en) 2000-07-31 2004-09-20 Arbitrating and servicing polychronous data requests in direct memory access

Publications (2)

Publication Number Publication Date
US20020016873A1 true US20020016873A1 (en) 2002-02-07
US6795875B2 US6795875B2 (en) 2004-09-21

Family

ID=34139090

Family Applications (4)

Application Number Title Priority Date Filing Date
US09/875,512 Expired - Lifetime US6795875B2 (en) 2000-07-31 2001-06-01 Arbitrating and servicing polychronous data requests in direct memory access
US10/945,052 Expired - Lifetime US6976098B2 (en) 2000-07-31 2004-09-20 Arbitrating and servicing polychronous data requests in direct memory access
US11/126,111 Expired - Fee Related US7389365B2 (en) 2000-07-31 2005-05-10 Arbitrating and servicing polychronous data requests in direct memory access
US11/125,563 Expired - Fee Related US7089336B2 (en) 2000-07-31 2005-05-10 Arbitrating and servicing polychronous data requests in Direct Memory Access

Family Applications After (3)

Application Number Title Priority Date Filing Date
US10/945,052 Expired - Lifetime US6976098B2 (en) 2000-07-31 2004-09-20 Arbitrating and servicing polychronous data requests in direct memory access
US11/126,111 Expired - Fee Related US7389365B2 (en) 2000-07-31 2005-05-10 Arbitrating and servicing polychronous data requests in direct memory access
US11/125,563 Expired - Fee Related US7089336B2 (en) 2000-07-31 2005-05-10 Arbitrating and servicing polychronous data requests in Direct Memory Access

Country Status (1)

Country Link
US (4) US6795875B2 (en)

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020180890A1 (en) * 2001-05-21 2002-12-05 Milne James R. Modular digital television architecture
US20030001981A1 (en) * 2001-05-21 2003-01-02 Sony Corporation Modular digital television architecture
US20040031060A1 (en) * 2001-04-12 2004-02-12 Tetsujiro Kondo Signal processing device, housing rack, and connector
US20050080943A1 (en) * 2003-10-09 2005-04-14 International Business Machines Corporation Method and apparatus for efficient sharing of DMA resource
US20050228936A1 (en) * 2004-03-18 2005-10-13 International Business Machines Corporation Method and apparatus for managing context switches using a context switch history table
CN100552655C (en) * 2003-08-07 2009-10-21 松下电器产业株式会社 Processor integrated circuit and the methods of product development that processor integrated circuit has been installed
US20100008424A1 (en) * 2005-03-31 2010-01-14 Pace Charles P Computer method and apparatus for processing image data
CN101645052A (en) * 2008-08-06 2010-02-10 中兴通讯股份有限公司 Quick direct memory access (DMA) ping-pong caching method
US20100086062A1 (en) * 2007-01-23 2010-04-08 Euclid Discoveries, Llc Object archival systems and methods
CN101820543A (en) * 2010-03-30 2010-09-01 北京蓝色星河软件技术发展有限公司 Ping-pong structure fast data access method combined with direct memory access (DMA)
US20110182352A1 (en) * 2005-03-31 2011-07-28 Pace Charles P Feature-Based Video Compression
US8842154B2 (en) 2007-01-23 2014-09-23 Euclid Discoveries, Llc Systems and methods for providing personal video services
US8902971B2 (en) 2004-07-30 2014-12-02 Euclid Discoveries, Llc Video compression repository and model reuse
US9532069B2 (en) 2004-07-30 2016-12-27 Euclid Discoveries, Llc Video compression repository and model reuse
US9578345B2 (en) 2005-03-31 2017-02-21 Euclid Discoveries, Llc Model-based video encoding and decoding
US9621917B2 (en) 2014-03-10 2017-04-11 Euclid Discoveries, Llc Continuous block tracking for temporal prediction in video encoding
US9743078B2 (en) 2004-07-30 2017-08-22 Euclid Discoveries, Llc Standards-compliant model-based video encoding and decoding
US10091507B2 (en) 2014-03-10 2018-10-02 Euclid Discoveries, Llc Perceptual optimization for model-based video encoding
US10097851B2 (en) 2014-03-10 2018-10-09 Euclid Discoveries, Llc Perceptual optimization for model-based video encoding

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2385740B (en) * 2002-02-22 2005-04-20 Zarlink Semiconductor Ltd A telephone subscriber unit and a semiconductor device for use in or with a telephone subscriber unit
US20040003164A1 (en) * 2002-06-27 2004-01-01 Patrick Boily PCI bridge and data transfer methods
US7263587B1 (en) * 2003-06-27 2007-08-28 Zoran Corporation Unified memory controller
US7213084B2 (en) * 2003-10-10 2007-05-01 International Business Machines Corporation System and method for allocating memory allocation bandwidth by assigning fixed priority of access to DMA machines and programmable priority to processing unit
US20050223131A1 (en) * 2004-04-02 2005-10-06 Goekjian Kenneth S Context-based direct memory access engine for use with a memory system shared by devices associated with multiple input and output ports
US7822903B2 (en) * 2006-02-24 2010-10-26 Qualcomm Incorporated Single bus command having transfer information for transferring data in a processing system
US8417842B2 (en) * 2008-05-16 2013-04-09 Freescale Semiconductor Inc. Virtual direct memory access (DMA) channel technique with multiple engines for DMA controller
US9110771B2 (en) 2008-06-13 2015-08-18 New York University Computations using a polychronous wave propagation system
US20120155273A1 (en) * 2010-12-15 2012-06-21 Advanced Micro Devices, Inc. Split traffic routing in a processor
US9128925B2 (en) * 2012-04-24 2015-09-08 Freescale Semiconductor, Inc. System and method for direct memory access buffer utilization by setting DMA controller with plurality of arbitration weights associated with different DMA engines
US10575568B2 (en) 2017-11-23 2020-03-03 Shannon Lehna Smart body shaping system
US10740150B2 (en) * 2018-07-11 2020-08-11 X-Drive Technology, Inc. Programmable state machine controller in a parallel processing system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5598542A (en) * 1994-08-08 1997-01-28 International Business Machines Corporation Method and apparatus for bus arbitration in a multiple bus information handling system using time slot assignment values
US6167465A (en) * 1998-05-20 2000-12-26 Aureal Semiconductor, Inc. System for managing multiple DMA connections between a peripheral device and a memory and performing real-time operations on data carried by a selected DMA connection
US6446151B1 (en) * 1999-09-29 2002-09-03 Agere Systems Guardian Corp. Programmable time slot interface bus arbiter

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5388237A (en) * 1991-12-30 1995-02-07 Sun Microsystems, Inc. Method of and apparatus for interleaving multiple-channel DMA operations
KR970002384B1 (en) * 1994-10-26 1997-03-03 엘지전자 주식회사 Control unit for sound-generation and display for portable terminal
US5613162A (en) * 1995-01-04 1997-03-18 Ast Research, Inc. Method and apparatus for performing efficient direct memory access data transfers
JP3403284B2 (en) * 1995-12-14 2003-05-06 インターナショナル・ビジネス・マシーンズ・コーポレーション Information processing system and control method thereof
US5754884A (en) * 1996-05-20 1998-05-19 Advanced Micro Devices Method for improving the real-time functionality of a personal computer which employs an interrupt servicing DMA controller
US5974480A (en) * 1996-10-18 1999-10-26 Samsung Electronics Co., Ltd. DMA controller which receives size data for each DMA channel
US5982672A (en) * 1996-10-18 1999-11-09 Samsung Electronics Co., Ltd. Simultaneous data transfer through read and write buffers of a DMA controller
US5894586A (en) * 1997-01-23 1999-04-13 Xionics Document Technologies, Inc. System for providing access to memory in which a second processing unit is allowed to access memory during a time slot assigned to a first processing unit
JPH11184804A (en) * 1997-12-22 1999-07-09 Nec Corp Information processor and information processing method
US6205524B1 (en) * 1998-09-16 2001-03-20 Neomagic Corp. Multimedia arbiter and method using fixed round-robin slots for real-time agents and a timed priority slot for non-real-time agents
US6275877B1 (en) * 1998-10-27 2001-08-14 James Duda Memory access controller
JP2003141057A (en) * 2001-11-06 2003-05-16 Mitsubishi Electric Corp Dma transfer control circuit

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5598542A (en) * 1994-08-08 1997-01-28 International Business Machines Corporation Method and apparatus for bus arbitration in a multiple bus information handling system using time slot assignment values
US6167465A (en) * 1998-05-20 2000-12-26 Aureal Semiconductor, Inc. System for managing multiple DMA connections between a peripheral device and a memory and performing real-time operations on data carried by a selected DMA connection
US6446151B1 (en) * 1999-09-29 2002-09-03 Agere Systems Guardian Corp. Programmable time slot interface bus arbiter

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040031060A1 (en) * 2001-04-12 2004-02-12 Tetsujiro Kondo Signal processing device, housing rack, and connector
US7859601B2 (en) * 2001-04-12 2010-12-28 Sony Corporation Signal processing device, housing rack, and connector
US20030001981A1 (en) * 2001-05-21 2003-01-02 Sony Corporation Modular digital television architecture
US20020180890A1 (en) * 2001-05-21 2002-12-05 Milne James R. Modular digital television architecture
CN100552655C (en) * 2003-08-07 2009-10-21 松下电器产业株式会社 Processor integrated circuit and the methods of product development that processor integrated circuit has been installed
US20050080943A1 (en) * 2003-10-09 2005-04-14 International Business Machines Corporation Method and apparatus for efficient sharing of DMA resource
US6993598B2 (en) 2003-10-09 2006-01-31 International Business Machines Corporation Method and apparatus for efficient sharing of DMA resource
US7136943B2 (en) 2004-03-18 2006-11-14 International Business Machines Corporation Method and apparatus for managing context switches using a context switch history table
US20050228936A1 (en) * 2004-03-18 2005-10-13 International Business Machines Corporation Method and apparatus for managing context switches using a context switch history table
US8902971B2 (en) 2004-07-30 2014-12-02 Euclid Discoveries, Llc Video compression repository and model reuse
US9743078B2 (en) 2004-07-30 2017-08-22 Euclid Discoveries, Llc Standards-compliant model-based video encoding and decoding
US9532069B2 (en) 2004-07-30 2016-12-27 Euclid Discoveries, Llc Video compression repository and model reuse
US8942283B2 (en) * 2005-03-31 2015-01-27 Euclid Discoveries, Llc Feature-based hybrid video codec comparing compression efficiency of encodings
US8964835B2 (en) 2005-03-31 2015-02-24 Euclid Discoveries, Llc Feature-based video compression
US20110182352A1 (en) * 2005-03-31 2011-07-28 Pace Charles P Feature-Based Video Compression
US8908766B2 (en) 2005-03-31 2014-12-09 Euclid Discoveries, Llc Computer method and apparatus for processing image data
US20100008424A1 (en) * 2005-03-31 2010-01-14 Pace Charles P Computer method and apparatus for processing image data
US9578345B2 (en) 2005-03-31 2017-02-21 Euclid Discoveries, Llc Model-based video encoding and decoding
US9106977B2 (en) 2006-06-08 2015-08-11 Euclid Discoveries, Llc Object archival systems and methods
US8553782B2 (en) 2007-01-23 2013-10-08 Euclid Discoveries, Llc Object archival systems and methods
US8842154B2 (en) 2007-01-23 2014-09-23 Euclid Discoveries, Llc Systems and methods for providing personal video services
US20100086062A1 (en) * 2007-01-23 2010-04-08 Euclid Discoveries, Llc Object archival systems and methods
CN101645052A (en) * 2008-08-06 2010-02-10 中兴通讯股份有限公司 Quick direct memory access (DMA) ping-pong caching method
CN101820543A (en) * 2010-03-30 2010-09-01 北京蓝色星河软件技术发展有限公司 Ping-pong structure fast data access method combined with direct memory access (DMA)
US10097851B2 (en) 2014-03-10 2018-10-09 Euclid Discoveries, Llc Perceptual optimization for model-based video encoding
US9621917B2 (en) 2014-03-10 2017-04-11 Euclid Discoveries, Llc Continuous block tracking for temporal prediction in video encoding
US10091507B2 (en) 2014-03-10 2018-10-02 Euclid Discoveries, Llc Perceptual optimization for model-based video encoding

Also Published As

Publication number Publication date
US7389365B2 (en) 2008-06-17
US6795875B2 (en) 2004-09-21
US20050204074A1 (en) 2005-09-15
US20050204073A1 (en) 2005-09-15
US20050038935A1 (en) 2005-02-17
US6976098B2 (en) 2005-12-13
US7089336B2 (en) 2006-08-08

Similar Documents

Publication Publication Date Title
US7089336B2 (en) Arbitrating and servicing polychronous data requests in Direct Memory Access
EP0863462B1 (en) Processor capable of efficiently executing many asynchronous event tasks
US7093256B2 (en) Method and apparatus for scheduling real-time and non-real-time access to a shared resource
KR100943446B1 (en) Method of processing data of at least one data stream, data storage system and method of use thereof
US7565462B2 (en) Memory access engine having multi-level command structure
CN1125491A (en) Video peripheral for a computer
JP4519082B2 (en) Information processing method, moving image thumbnail display method, decoding device, and information processing device
US20100295859A1 (en) Virtualization of graphics resources and thread blocking
JPH06208526A (en) Data communication method and data processing system by way of bas and bridge
CN1054160A (en) Communications interface adapter
JPH08228200A (en) Arbiter to be used at the time of controlling operation including data transfer and method of arbitrating operation including data transfer
WO2009130871A1 (en) Decoding device
EP1046251A1 (en) A method of maintaining a minimum level of data quality while allowing bandwidth-dependent quality enhancement
US7263587B1 (en) Unified memory controller
CN101527844A (en) Method for block execution of data to be decoded
US7861012B2 (en) Data transmitting device and data transmitting method
US5911152A (en) Computer system and method for storing data in a buffer which crosses page boundaries utilizing beginning and ending buffer pointers
US6816923B1 (en) Arbitrating and servicing polychronous data requests in direct memory access
US6313766B1 (en) Method and apparatus for accelerating software decode of variable length encoded information
EP0802683A2 (en) Data priority processing for MPEG system
EP1267272B1 (en) A specialized memory device
JP2000092469A (en) Digital reception terminal
JPH08314793A (en) Memory access control method and semiconductor integrated circuit and image decoding device using this method
US20030233338A1 (en) Access to a collective resource
EP1351514A2 (en) Memory acces engine having multi-level command structure

Legal Events

Date Code Title Description
AS Assignment

Owner name: WEBTV NETWORKS, INC. (MICROSOFT), CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:GRAY, DONALD M. III;AHSAN, AGHA ZAIGHAM;REEL/FRAME:011881/0485

Effective date: 20010530

AS Assignment

Owner name: MICROSOFT CORPORATION, WASHINGTON

Free format text: MERGER;ASSIGNOR:WEBTV NETWORKS, INC.;REEL/FRAME:015029/0853

Effective date: 20020628

STCF Information on status: patent grant

Free format text: PATENTED CASE

AS Assignment

Owner name: MICROSOFT CORPORATION, WASHINGTON

Free format text: MERGER;ASSIGNOR:WEBTV NETWORKS, INC.;REEL/FRAME:015770/0323

Effective date: 20020628

FPAY Fee payment

Year of fee payment: 4

FPAY Fee payment

Year of fee payment: 8

AS Assignment

Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT CORPORATION;REEL/FRAME:034541/0001

Effective date: 20141014

FPAY Fee payment

Year of fee payment: 12