US20040111532A1 - Method, system, and program for adding operations to structures - Google Patents

Method, system, and program for adding operations to structures Download PDF

Info

Publication number
US20040111532A1
Authority
US
United States
Prior art keywords
data packet
operations
priority level
designation
level associated
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US10/314,473
Other versions
US7177913B2
Inventor
Patrick Connor
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Intel Corp
Original Assignee
Intel Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Intel Corp filed Critical Intel Corp
Priority to US10/314,473
Assigned to INTEL CORPORATION (assignor: CONNOR, PATRICK L.)
Publication of US20040111532A1
Application granted
Publication of US7177913B2
Status: Expired - Fee Related
Adjusted expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/50: Queue scheduling

Definitions

  • the present invention relates to a method, system, and program for adding operations identifying data packets to structures.
  • components are coupled to each other via one or more buses.
  • a variety of components can be coupled to a bus, thereby providing intercommunication between all of the various components.
  • An example of a bus that is used for data transfer between a memory and another device is the peripheral component interconnect (PCI) bus.
  • PCI peripheral component interconnect
  • DMA transfers: In order to relieve a processor of the burden of controlling the movement of blocks of data inside a computer, direct memory access (DMA) transfers are commonly used. With DMA transfers, data can be transferred from one memory location to another, or from a memory location to an input/output (I/O) device (and vice versa), without having to go through the processor. Additional bus efficiency is achieved by allowing some of the devices connected to the PCI bus to be DMA masters.
  • I/O controllers such as gigabit Ethernet media access control (MAC) network controllers
  • MAC gigabit Ethernet media access control
  • a host computer includes an Input/Output (I/O) controller for controlling the transfer of data packets to or from, for example, other computers or peripheral devices across a network, such as an Ethernet local area network (LAN).
  • I/O Input/Output
  • LAN Ethernet local area network
  • IEEE Institute of Electrical and Electronics Engineers
  • a device driver for the I/O controller: To read a data buffer of a memory using DMA transfers, such as when the data has to be retrieved from memory in response to a transmit command from an operating system so that the data can be transmitted by the I/O controller, a device driver for the I/O controller prepares the data buffer.
  • a transmit command may be any indication that notifies the device driver of a data packet to be transferred, for example, over a network.
  • the device driver writes one or more descriptors (i.e., that include the data buffer's physical memory address and length, etc.) to a command register of the I/O controller to inform the I/O controller that one or more descriptors are ready to be processed by the I/O controller.
  • the I/O controller then DMA transfers the one or more descriptors from memory to another buffer and obtains the data buffer's physical memory address, length, etc. After the I/O controller has processed the one or more descriptors, the I/O controller can DMA transfer the contents/data in the data buffer.
  • a priority may be assigned to the data packets. For instance, for an Ethernet LAN, data packets are assigned a priority ranging from level 0 to 7 , with 7 reflecting the highest priority level.
  • Some I/O controllers maintain one queue for storing high priority data packets that are waiting to be transferred and another queue for storing low priority data packets that are waiting to be transferred. Then, data packets are selected from the two queues and transferred with a round robin technique (i.e., a data packet from the high priority queue is selected for transfer, then a data packet from the low priority queue is selected for transfer, a data packet from the high priority queue is selected for transfer, etc.) by the I/O controller.
  • a round robin technique i.e., a data packet from the high priority queue is selected for transfer, then a data packet from the low priority queue is selected for transfer, a data packet from the high priority queue is selected for transfer, etc.
  • low priority data packets may be transferred before queued high priority data packets. For example, if the majority of data packets are high priority, such as streaming audio or video data, the high priority queue may have several high priority data packets waiting to be transferred. If a low priority data packet is then stored in the low priority queue, which has few or no other pending data packets, the round robin selection of data packets for transfer would select the low priority data packet for transmission before selecting a pending high priority data packet.
  • FIG. 1 illustrates a computing environment in which aspects of the invention may be implemented.
  • FIG. 2 illustrates a format of a data packet in accordance with certain embodiments of the invention.
  • FIG. 3 illustrates logic implemented in a device driver in accordance with certain embodiments of the invention.
  • FIG. 4 illustrates an example set of transfer operations for high and low priority data packets being placed into queues in accordance with certain embodiments of the invention.
  • FIG. 5 illustrates an example set of transfer operations for mostly high priority data packets being placed into queues in accordance with certain embodiments of the invention.
  • FIG. 6 illustrates an example set of transfer operations for mostly low priority data packets being placed into queues in accordance with certain embodiments of the invention.
  • a computer 102 includes a central processing unit (CPU) 104 , a volatile memory 106 , non-volatile storage 108 (e.g., magnetic disk drives, optical disk drives, a tape drive, etc.), an operating system 110 , and a network adapter 112 .
  • the computer 102 may comprise any computing device known in the art, such as a mainframe, server, personal computer, workstation, laptop, handheld computer, telephony device, network appliance, virtualization device, storage controller, etc.
  • the network adapter 112 includes a network protocol for implementing the physical communication layer to send and receive network packets to and from remote devices over a network 116 .
  • the network adapter 112 includes an I/O controller 122 .
  • the I/O controller 122 may comprise an Ethernet Media Access Controller (MAC) or network interface card (NIC), and it is understood that other types of network controllers, I/O controllers such as small computer system interface (SCSI) controllers, or cards may be used.
  • MAC Media Access Controller
  • NIC network interface card
  • the network 116 may comprise a Local Area Network (LAN), the Internet, a Wide Area Network (WAN), Storage Area Network (SAN), etc.
  • the network adapter 112 may implement the Ethernet protocol, token ring protocol, Fibre Channel protocol, Infiniband, Serial Advanced Technology Attachment (SATA), parallel SCSI, serial attached SCSI cable, etc., or any other network communication protocol known in the art.
  • the storage 108 may comprise an internal storage device or an attached or network accessible storage. Programs in the storage 108 are loaded into the memory 106 and executed by the CPU 104 .
  • An input device 130 is used to provide user input to the CPU 104 , and may include a keyboard, mouse, pen-stylus, microphone, touch sensitive display screen, or any other activation or input mechanism known in the art.
  • An output device 132 is capable of rendering information transferred from the CPU 104 , or other component, such as a display monitor, printer, storage, etc.
  • a device driver 118 includes network adapter 112 specific operations to communicate with the network adapter 112 and interface between the operating system 110 and the network adapter 112 .
  • the device driver 118 controls operation of the I/O controller 122 and performs other operations related to the reading of data packets from memory 106 .
  • the device driver 118 may be software that is executed by CPU 104 in memory 106 .
  • the computer 102 may include other drivers, such as a transport protocol driver 128 .
  • the transport protocol driver 128 executes in memory 106 and processes the content of messages included in the packets received at the network adapter 112 that are wrapped in a transport layer, such as TCP and/or IP, Internet Small Computer System Interface (iSCSI), Fibre Channel SCSI, parallel SCSI transport, or any other transport layer protocol known in the art.
  • a transport layer such as TCP and/or IP, Internet Small Computer System Interface (iSCSI), Fibre Channel SCSI, parallel SCSI transport, or any other transport layer protocol known in the art.
  • the device driver 118 issues operations to the I/O controller 122 .
  • an operation may be any type of information, command, etc.; for the examples described herein, the term “transfer operation” will be used to refer to an operation that provides information about data for transfer (e.g., across an Ethernet LAN).
  • Other operations e.g., a storage operation that is used to store data into a structure
  • An I/O controller 122 maintains a first structure 124 (e.g., a queue) and a second structure 126 (e.g., a queue) for storing the transfer operations.
  • the device driver 118 issues transfer operations to the I/O controller 122 and places the transfer operations in the structures 124 , 126 .
  • the transfer operations identify data packets stored in one or more data buffers 134 .
  • the I/O controller 122 processes the transfer operations in structures 124 , 126 to transfer data packets from data buffers 134 to a transfer structure 136 (e.g., a First In First Out (FIFO) queue) for transfer over, for example, network 116 .
  • a transfer structure 136 e.g., a First In First Out (FIFO) queue
  • FIG. 1: Several of the devices of FIG. 1 may be directly or indirectly coupled to a bus (not shown).
  • the device driver 118 and the I/O controller 122 may be coupled to the bus.
  • although structures/buffers 124, 126, 132, and 134 are illustrated as residing in memory 106, it is to be understood that some or all of these structures/buffers may be located in a storage unit separate from the memory 106 in certain embodiments.
  • FIG. 2 illustrates a format of a data packet 250 in accordance with certain embodiments of the invention.
  • the network packet 250 is implemented in a format understood by the network protocol, such as an Ethernet packet that would include additional Ethernet components, such as a header and error checking code (not shown).
  • a transport packet 252 is included in the network packet 250 .
  • the transport packet 252 may comprise a transport layer capable of being processed by the I/O controller 122, such as the TCP and/or IP protocol, Internet Small Computer System Interface (iSCSI) protocol, Fibre Channel SCSI, parallel SCSI transport, etc.
  • the transport packet 252 includes a priority level 254 as well as other transport layer fields, such as payload data, a header, and an error checking code.
  • the payload data includes the underlying content being transferred, e.g., commands, status, and/or data.
  • the operating system may include a device layer, such as a SCSI driver (not shown), to process the content of the payload data and access any status, commands and/or data therein.
  • the invention places transfer operations identifying data packets (e.g., descriptors identifying the physical memory address and length of the data buffers in which the data packets reside) into one of multiple (e.g., two) structures 124, 126 according to the priority level of the data packet to be placed into one of the structures 124, 126 and the number of pending data packets already stored in each of the structures. High priority data packets are placed on the structure with the fewest pending data packets, and low priority data packets are placed on the structure with the most pending data packets.
  • transfer operations identifying data packets e.g., descriptors identifying the physical memory address and length of the data buffers in which the data packets reside
  • FIG. 3 illustrates logic implemented in a device driver 118 in accordance with certain embodiments of the invention.
  • Control begins at block 300 with receipt of an operation identifying a data packet.
  • a priority level for the data packet is obtained.
  • a priority level is included with the data packet. If a priority level is not already associated with the data packet, the priority level of the data packet may be calculated in block 310 or before block 310 based on one or more factors, such as whether the payload data includes audio or video data (e.g., audio or video data may be high priority so that the audio or video data packets are sent together to avoid disruption of an audio or video stream).
  • the priority level may be associated with an alphabetic character, symbol, numeric value, or other value.
  • the priority level has a first designation (e.g., a high priority level) or a second designation (e.g., a low priority level), and if the data packet has a first designation, processing continues to block 330 , otherwise, processing continues to block 340 .
  • the first and second designations may be determined, for example, by a system administrator. For example, if there are eight priority levels, the top four priority levels (e.g., 4 , 5 , 6 , and 7 ) may be designated as high priorities, while the remaining four priority levels (e.g., 0 , 1 , 2 , and 3 ) may be treated as low priorities.
  • block 320 may be modified to determine whether the priority level falls within a range. For example, if the priority level is associated with a character from “A” through “M”, then in block 320 a determination may be made of whether the priority level of the data packet falls within “A” through “M” or “N” through “Z”.
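The designation test of block 320 can be sketched in a few lines of Python. This is only an illustration under the example splits described above: for eight numeric levels, 4 through 7 map to the first (high) designation and 0 through 3 to the second; for alphabetic levels, "A" through "M" is treated as the first range. The function name and these cut-offs are assumptions, since a system administrator could choose different designations.

```python
def designation(priority_level):
    """Map a priority level to a "first" (high) or "second" (low)
    designation, using the illustrative cut-offs described above:
    numeric levels 4-7 are "first" and 0-3 are "second"; alphabetic
    levels "A" through "M" are "first" and "N" through "Z" "second"."""
    if isinstance(priority_level, int):
        return "first" if priority_level >= 4 else "second"
    return "first" if "A" <= priority_level <= "M" else "second"
```

For example, `designation(7)` and `designation("C")` both yield the first designation, while `designation(2)` and `designation("Q")` yield the second.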
  • the operation identifying the data packet associated with the first designation (e.g., high priority) is placed in the structure with the fewest data packets. If the structures are of equal length (e.g., have an equal number of data packets), then either structure may be chosen.
  • the operation identifying the data packet associated with the second designation (e.g., low priority) is placed in the structure with the most data packets.
  • the invention ensures that data packets of similar priority are sent in the order that they were issued to the device driver 118 .
  • the transfer operations for data packets of similar priority are not necessarily stored on the same structure. That is, one structure is not designated for transfer operations for high priority data packets, while another structure is designated for transfer operations for low priority data packets.
  • the structure that is currently the shortest will change as transfer operations identifying data packets are added to the structure.
  • the transfer operations for high priority data packets are added to the available structures in a round robin manner. This preserves the data packet order and does not allow subsequent transfer operations to bypass any of the transfer operations for high priority packets that are being added to the structures.
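The placement policy of blocks 330 and 340, together with the round-robin handling of ties for high priority operations, could be sketched as follows. This is a minimal Python sketch, not the patented implementation; the class and method names are assumptions.

```python
from collections import deque

class TransferQueues:
    """Sketch of the placement policy: a high priority transfer
    operation goes to the structure with the fewest pending
    operations, a low priority operation to the one with the most.
    Ties for high priority are broken in round-robin order, which
    preserves the order of high priority packets."""

    def __init__(self, n_queues=2):
        self.queues = [deque() for _ in range(n_queues)]
        self._rr = 0  # round-robin cursor for high-priority ties

    def place(self, op, high_priority):
        """Append op to a queue per the policy; return the queue index."""
        lengths = [len(q) for q in self.queues]
        if high_priority:
            target_len = min(lengths)
            if lengths.count(target_len) > 1:
                # Tie: take the next shortest queue in rotation.
                while len(self.queues[self._rr]) != target_len:
                    self._rr = (self._rr + 1) % len(self.queues)
                idx = self._rr
                self._rr = (self._rr + 1) % len(self.queues)
            else:
                idx = lengths.index(target_len)
        else:
            idx = lengths.index(max(lengths))
        self.queues[idx].append(op)
        return idx
```

Replaying the FIG. 4 arrivals (three low priority operations interleaved with two high priority ones) leaves the low priority operations together on one queue and the high priority operations on the other.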
  • FIG. 4 illustrates an example set of transfer operations for high and low priority data packets being placed into queues in accordance with certain embodiments of the invention.
  • the device driver 118 inserts a set of transfer operations 400 identifying data packets into queues 410 , 420 for access by the I/O controller 122 .
  • transfer operations are stored in the queues, although data packets may be stored as well.
  • a number following “Data Packet” indicates the order of receipt of the transfer operation for the data packet of the specified priority level.
  • Low Priority Data Packet 0, Data Packet 1, and Data Packet 2 are transfer operations that are received in the order 0-1-2 and have low priority.
  • High Priority Data Packet 0 and Data Packet 1 are transfer operations that are received in the order of 0 - 1 and have high priority.
  • Low Priority Data Packet 0 identifies the first transfer operation received at the I/O controller 122 for transmission.
  • Low Priority Data Packet 0 may be placed into either queue 410 or 420 .
  • Low Priority Data Packet 0 is placed into queue 1 410 .
  • Low Priority Data Packet 1 is received next and, since the data packet identified by this transfer operation has a low priority, the transfer operation is placed into the queue with most transfer operations (i.e., the longer queue), which is queue 1 410 .
  • High Priority Data Packet 0 is placed into the queue with the least transfer operations (i.e., the shorter queue), which is queue 2 420 .
  • High Priority Data Packet 0 is likely to be transferred before Low Priority Data Packet 1 . That is, Low Priority Data Packet 0 will be removed from queue 1 410 and processed, then High Priority Data Packet 0 will be removed from queue 2 420 and processed, before Low Priority Data Packet 1 is removed and processed. This depends on how long it takes to process Low Priority Data Packet 0 relative to when High Priority Data Packet 0 is added to queue 2 420 .
  • Low Priority Data Packet 2 is received and is placed into the longer queue 1 410
  • High Priority Data Packet 1 is placed into the shorter queue 2 420 .
  • the data packets identified by the transfer operations are likely to be transferred over the network 116 in the following order: Low Priority Data Packet 0, High Priority Data Packet 0, Low Priority Data Packet 1, High Priority Data Packet 1, Low Priority Data Packet 2.
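The FIG. 4 scenario can be replayed end to end with a short sketch. It assumes every arrival is queued before any transfer completes (the text notes the actual order depends on timing) and that ties send an operation to queue 1, consistent with the figures; the function name is an assumption.

```python
from collections import deque

def transfer_order(arrivals):
    """Place each (name, is_high_priority) arrival on one of two
    queues (high -> shorter queue, low -> longer queue, ties -> queue
    1), then drain the queues round-robin as the I/O controller
    would, skipping a queue whenever it is empty."""
    q1, q2 = deque(), deque()
    for name, high in arrivals:
        if high:
            q = q1 if len(q1) <= len(q2) else q2   # shorter queue
        else:
            q = q1 if len(q1) >= len(q2) else q2   # longer queue
        q.append(name)
    order, queues, turn = [], (q1, q2), 0
    while q1 or q2:
        if queues[turn % 2]:
            order.append(queues[turn % 2].popleft())
        turn += 1
    return order

# FIG. 4 arrival order: (name, is_high_priority)
fig4 = [("Low 0", False), ("Low 1", False), ("High 0", True),
        ("Low 2", False), ("High 1", True)]
```

For the FIG. 4 arrivals this yields Low 0, High 0, Low 1, High 1, Low 2; the same function reproduces the orders described for FIG. 5 and FIG. 6 below.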
  • FIG. 5 illustrates an example set of transfer operations for mostly high priority data packets being placed into queues in accordance with certain embodiments of the invention.
  • the device driver 118 inserts a set of transfer operations 500 identifying data packets into queues 510 , 520 for access by the I/O controller 122 .
  • High Priority Data Packet 0 is received at the I/O controller 122 .
  • High Priority Data Packet 0 may be placed into either queue 510 , 520 .
  • High Priority Data Packet 0 is placed in queue 1 510 .
  • High Priority Data Packet 1 is then placed into the shorter queue 2 520.
  • High Priority Data Packet 2 is placed into the shorter queue 1 510 .
  • Low Priority Data Packet 0 is placed into the longer queue 1 510 .
  • High Priority Data Packet 3 is placed into the shorter queue 2 520 .
  • the data packets identified by the transfer operations are likely to be transferred over the network 116 in the following order: High Priority Data Packet 0 , High Priority Data Packet 1 , High Priority Data Packet 2 , High Priority Data Packet 3 , and Low Priority Data Packet 0 . This allows High Priority Data Packet 3 to bypass Low Priority Data Packet 0 .
  • FIG. 6 illustrates an example set of transfer operations for mostly low priority data packets being placed into queues in accordance with certain embodiments of the invention.
  • the device driver 118 inserts a set of transfer operations 600 identifying data packets into queues 610, 620 for access by the I/O controller 122.
  • Low Priority Data Packet 0 is placed into queue 1 610 .
  • Low Priority Data Packets 1, 2, 3, 4, 5, and 6 are each placed into the longer queue 1 610.
  • High Priority Data Packet 0 is placed into the shorter queue 2 620 .
  • High Priority Data Packet 0, which was requested last for transfer, may be the second data packet to be transferred.
  • the data packets identified by the transfer operations are likely to be transferred over the network 116 in the following order: Low Priority Data Packet 0 , High Priority Data Packet 0 , Low Priority Data Packet 1 , Low Priority Data Packet 2 , Low Priority Data Packet 3 , Low Priority Data Packet 4 , Low Priority Data Packet 5 , and Low Priority Data Packet 6 .
  • This allows a high priority data packet to be given preferential treatment and to bypass multiple low priority data packets.
  • the high priority data packet was sent with lower latency than the low priority data packets.
  • the described embodiments of the invention provide a method, system, and program for using multiple transfer structures (e.g., queues) of equal priority on an I/O controller to provide preferential treatment of high priority data packets that are to be transferred, for example, across a network.
  • multiple transfer structures e.g., queues
  • the described techniques for maintaining information on network components may be implemented as a method, apparatus or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof.
  • “article of manufacture” refers to code or logic implemented in hardware logic (e.g., an integrated circuit chip, Programmable Gate Array (PGA), Application Specific Integrated Circuit (ASIC), etc.) or a computer readable medium, such as a magnetic storage medium (e.g., hard disk drives, floppy disks, tape, etc.), optical storage (CD-ROMs, optical disks, etc.), or volatile and non-volatile memory devices (e.g., EEPROMs, ROMs, PROMs, RAMs, DRAMs, SRAMs, flash, firmware, programmable logic, etc.).
  • Code in the computer readable medium is accessed and executed by a processor.
  • the code in which preferred embodiments are implemented may further be accessible through a transmission media or from a file server over a network.
  • the article of manufacture in which the code is implemented may comprise a transmission media, such as a network transmission line, wireless transmission media, signals propagating through space, radio waves, infrared signals, etc.
  • the “article of manufacture” may comprise the medium in which the code is embodied.
  • the “article of manufacture” may comprise a combination of hardware and software components in which the code is embodied, processed, and executed.
  • the article of manufacture may comprise any information bearing medium known in the art.
  • the data packets were transferred over a network 116 .
  • the data packets may be transferred to local storage, to a peripheral device, or to another device without being transferred over the network 116 .
  • two structures were described for storing data packets.
  • more than two structures may be maintained, with high priority data packets added to the shortest structure and low priority data packets added to the longest structure.
  • data packets with a certain priority level e.g., low priority
  • transfer operations were added to structures.
  • any type of operation (e.g., a storage operation that is used to store data into a structure) may be added.
  • FIG. 3 describes specific logic operations occurring in a particular order. In alternative embodiments, certain of the logic operations may be performed in a different order, modified or removed. Moreover, steps may be added to the above described logic and still conform to the described embodiments. Further, logic operations described herein may occur sequentially or certain logic operations may be processed in parallel, or logic operations described as performed by a single process may be performed by distributed processes.

Abstract

Disclosed is a method, system, and program for adding an operation to a structure. If a priority level associated with a data packet identified by the operation has a first designation, the operation is placed into a first structure with the fewest operations. If the priority level associated with the data packet identified by the operation has a second designation, the operation is placed into a second structure with the most operations.

Description

    BACKGROUND OF THE INVENTION
  • 1. Field of the Invention [0001]
  • The present invention relates to a method, system, and program for adding operations identifying data packets to structures. [0002]
  • 2. Description of the Related Art [0003]
  • In computer systems, components are coupled to each other via one or more buses. A variety of components can be coupled to a bus, thereby providing intercommunication between all of the various components. An example of a bus that is used for data transfer between a memory and another device is the peripheral component interconnect (PCI) bus. [0004]
  • In order to relieve a processor of the burden of controlling the movement of blocks of data inside of a computer, direct memory access (DMA) transfers are commonly used. With DMA transfers, data can be transferred from one memory location to another memory location, or from a memory location to an input/output (I/O) device (and vice versa), without having to go through the processor. Additional bus efficiency is achieved by allowing some of the devices connected to the PCI bus to be DMA masters. [0005]
  • When transferring data using DMA techniques, high performance Input/Output (I/O) controllers, such as gigabit Ethernet media access control (MAC) network controllers, may be used. In particular, a host computer includes an I/O controller for controlling the transfer of data packets to or from, for example, other computers or peripheral devices across a network, such as an Ethernet local area network (LAN). The term “Ethernet” is a reference to a standard for transmission of data packets maintained by the Institute of Electrical and Electronics Engineers (IEEE), and one version of the Ethernet standard is IEEE std. 802.3, published Mar. 8, 2002. [0006]
  • To read a data buffer of a memory using DMA transfers, such as when the data has to be retrieved from memory in response to a transmit command from an operating system so that the data can be transmitted by the I/O controller, a device driver for the I/O controller prepares the data buffer. A transmit command may be any indication that notifies the device driver of a data packet to be transferred, for example, over a network. The device driver writes one or more descriptors (i.e., that include the data buffer's physical memory address and length, etc.) to a command register of the I/O controller to inform the I/O controller that one or more descriptors are ready to be processed by the I/O controller. The I/O controller then DMA transfers the one or more descriptors from memory to another buffer and obtains the data buffer's physical memory address, length, etc. After the I/O controller has processed the one or more descriptors, the I/O controller can DMA transfer the contents/data in the data buffer. [0007]
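The descriptor hand-off described in this paragraph might look roughly like the following Python stand-ins. The class and field names are hypothetical (the text names only the buffer's physical address and length), and a real driver performs memory-mapped register writes rather than method calls.

```python
from dataclasses import dataclass

@dataclass
class Descriptor:
    """Hypothetical transmit descriptor holding the fields the text
    names: the data buffer's physical memory address and its length.
    Real descriptors also carry command/status bits not shown here."""
    buffer_phys_addr: int
    length: int

class CommandRegister:
    """In-memory stand-in for the I/O controller's command register."""
    def __init__(self):
        self.descriptors_ready = 0
    def write(self, count):
        self.descriptors_ready += count

# Driver side: prepare descriptors for the data buffers, then inform
# the controller that they are ready to be DMA-transferred.
reg = CommandRegister()
descriptors = [Descriptor(0x1000, 1514), Descriptor(0x2000, 60)]
reg.write(len(descriptors))
```

After the register write, the controller would DMA-transfer the descriptors, read out each buffer's address and length, and finally DMA-transfer the buffer contents themselves.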
  • A priority may be assigned to the data packets. For instance, for an Ethernet LAN, data packets are assigned a priority ranging from level 0 to 7, with 7 reflecting the highest priority level. [0008]
  • Some I/O controllers maintain one queue for storing high priority data packets that are waiting to be transferred and another queue for storing low priority data packets that are waiting to be transferred. Then, data packets are selected from the two queues and transferred with a round robin technique (i.e., a data packet from the high priority queue is selected for transfer, then a data packet from the low priority queue is selected for transfer, a data packet from the high priority queue is selected for transfer, etc.) by the I/O controller. Moreover, it is possible that low priority data packets may be transferred before queued high priority data packets. For example, if the majority of data packets are high priority, such as streaming audio or video data, the high priority queue may have several high priority data packets waiting to be transferred. If a low priority data packet is then stored in the low priority queue, which has few or no other pending data packets, the round robin selection of data packets for transfer would select the low priority data packet for transmission before selecting a pending high priority data packet. [0009]
  • This leads to a disruption of the transfer of high priority data packets (e.g., streaming audio or video data or protocol control packets). Therefore, there is a need for an improved technique for processing data packets in queues.[0010]
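The starvation problem described above is easy to reproduce in a few lines: with one dedicated queue per priority class and strict round-robin service, a low priority packet that arrives after four queued high priority packets is still sent second. This is an illustrative sketch of the prior-art scheme, not code from the patent.

```python
from collections import deque

# Prior-art scheme: one queue per priority class, serviced
# round-robin (high, low, high, low, ...), skipping empty queues.
high = deque(["High 0", "High 1", "High 2", "High 3"])  # e.g. a video stream
low = deque(["Low 0"])  # a late-arriving low priority packet

order, turn = [], 0
while high or low:
    q = high if turn % 2 == 0 else low
    if q:
        order.append(q.popleft())
    turn += 1
# "Low 0" is transmitted second, ahead of three still-queued
# high priority packets.
```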
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Referring now to the drawings in which like reference numbers represent corresponding parts throughout: [0011]
  • FIG. 1 illustrates a computing environment in which aspects of the invention may be implemented. [0012]
  • FIG. 2 illustrates a format of a data packet in accordance with certain embodiments of the invention. [0013]
  • FIG. 3 illustrates logic implemented in a device driver in accordance with certain embodiments of the invention. [0014]
  • FIG. 4 illustrates an example set of transfer operations for high and low priority data packets being placed into queues in accordance with certain embodiments of the invention. [0015]
  • FIG. 5 illustrates an example set of transfer operations for mostly high priority data packets being placed into queues in accordance with certain embodiments of the invention. [0016]
  • FIG. 6 illustrates an example set of transfer operations for mostly low priority data packets being placed into queues in accordance with certain embodiments of the invention.[0017]
  • DETAILED DESCRIPTION
  • In the following description, reference is made to the accompanying drawings which form a part hereof and which illustrate several embodiments of the present invention. It is understood that other embodiments may be utilized and structural and operational changes may be made without departing from the scope of the present invention. [0018]
  • FIG. 1 illustrates a computing environment in which aspects of the invention may be implemented. A [0019] computer 102 includes a central processing unit (CPU) 104, a volatile memory 106, non-volatile storage 108 (e.g., magnetic disk drives, optical disk drives, a tape drive, etc.), an operating system 110, and a network adapter 112. The computer 102 may comprise any computing device known in the art, such as a mainframe, server, personal computer, workstation, laptop, handheld computer, telephony device, network appliance, virtualization device, storage controller, etc.
  • Any [0020] CPU 104 and operating system 110 known in the art may be used. The network adapter 112 includes a network protocol for implementing the physical communication layer to send and receive network packets to and from remote devices over a network 116. The network adapter 112 includes an I/O controller 122. In certain embodiments, the I/O controller 122 may comprise an Ethernet Media Access Controller (MAC) or network interface card (NIC), and it is understood that other types of network controllers, I/O controllers (such as Small Computer System Interface (SCSI) controllers), or cards may be used.
  • The [0021] network 116 may comprise a Local Area Network (LAN), the Internet, a Wide Area Network (WAN), Storage Area Network (SAN), etc. In certain embodiments, the network adapter 112 may implement the Ethernet protocol, token ring protocol, Fibre Channel protocol, Infiniband, Serial Advanced Technology Attachment (SATA), parallel SCSI, Serial Attached SCSI (SAS), etc., or any other network communication protocol known in the art.
  • The [0022] storage 108 may comprise an internal storage device or an attached or network accessible storage. Programs in the storage 108 are loaded into the memory 106 and executed by the CPU 104. An input device 130 is used to provide user input to the CPU 104, and may include a keyboard, mouse, pen-stylus, microphone, touch sensitive display screen, or any other activation or input mechanism known in the art. An output device 132 is capable of rendering information transferred from the CPU 104, or other component, such as a display monitor, printer, storage, etc.
  • A [0023] device driver 118 includes network adapter 112 specific operations to communicate with the network adapter 112 and interface between the operating system 110 and the network adapter 112. In particular, the device driver 118 controls operation of the I/O controller 122 and performs other operations related to the reading of data packets from memory 106. The device driver 118 may be software that is executed by CPU 104 in memory 106.
  • In addition to the [0024] device driver 118, the computer 102 may include other drivers, such as a transport protocol driver 128. The transport protocol driver 128 executes in memory 106 and processes the content of messages included in the packets received at the network adapter 112 that are wrapped in a transport layer, such as TCP and/or IP, Internet Small Computer System Interface (iSCSI), Fibre Channel SCSI, parallel SCSI transport, or any other transport layer protocol known in the art.
  • In certain embodiments, the [0025] device driver 118 issues operations to the I/O controller 122. Although an operation may be any type of information, command, etc., for examples described herein, the term “transfer operation” will be used to refer to an operation that provides information about data for transfer (e.g., across an Ethernet LAN). Other operations (e.g., a storage operation that is used to store data into a structure) fall within the scope of the invention. An I/O controller 122 maintains a first structure 124 (e.g., a queue) and a second structure 126 (e.g., a queue) for storing the transfer operations. In certain embodiments, the device driver 118 issues transfer operations to the I/O controller 122 and places the transfer operations in the structures 124, 126. The transfer operations identify data packets stored in one or more data buffers 134. The I/O controller 122 processes the transfer operations in structures 124, 126 to transfer data packets from data buffers 134 to a transfer structure 136 (e.g., a First In First Out (FIFO) queue) for transfer over, for example, network 116.
  • Several of the devices of FIG. 1 may be directly or indirectly coupled to a bus (not shown). For instance, the [0026] device driver 118 and the I/O controller 122 may be coupled to the bus.
  • Although structures/[0027] buffers 124, 126, 134, and 136 are illustrated as residing in memory 106, it is to be understood that some or all of these structures/buffers may be located in a storage unit separate from the memory 106 in certain embodiments.
  • FIG. 2 illustrates a format of a [0028] data packet 250 in accordance with certain embodiments of the invention. The network packet 250 is implemented in a format understood by the network protocol, such as an Ethernet packet that would include additional Ethernet components, such as a header and error checking code (not shown). A transport packet 252 is included in the network packet 250. The transport packet 252 may comprise a transport layer capable of being processed by the I/O controller 122, such as the TCP and/or IP protocol, Internet Small Computer System Interface (iSCSI) protocol, Fibre Channel SCSI, parallel SCSI transport, etc. The transport packet 252 includes a priority level 254 as well as other transport layer fields, such as payload data, a header, and an error checking code. The payload data includes the underlying content being transferred, e.g., commands, status and/or data. The operating system may include a device layer, such as a SCSI driver (not shown), to process the content of the payload data and access any status, commands and/or data therein.
  • The invention places transfer operations identifying data packets (e.g., descriptors identifying the physical memory address and length of the data buffers in which the data packets reside) into one of multiple (e.g., two) [0029] structures 124, 126 according to the priority level of the data packet to be placed into one of the structures 124, 126 and the number of pending data packets already stored in each of the structures. High priority data packets are placed on the structure with the fewest number of pending data packets, and low priority data packets are placed on the structure with the highest number of pending data packets.
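  • The placement rule above can be sketched in a few lines of Python. This is an illustrative model only: the queue names and string operations are hypothetical stand-ins, and a real descriptor would hold the physical memory address and length of a data buffer rather than a label.

```python
def place_operation(queues, operation, high_priority):
    """Place a transfer operation according to the priority rule:
    high priority -> structure with the fewest pending operations,
    low priority  -> structure with the most pending operations.
    On ties, either structure may be chosen (here: the first one)."""
    if high_priority:
        target = min(queues, key=len)   # shortest structure
    else:
        target = max(queues, key=len)   # longest structure
    target.append(operation)

# Example: two queues, as in the described embodiments.
queue_1, queue_2 = [], []
place_operation([queue_1, queue_2], "Low Priority Data Packet 0", False)
place_operation([queue_1, queue_2], "High Priority Data Packet 0", True)
```

Note that nothing in the rule binds a priority level to a particular structure; the same queue may hold high priority operations at one moment and low priority operations the next.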
  • FIG. 3 illustrates logic implemented in a [0030] device driver 118 in accordance with certain embodiments of the invention. Control begins at block 300 with receipt of an operation identifying a data packet. In block 310, a priority level for the data packet is obtained. In certain embodiments, a priority level is included with the data packet. If a priority level is not already associated with the data packet, the priority level of the data packet may be calculated in block 310 or before block 310 based on one or more factors, such as whether the payload data includes audio or video data (e.g., audio or video data may be high priority so that the audio or video data packets are sent together to avoid disruption of an audio or video stream). In certain embodiments, the priority level may be associated with an alphabetic character, symbol, numeric value, or other value.
  • In [0031] block 320, it is determined whether the priority level has a first designation (e.g., a high priority level) or a second designation (e.g., a low priority level), and if the data packet has a first designation, processing continues to block 330, otherwise, processing continues to block 340. The first and second designations may be determined, for example, by a system administrator. For example, if there are eight priority levels, the top four priority levels (e.g., 4, 5, 6, and 7) may be designated as high priorities, while the remaining four priority levels (e.g., 0, 1, 2, and 3) may be treated as low priorities. In certain embodiments, block 320 may be modified to determine whether the priority level falls within a range. For example, if the priority level is associated with a character from “A” through “M”, then in block 320 a determination may be made of whether the priority level of the data packet falls within “A” through “M” or “N” through “Z”.
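  • The two-designation split described above might be modeled as follows. The eight-level split (levels 4-7 high, 0-3 low) and the "A"-"M"/"N"-"Z" ranges come from the examples in the text; the function names, and which character range maps to which designation, are assumptions for illustration.

```python
def designation_from_level(level):
    """Map an eight-level numeric priority (0-7) to a designation:
    levels 4-7 are the first designation (high), 0-3 the second (low)."""
    return "high" if level >= 4 else "low"

def designation_from_char(ch):
    """Map an alphabetic priority to a designation by range:
    'A'-'M' -> first designation, 'N'-'Z' -> second designation."""
    return "first" if "A" <= ch.upper() <= "M" else "second"
```

As the text notes, where the boundary falls (and how many levels count as "high") is a policy choice, for example one made by a system administrator.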
  • In [0032] block 330, the operation identifying the data packet associated with the first designation (e.g., high priority) is placed in the structure with the fewest data packets. If the structures are of equal length (e.g., have an equal number of data packets), then either structure may be chosen. In block 340, the operation identifying the data packet associated with the second designation (e.g., low priority) is placed in the structure with the greatest number of data packets.
  • When the operations are transfer operations, the invention ensures that data packets of similar priority are sent in the order that they were issued to the [0033] device driver 118. The transfer operations for data packets of similar priority are not necessarily stored on the same structure. That is, one structure is not designated for transfer operations for high priority data packets, while another structure is designated for transfer operations for low priority data packets.
  • If multiple high priority data packets are requested, the structure that is currently the shortest will change as transfer operations identifying data packets are added to the structure. In this case, the transfer operations for high priority data packets are added to the available structures in a round robin manner. This preserves the data packet order and does not allow subsequent transfer operations to bypass any of the transfer operations for high priority packets that are being added to the structures. [0034]
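  • The round-robin effect described above falls out of the shortest-structure rule automatically, with no separate alternation logic. A small sketch (the queue names are hypothetical):

```python
q1, q2 = [], []
for name in ["HP0", "HP1", "HP2", "HP3"]:
    # High priority: place on the currently shortest structure
    # (first structure on ties). Each placement lengthens that
    # structure, so consecutive high priority operations alternate.
    (q1 if len(q1) <= len(q2) else q2).append(name)
```

Because the operations alternate rather than pile onto one structure, their original order is preserved when the structures are serviced alternately.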
  • Subsequent transfer operations for low priority traffic are added to the structures behind these transfer operations for high priority requests, going to the longer of the structures. If the structures are of equal length, then either structure is chosen, and the chosen structure becomes the longer structure. [0035]
  • FIG. 4 illustrates an example set of transfer operations for high and low priority data packets being placed into queues in accordance with certain embodiments of the invention. The [0036] device driver 118 inserts a set of transfer operations 400 identifying data packets into queues 410, 420 for access by the I/O controller 122. In the examples of FIGS. 4, 5, and 6, for ease of reference, it will be said that transfer operations are stored in the queues, although data packets may be stored as well. Also for ease of reference, a number following "Data Packet" indicates the order of receipt of the transfer operation for the data packet of the specified priority level. For example, Low Priority Data Packet 0, Data Packet 1, and Data Packet 2 are transfer operations that are received in the order 0-1-2 and have low priority, and High Priority Data Packet 0 and Data Packet 1 are transfer operations that are received in the order 0-1 and have high priority.
  • In this example, Low [0037] Priority Data Packet 0 is the first transfer operation received at the I/O controller 122 for transmission. Low Priority Data Packet 0 may be placed into either queue 410 or 420. For this example, Low Priority Data Packet 0 is placed into queue 1 410. Low Priority Data Packet 1 is received next and, since the data packet identified by this transfer operation has a low priority, the transfer operation is placed into the queue with the most transfer operations (i.e., the longer queue), which is queue 1 410. Then, High Priority Data Packet 0 is placed into the queue with the fewest transfer operations (i.e., the shorter queue), which is queue 2 420. High Priority Data Packet 0 is likely to be transferred before Low Priority Data Packet 1. That is, Low Priority Data Packet 0 will be removed from queue 1 410 and processed, then High Priority Data Packet 0 will be removed from queue 2 420 and processed, before Low Priority Data Packet 1 is removed and processed. This depends on how long it takes to process Low Priority Data Packet 0 relative to when High Priority Data Packet 0 is added to queue 2 420.
  • Then, Low [0038] Priority Data Packet 2 is received and is placed into the longer queue 1 410, and High Priority Data Packet 1 is placed into the shorter queue 2 420. For this example, the data packets identified by the transfer operations are likely to be transferred over the network 116 in the following order: Low Priority Data Packet 0, High Priority Data Packet 0, Low Priority Data Packet 1, High Priority Data Packet 1, Low Priority Data Packet 2.
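  • The FIG. 4 placements can be reproduced with the same shortest/longest rule, choosing the first structure on ties to match the example's choice of queue 1. The code below is an illustrative model, not the driver implementation, and the packet labels are shorthand.

```python
def place(queues, op, high):
    # High priority -> fewest pending entries; low -> most pending entries.
    # Python's min/max pick the first queue on ties.
    (min if high else max)(queues, key=len).append(op)

queue_1, queue_2 = [], []
arrivals = [("Low 0", False), ("Low 1", False), ("High 0", True),
            ("Low 2", False), ("High 1", True)]
for op, high in arrivals:
    place([queue_1, queue_2], op, high)

# queue_1 now holds the three low priority operations in arrival order,
# and queue_2 the two high priority operations in arrival order.
```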
  • Relative to the order in which the transfer operations were requested, the high priority transfer operations have bypassed lower priority transfer operations. With longer queue lengths, it would be possible for high priority operations to bypass several low priority operations. When queue lengths are long, it is important for high priority data packets to receive preferential treatment in order to avoid high latency. [0039]
  • FIG. 5 illustrates an example set of transfer operations for mostly high priority data packets being placed into queues in accordance with certain embodiments of the invention. The [0040] device driver 118 inserts a set of transfer operations 500 identifying data packets into queues 510, 520 for access by the I/O controller 122. In this example, High Priority Data Packet 0 is received at the I/O controller 122. High Priority Data Packet 0 may be placed into either queue 510 or 520. In this example, High Priority Data Packet 0 is placed in queue 1 510. High Priority Data Packet 1 is then placed into the shorter queue 2 520. High Priority Data Packet 2 is placed into the shorter queue 1 510. Low Priority Data Packet 0 is placed into the longer queue 1 510. High Priority Data Packet 3 is placed into the shorter queue 2 520.
  • In this example, the data packets identified by the transfer operations are likely to be transferred over the [0041] network 116 in the following order: High Priority Data Packet 0, High Priority Data Packet 1, High Priority Data Packet 2, High Priority Data Packet 3, and Low Priority Data Packet 0. This allows High Priority Data Packet 3 to bypass Low Priority Data Packet 0.
  • FIG. 6 illustrates an example set of transfer operations for mostly low priority data packets being placed into queues in accordance with certain embodiments of the invention. The [0042] device driver 118 inserts a set of transfer operations 600 identifying data packets into queues 610, 620 for access by the I/O controller 122. In this example, Low Priority Data Packet 0 is placed into queue 1 610. Then Low Priority Data Packets 1 through 6 are each placed into the longer queue 1 610. High Priority Data Packet 0 is placed into the shorter queue 2 620. Depending on the rate at which the transfer operations are requested and the operation processing time, High Priority Data Packet 0, which was requested last for transfer, may be the second data packet to be transferred.
  • In this example, the data packets identified by the transfer operations are likely to be transferred over the [0043] network 116 in the following order: Low Priority Data Packet 0, High Priority Data Packet 0, Low Priority Data Packet 1, Low Priority Data Packet 2, Low Priority Data Packet 3, Low Priority Data Packet 4, Low Priority Data Packet 5, and Low Priority Data Packet 6. This allows a high priority data packet to be given preferential treatment and to bypass multiple low priority data packets. Thus, the high priority data packet was sent with lower latency than the low priority data packets.
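  • Adding a removal step models the FIG. 6 behavior end to end: the single high priority packet is transmitted second, bypassing six low priority packets. The sketch below uses the same assumptions as before, plus an assumed round-robin removal policy that alternates between queues and skips a queue once it is empty.

```python
def place(queues, op, high):
    # High priority -> shortest queue; low priority -> longest queue.
    (min if high else max)(queues, key=len).append(op)

def drain_round_robin(queues):
    """Remove operations by alternating across the queues,
    skipping a queue once it is empty."""
    order, i = [], 0
    while any(queues):
        q = queues[i % len(queues)]
        if q:
            order.append(q.pop(0))
        i += 1
    return order

q1, q2 = [], []
for n in range(7):                  # Low Priority Data Packets 0-6
    place([q1, q2], f"Low {n}", False)
place([q1, q2], "High 0", True)     # arrives last, lands on the short queue

transmit_order = drain_round_robin([q1, q2])
# "High 0" is transmitted second, ahead of "Low 1" through "Low 6".
```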
  • Thus, the described embodiments of the invention provide a method, system, and program that use multiple transfer structures (e.g., queues) of equal priority on an I/O controller to provide preferential treatment of high priority data packets that are to be transferred, for example, across a network. [0044]
  • Additional Embodiment Details [0045]
  • The described techniques for maintaining information on network components may be implemented as a method, apparatus or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof. The term “article of manufacture” as used herein refers to code or logic implemented in hardware logic (e.g., an integrated circuit chip, Programmable Gate Array (PGA), Application Specific Integrated Circuit (ASIC), etc.) or a computer readable medium, such as magnetic storage medium (e.g., hard disk drives, floppy disks, tape, etc.), optical storage (CD-ROMs, optical disks, etc.), volatile and non-volatile memory devices (e.g., EEPROMs, ROMs, PROMs, RAMs, DRAMs, SRAMs, flash, firmware, programmable logic, etc.). Code in the computer readable medium is accessed and executed by a processor. The code in which preferred embodiments are implemented may further be accessible through a transmission media or from a file server over a network. In such cases, the article of manufacture in which the code is implemented may comprise a transmission media, such as a network transmission line, wireless transmission media, signals propagating through space, radio waves, infrared signals, etc. Thus, the “article of manufacture” may comprise the medium in which the code is embodied. Additionally, the “article of manufacture” may comprise a combination of hardware and software components in which the code is embodied, processed, and executed. Of course, those skilled in the art will recognize that many modifications may be made to this configuration without departing from the scope of the present invention, and that the article of manufacture may comprise any information bearing medium known in the art. [0046]
  • In the described embodiments, certain operations were performed by the [0047] device driver 118. In alternative embodiments, these operations may be performed by another device, such as the I/O controller 122 or by firmware.
  • In the described embodiments, the data packets were transferred over a [0048] network 116. In alternative embodiments, the data packets may be transferred to local storage, to a peripheral device, or to another device without being transferred over the network 116.
  • In the described embodiments, two structures were described for storing data packets. In alternative embodiments, more than two structures may be maintained and data packets with high priority are added to the shortest structure, while data packets with low priority are added to the longest structure. In yet other alternative embodiments, with two or more structures available, data packets with a certain priority level (e.g., low priority) may be placed into a buffer and added to the structures at a later time. [0049]
  • In the described embodiments, transfer operations were added to structures. In alternative embodiments, any type of operation (e.g., a storage operation that is used to store data into a structure) may be added. [0050]
  • The illustrated logic of FIG. 3 describes specific logic operations occurring in a particular order. In alternative embodiments, certain of the logic operations may be performed in a different order, modified or removed. Moreover, steps may be added to the above described logic and still conform to the described embodiments. Further, logic operations described herein may occur sequentially or certain logic operations may be processed in parallel, or logic operations described as performed by a single process may be performed by distributed processes. [0051]
  • The foregoing description of the preferred embodiments of the invention has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form disclosed. Many modifications and variations are possible in light of the above teaching. It is intended that the scope of the invention be limited not by this detailed description, but rather by the claims appended hereto. The above specification, examples and data provide a complete description of the manufacture and use of the composition of the invention. Since many embodiments of the invention can be made without departing from the spirit and scope of the invention, the invention resides in the claims hereinafter appended. [0052]

Claims (30)

What is claimed is:
1. A method for adding an operation to a structure, comprising:
if a priority level associated with a data packet identified by the operation has a first designation, placing the operation into a first structure with a least number of operations; and
if the priority level associated with the data packet identified by the operation has a second designation, placing the operation into a second structure with a most number of operations.
2. The method of claim 1, wherein the first designation comprises a high priority.
3. The method of claim 1, wherein the second designation comprises a low priority.
4. The method of claim 1, further comprising:
generating the priority level associated with the data packet based on content of the data packet.
5. The method of claim 1, wherein the first structure comprises a queue.
6. The method of claim 1, wherein the second structure comprises a queue.
7. The method of claim 1, wherein the operation is placed into the first structure or the second structure by a device driver.
8. The method of claim 1, further comprising:
removing operations from the first and second structures with a round robin technique.
9. The method of claim 1, wherein the priority level associated with the data packet has the first designation or the second designation based on a range within which the priority level falls.
10. The method of claim 1, wherein the data packet comprises an Ethernet data packet.
11. A system for adding an operation to a structure, comprising:
a processor;
memory coupled to the processor;
a first structure with a least number of operations;
a second structure with a most number of operations; and
at least one program executed by the processor in the memory to cause the processor to perform:
(i) if a priority level associated with a data packet identified by the operation has a first designation, placing the operation into the first structure; and
(ii) if the priority level associated with the data packet identified by the operation has a second designation, placing the operation into the second structure.
12. The system of claim 11, wherein the at least one program further causes the processor to perform:
generating the priority level associated with the data packet based on content of the data packet.
13. The system of claim 11, wherein the at least one program comprises a device driver program.
14. The system of claim 11, wherein the at least one program further causes the processor to perform:
removing operations from the first and second structures with a round robin technique.
15. The system of claim 11, wherein the priority level associated with the data packet has the first designation or the second designation based on a range within which the priority level falls.
16. A system, comprising:
a first structure with a least number of operations;
a second structure with a most number of operations; and
a device driver to,
(i) if a priority level associated with a data packet identified by the operation has a first designation, place the operation into the first structure; and
(ii) if the priority level associated with the data packet identified by the operation has a second designation, place the operation into the second structure.
17. The system of claim 16, wherein the device driver is capable to generate the priority level associated with the data packet based on content of the data packet.
18. The system of claim 16, further comprising:
an input/output controller to read the first structure and the second structure.
19. The system of claim 16, wherein the input/output controller is capable to remove operations from the first and second structures with a round robin technique.
20. The system of claim 16, wherein the priority level associated with the data packet has the first designation or the second designation based on a range within which the priority level falls.
21. An article of manufacture including a program for adding an operation to a structure, wherein the program causes operations to be performed, the operations comprising:
if a priority level associated with a data packet identified by the operation has a first designation, placing the operation into a first structure with a least number of operations; and
if the priority level associated with the data packet identified by the operation has a second designation, placing the operation into a second structure with a most number of operations.
22. The article of manufacture of claim 21, the operations further comprising:
generating the priority level associated with the data packet based on content of the data packet.
23. The article of manufacture of claim 21, wherein the program comprises a device driver program.
24. The article of manufacture of claim 21, wherein the operations are removed from the first and second structures with a round robin technique.
25. The article of manufacture of claim 21, wherein the priority level associated with the data packet has the first designation or the second designation based on a range within which the priority level falls.
26. An article of manufacture including an operating system and device driver for adding an operation to a structure, wherein the operating system and device driver cause operations to be performed, the operations comprising:
if a priority level associated with a data packet identified by the operation has a first designation, placing the operation into a first structure with a least number of operations; and
if the priority level associated with the data packet identified by the operation has a second designation, placing the operation into a second structure with a most number of operations.
27. The article of manufacture of claim 26, the operations further comprising:
generating the priority level associated with the data packet based on content of the data packet.
28. The article of manufacture of claim 26, wherein the program comprises a device driver program.
29. The article of manufacture of claim 26, wherein the operations are removed from the first and second structures with a round robin technique.
30. The article of manufacture of claim 26, wherein the priority level associated with the data packet has the first designation or the second designation based on a range within which the priority level falls.
US10/314,473 2002-12-05 2002-12-05 Method, system, and program for adding operations identifying data packets to structures based on priority levels of the data packets Expired - Fee Related US7177913B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US10/314,473 US7177913B2 (en) 2002-12-05 2002-12-05 Method, system, and program for adding operations identifying data packets to structures based on priority levels of the data packets

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US10/314,473 US7177913B2 (en) 2002-12-05 2002-12-05 Method, system, and program for adding operations identifying data packets to structures based on priority levels of the data packets

Publications (2)

Publication Number Publication Date
US20040111532A1 true US20040111532A1 (en) 2004-06-10
US7177913B2 US7177913B2 (en) 2007-02-13

Family

ID=32468475

Family Applications (1)

Application Number Title Priority Date Filing Date
US10/314,473 Expired - Fee Related US7177913B2 (en) 2002-12-05 2002-12-05 Method, system, and program for adding operations identifying data packets to structures based on priority levels of the data packets

Country Status (1)

Country Link
US (1) US7177913B2 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007012919A2 (en) * 2005-07-27 2007-02-01 Adaptec, Inc. Ripple queuing algorithm for a sas wide-port raid controller
US20070153797A1 (en) * 2005-12-30 2007-07-05 Patrick Connor Segmentation interleaving for data transmission requests
US20080288948A1 (en) * 2006-12-22 2008-11-20 Attarde Deepak R Systems and methods of data storage management, such as dynamic data stream allocation
US10996866B2 (en) 2015-01-23 2021-05-04 Commvault Systems, Inc. Scalable auxiliary copy processing in a data storage management system using media agent resources

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7533190B2 (en) * 2004-04-08 2009-05-12 Intel Corporation Network storage target boot and network connectivity through a common network device
US7949806B2 (en) * 2004-11-18 2011-05-24 International Business Machines Corporation Apparatus and method to provide an operation to an information storage device including protocol conversion and assigning priority levels to the operation
JP4516458B2 (en) * 2005-03-18 2010-08-04 株式会社日立製作所 Failover cluster system and failover method
US9031073B2 (en) * 2010-11-03 2015-05-12 Broadcom Corporation Data bridge
US9063938B2 (en) 2012-03-30 2015-06-23 Commvault Systems, Inc. Search filtered file system using secondary storage, including multi-dimensional indexing and searching of archived files
US9639297B2 (en) 2012-03-30 2017-05-02 Commvault Systems, Inc Shared network-available storage that permits concurrent data access
US9810345B2 (en) * 2013-12-19 2017-11-07 Dresser, Inc. Methods to improve online diagnostics of valve assemblies on a process line and implementation thereof
US10169121B2 (en) 2014-02-27 2019-01-01 Commvault Systems, Inc. Work flow management for an information management system
US10313243B2 (en) 2015-02-24 2019-06-04 Commvault Systems, Inc. Intelligent local management of data stream throttling in secondary-copy operations

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5335224A (en) * 1992-06-30 1994-08-02 At&T Bell Laboratories Service guarantees/congestion control in high speed networks
US5564062A (en) * 1995-03-31 1996-10-08 International Business Machines Corporation Resource arbitration system with resource checking and lockout avoidance
US5959993A (en) * 1996-09-13 1999-09-28 Lsi Logic Corporation Scheduler design for ATM switches, and its implementation in a distributed shared memory architecture
US6085215A (en) * 1993-03-26 2000-07-04 Cabletron Systems, Inc. Scheduling mechanism using predetermined limited execution time processing threads in a communication network
US6343155B1 (en) * 1998-07-24 2002-01-29 Picsurf, Inc. Memory saving wavelet-like image transform system and method for digital camera and other memory conservative applications
US6470016B1 (en) * 1999-02-09 2002-10-22 Nortel Networks Limited Servicing output queues dynamically according to bandwidth allocation in a frame environment
US6862265B1 (en) * 2000-04-13 2005-03-01 Advanced Micro Devices, Inc. Weighted fair queuing approximation in a network switch using weighted round robin and token bucket filter
US6944171B2 (en) * 2001-03-12 2005-09-13 Switchcore, Ab Scheduler method and device in a switch

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5335224A (en) * 1992-06-30 1994-08-02 At&T Bell Laboratories Service guarantees/congestion control in high speed networks
US6085215A (en) * 1993-03-26 2000-07-04 Cabletron Systems, Inc. Scheduling mechanism using predetermined limited execution time processing threads in a communication network
US5564062A (en) * 1995-03-31 1996-10-08 International Business Machines Corporation Resource arbitration system with resource checking and lockout avoidance
US5959993A (en) * 1996-09-13 1999-09-28 Lsi Logic Corporation Scheduler design for ATM switches, and its implementation in a distributed shared memory architecture
US6343155B1 (en) * 1998-07-24 2002-01-29 Picsurf, Inc. Memory saving wavelet-like image transform system and method for digital camera and other memory conservative applications
US6470016B1 (en) * 1999-02-09 2002-10-22 Nortel Networks Limited Servicing output queues dynamically according to bandwidth allocation in a frame environment
US6862265B1 (en) * 2000-04-13 2005-03-01 Advanced Micro Devices, Inc. Weighted fair queuing approximation in a network switch using weighted round robin and token bucket filter
US6944171B2 (en) * 2001-03-12 2005-09-13 Switchcore AB Scheduler method and device in a switch

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007012919A2 (en) * 2005-07-27 2007-02-01 Adaptec, Inc. Ripple queuing algorithm for a SAS wide-port RAID controller
US20070028062A1 (en) * 2005-07-27 2007-02-01 Adaptec, Inc. Ripple Queuing Algorithm for a SAS Wide-Port RAID Controller
WO2007012919A3 (en) * 2005-07-27 2007-04-05 Adaptec Inc Ripple queuing algorithm for a SAS wide-port RAID controller
US20070153797A1 (en) * 2005-12-30 2007-07-05 Patrick Connor Segmentation interleaving for data transmission requests
US8325600B2 (en) 2005-12-30 2012-12-04 Intel Corporation Segmentation interleaving for data transmission requests
US8873388B2 (en) 2005-12-30 2014-10-28 Intel Corporation Segmentation interleaving for data transmission requests
US20080288948A1 (en) * 2006-12-22 2008-11-20 Attarde Deepak R Systems and methods of data storage management, such as dynamic data stream allocation
US8468538B2 (en) * 2006-12-22 2013-06-18 Commvault Systems, Inc. Systems and methods of data storage management, such as dynamic data stream allocation
US10996866B2 (en) 2015-01-23 2021-05-04 Commvault Systems, Inc. Scalable auxiliary copy processing in a data storage management system using media agent resources
US11513696B2 (en) 2015-01-23 2022-11-29 Commvault Systems, Inc. Scalable auxiliary copy processing in a data storage management system using media agent resources

Also Published As

Publication number Publication date
US7177913B2 (en) 2007-02-13

Similar Documents

Publication Publication Date Title
US7496699B2 (en) DMA descriptor queue read and cache write pointer arrangement
US8583839B2 (en) Context processing for multiple active write commands in a media controller architecture
US7870268B2 (en) Method, system, and program for managing data transmission through a network
US7162550B2 (en) Method, system, and program for managing requests to an Input/Output device
US6425021B1 (en) System for transferring data packets of different context utilizing single interface and concurrently processing data packets of different contexts
JP7010598B2 (en) QoS-aware I/O management methods, management systems, and management devices for PCIe storage systems with reconfigurable multiports
US20060168359A1 (en) Method, system, and program for handling input/output commands
US6735662B1 (en) Method and apparatus for improving bus efficiency given an array of frames to transmit
US7177913B2 (en) Method, system, and program for adding operations identifying data packets to structures based on priority levels of the data packets
US7761529B2 (en) Method, system, and program for managing memory requests by devices
US7460531B2 (en) Method, system, and program for constructing a packet
US20060004904A1 (en) Method, system, and program for managing transmit throughput for a network controller
US7404040B2 (en) Packet data placement in a processor cache
US20050165938A1 (en) Method, system, and program for managing shared resources
US9137167B2 (en) Host ethernet adapter frame forwarding
US6820140B2 (en) Method, system, and program for returning data to read requests received over a bus
US20080126599A1 (en) iSCSI target apparatus that does not require creating a buffer in the user space and related method thereof
US20040111549A1 (en) Method, system, and program for improved interrupt processing
US20050141434A1 (en) Method, system, and program for managing buffers
US20040111537A1 (en) Method, system, and program for processing operations
US20050002389A1 (en) Method, system, and program for processing a packet to transmit on a network in a host system including a plurality of network adaptors
US20140068139A1 (en) Data transfer system and method
Dittia et al. DMA Mechanisms for High Performance Network Interfaces
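The granted patent listed above (US7177913B2) concerns adding operations that identify data packets to structures based on the packets' priority levels, with a designation selecting the structure. A minimal sketch of that idea, assuming per-priority queues and an illustrative threshold (the class, method names, and threshold are hypothetical, not taken from the patent's claims):

```python
from collections import deque

HIGH, LOW = "high", "low"

class PacketScheduler:
    def __init__(self):
        # one structure (queue) per priority designation
        self.structures = {HIGH: deque(), LOW: deque()}

    def add_operation(self, packet_id, priority_level):
        # map the packet's priority level to a designation, then append an
        # operation identifying the packet to the matching structure
        designation = HIGH if priority_level >= 4 else LOW
        self.structures[designation].append(("transmit", packet_id))

    def next_operation(self):
        # service the higher-priority structure first
        for designation in (HIGH, LOW):
            if self.structures[designation]:
                return self.structures[designation].popleft()
        return None

sched = PacketScheduler()
sched.add_operation("pkt-1", priority_level=2)
sched.add_operation("pkt-2", priority_level=7)
print(sched.next_operation())  # → ('transmit', 'pkt-2')
```

Several of the cited documents (e.g. the weighted round robin and token bucket scheduler patents) elaborate this same basic pattern with fairness and rate-limiting policies layered over the per-priority structures.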

Legal Events

Date Code Title Description
AS Assignment

Owner name: INTEL CORPORATION, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CONNOR, PATRICK L.;REEL/FRAME:013564/0327

Effective date: 20021202

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 4

FPAY Fee payment

Year of fee payment: 8

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20190213