US8510403B2 - Self clocking interrupt generation in a network interface card

Self clocking interrupt generation in a network interface card

Info

Publication number: US8510403B2
Application number: US12/827,366
Other versions: US20120005300A1 (en)
Authority: US (United States)
Prior art keywords: packets, period, interrupt, time, host
Inventor: Dharmadeep C. Muppalla
Current Assignee: Juniper Networks Inc
Original Assignee: Juniper Networks Inc
Legal status: Active (adjusted expiration)

Application filed by Juniper Networks Inc
Priority to US12/827,366
Assigned to Juniper Networks, Inc. (assignor: Muppalla, Dharmadeep C.)
Publication of US20120005300A1
Priority to US13/964,355 (US8732263B2)
Application granted
Publication of US8510403B2

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 13/00: Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 13/14: Handling requests for interconnection or transfer
    • G06F 13/20: Handling requests for interconnection or transfer for access to input/output bus
    • G06F 13/24: Handling requests for interconnection or transfer for access to input/output bus using interrupt


Abstract

A network interface card may issue interrupts to a host, where the determination of when to issue an interrupt may be based on the incoming packet rate. In one implementation, an interrupt controller of the network interface card may issue interrupts that inform a host of the arrival of packets. The interrupt controller may issue the interrupts in response to arrival of a predetermined number of packets, where the interrupt controller re-calculates the predetermined number based on an arrival rate of the incoming packets.

Description

BACKGROUND
Computing devices frequently receive and transmit data over a network. Personal computing devices, such as personal computers and laptops, may act as endpoints for data in the network. Other devices, such as routers, firewalls, and other network devices, may send and receive data to enable the network.
Data units, such as packets, may be transmitted between computing devices in the network. Generally, a network interface card (NIC) may include a hardware device that handles an interface to the network. The NIC allows the computing device to access the network. NICs may process data at the physical layer and the data link layer. An Ethernet NIC, for instance, may include logic that allows the NIC to communicate with a physical layer and data link layer standard for Ethernet. Although a NIC is called a “card”, a NIC can include logic that is, for example, embedded within a main computing board of a computing device, and thus does not necessarily need to be implemented on a separate physical card.
NICs may use a number of different techniques to transfer data to a host device. One such technique includes polling-based data transfer, in which the host device (e.g., a software device), at time intervals determined by the host device, examines the status of the NIC to determine if data units are available at the NIC. Another possible technique includes an interrupt-driven technique, in which the NIC alerts the host device when a data unit is ready to be transmitted to the host device. Polling-based data transfer techniques can be particularly effective for high bandwidth applications, as the host device may only poll the NIC when it is ready to process data. Interrupt driven techniques, however, can provide lower latency and/or lower host overhead for the delivery of data.
In some existing NIC/host device interfaces, the host may, through operation of a software driver, switch between polling and interrupt modes. The NIC may be initially placed in interrupt mode but may be placed in polling mode, by the host device, when the host device detects a high interrupt arrival rate. Such a system can require relatively high software overhead at the host device.
SUMMARY
One implementation is directed to a device that may include one or more ports to connect to physical transport media for a network and a memory to store packets received from the network at the ports. The device may further include an interrupt controller to issue an interrupt that informs a host of the arrival of the packets, the interrupt controller issuing the interrupt in response to arrival of a predetermined number of packets at the device. The interrupt controller may re-calculate the predetermined number based on an arrival rate of the incoming packets.
Another possible implementation is directed to a method that may include receiving packets from a communication medium; determining a quantity of the received packets during a time period; and updating a value at the end of the time period, the value defining a number of packets that are to be received before issuing an interrupt to a host to inform the host of the arrival of the packets. The method may further include issuing the interrupt to the host in response to reception of the number of packets defined by the value, and providing the packets to the host.
Yet another possible implementation is directed to a host computing system that is connected to a network; and a network interface card, connected to the host computing system, to provide a physical layer and a data link layer connection to the network. The network interface card may include a memory to store packets received from the network; and an interrupt controller to issue an interrupt that informs the host computing system of the arrival of the packets. The interrupt controller may issue the interrupt in response to arrival of a predetermined number of packets at the network interface card, where the interrupt controller re-calculates the predetermined number based on an arrival rate of the incoming packets.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate one or more embodiments described herein and, together with the description, explain the invention. In the drawings,
FIG. 1 is a diagram of an example of a system in which concepts described herein may be implemented;
FIG. 2 is a diagram illustrating an example of an implementation of a device illustrated in FIG. 1;
FIG. 3 is a diagram illustrating an example of an implementation of a network interface card depicted in FIG. 2;
FIG. 4 is a block diagram conceptually illustrating components of a network interface card that may be used in issuing interrupts to a host;
FIG. 5 is a flow chart illustrating an example of a process for updating a packets per interrupt value; and
FIG. 6 is a flow chart illustrating an example of a process for issuing interrupts.
DETAILED DESCRIPTION
The following detailed description refers to the accompanying drawings. The same reference numbers in different drawings may identify the same or similar elements. Also, the following detailed description does not limit the invention.
A technique for self clocking of interrupts issued by a NIC, to notify a host of incoming packets, is described herein. The NIC may change the rate at which interrupts are issued based on an incoming packet rate. The host device may set parameters in the NIC that control how the NIC changes the rate at which interrupts are issued. At high incoming packet rates, the technique described herein may act similar to a polling-based reading of packets. At lower incoming packet rates, the technique may act similar to an interrupt-based reading of packets.
System Overview
FIG. 1 is a diagram of an example of a system 100 in which concepts described herein may be implemented. System 100 may include a number of physical or logical networks. As particularly shown, system 100 may include a network 110 connected to one or more additional networks, such as a local area network (LAN) 120. LAN 120 may include one or more devices that are logically organized into a LAN. In one example implementation, network 110 and LAN 120 may include network devices (NDs) 130, such as switches, gateways, routers, or other devices used to implement network 110/LAN 120. Network 110 and LAN 120 may also include end-user computing devices (CDs) 140.
Network 110 may generally include one or more types of networks. For instance, network 110 may include a wide area network (WAN), such as a cellular network, a satellite network, the Internet, or a combination of these networks that are used to transport data. Network 110 may particularly be an Internet protocol (IP)-based packet network that includes a number of network devices 130, such as routers, that transmit packets through network 110.
LAN 120 may include a number of computing devices, such as, for example, network devices 130 and end-user computing devices 140. LAN 120 may implement, for example, a proprietary network, such as a corporate network, that may be connected to network 110 through a gateway.
Computing devices 140 may include, for example, general-purpose computing devices such as personal computers, laptops (or other portable computing devices), servers, or smartphones. Computing devices 140 may generally be used by end-users or may be used to provide services to other computing devices in system 100.
FIG. 1 shows an example of components that may be included in system 100. In other implementations, system 100 may include fewer, different, differently arranged, or additional components than depicted in FIG. 1. Alternatively, or additionally, one or more components of system 100 may perform one or more tasks described as being performed by one or more other components of system 100.
FIG. 2 is a diagram illustrating an example of an implementation of a device 200, such as one of network devices 130 or end-user computing devices 140. As shown, device 200 may include a control unit 210, a memory 220, a storage device 230, input/output devices 240, and a NIC 250.
Control unit 210 may include a processor, microprocessor, or another type of processing logic that interprets and executes instructions. Among other functions, control unit 210 may implement a driver program that is used to communicate with NIC 250.
Memory 220 may include a dynamic or static storage device that may store information and instructions for execution by control unit 210. For example, memory 220 may include a storage component, such as a random access memory (RAM), a dynamic random access memory (DRAM), a static random access memory (SRAM), a synchronous dynamic random access memory (SDRAM), a ferroelectric random access memory (FRAM), a read only memory (ROM), a programmable read only memory (PROM), an erasable programmable read only memory (EPROM), an electrically erasable programmable read only memory (EEPROM), and/or a flash memory. Storage device 230 may include a magnetic and/or optical recording medium and its corresponding drive.
Input/output devices 240 may include mechanisms that permit an operator to input information to or receive information from device 200. Input/output devices 240 may include, for example, a keyboard, a mouse, a pen, a microphone, voice recognition and/or biometric mechanisms, etc.
NIC 250 may include one or more network interface cards that implement an interface, such as an interface for the physical and data link layer, for communicating with other devices in system 100. Through NIC 250, device 200 may send and receive data units, such as packets, over networks 110 and 120. In some implementations, NIC 250 may be implemented as a separate card that can be inserted into and removed from device 200. In other implementations, NIC 250 may be implemented in circuitry that is integrated within or on the same printed circuit board as other elements of device 200.
As will be described in detail below, device 200 may perform certain operations relating to NIC 250 and to the interface between control unit 210/memory 220 and NIC 250. Device 200 may perform these operations in response to control unit 210 executing software instructions contained in a computer-readable medium, such as memory 220. A computer-readable medium may be defined as a physical or logical memory device. A logical memory device may refer to memory space within a single, physical memory device or spread across multiple, physical memory devices.
The software instructions may be read into memory 220 from another computer-readable medium or from another device. The software instructions contained in memory 220 may cause control unit 210 to perform processes that will be described later. Alternatively, hardwired circuitry may be used in place of or in combination with software instructions to implement processes described herein. Thus, implementations described herein are not limited to any specific combination of hardware circuitry and software.
Although FIG. 2 illustrates example components of device 200, in other implementations, device 200 may include fewer, additional, different and/or differently arranged components than those depicted in FIG. 2. Alternatively, or additionally, one or more components of device 200 may perform one or more other tasks described as being performed by one or more other components of device 200.
Network Interface Card
FIG. 3 is a diagram illustrating an example of an implementation of NIC 250. NIC 250 may include one or more (three are particularly illustrated) Ethernet ports 310. Each port 310 may be designed to connect to a physical transport medium for the network. Each port 310 may also be associated with physical-layer transceiver (PHY) logic 315 and media access controller (MAC) logic 320. NIC 250 may additionally include control logic 330, memory (RAM) 340, and host interface logic 350.
Ethernet ports 310 may each include a mechanical slot designed to receive a network cable, such as standard category 5, 5e, or 6 twisted-pair cables. PHY logic 315 may generally operate to encode and decode data that is transmitted and received over ports 310. MAC logic 320 may act as an interface between the physical layer, as output from PHY logic 315, and control logic 330. MAC logic 320 may provide addressing and channel access control mechanisms that make it possible for several terminals or network nodes to communicate.
Control logic 330 may include logic that controls the writing/reading of incoming data to RAM 340 and logic relating to the implementation of host interface logic 350 for communicating with a host (i.e., control unit 210 and/or memory 220 of device 200). As described in more detail below, control logic 330 may, for example, issue interrupts to the host to signal the arrival of packets from Ethernet ports 310. The rate at which interrupts are issued (i.e., the number of packets per interrupt) to signal the host may be based on parameters set by the host, based on incoming packet bandwidth, and based on a previous packets per interrupt value.
Control logic 330 may be implemented using, for example, a general-purpose microprocessor or based on other types of control logic, such as an application specific integrated circuit (ASIC) or field programmable gate array (FPGA).
RAM 340 may include memory, such as high speed random access memory, that may be used to buffer incoming and/or outgoing packets. In one implementation, incoming packets may be stored in RAM 340 and the host may read the packets from RAM 340 using a direct memory access (DMA) technique in which the host directly reads the packets from RAM 340.
Host interface logic 350 may include an interface through which the host communicates with NIC 250. For example, host interface logic 350 may implement a peripheral component interconnect (PCI) bus, PCI express (PCI-E), or other bus architecture for communicating with the host.
Although FIG. 3 illustrates example components of NIC 250, in other implementations, NIC 250 may include fewer, additional, different and/or differently arranged components than those depicted in FIG. 3. Alternatively, or additionally, one or more components of NIC 250 may perform one or more other tasks described as being performed by one or more other components of NIC 250.
Self Clocking Interrupt Operation
FIG. 4 is a block diagram conceptually illustrating components of NIC 250 that may be used in issuing interrupts to the host. In FIG. 4, the host portion of device 200 is labeled as host 410. Host 410 may correspond to the portions of device 200 other than NIC 250. In one implementation, host 410 may be a software driver that is implemented by control unit 210 and/or memory 220. The driver may be designed to communicate with NIC 250.
As shown in FIG. 4, NIC 250 may include a direct memory access (DMA) component 415, an interrupt controller component 420, and configuration registers 430. DMA component 415 may include memory, such as static random access memory (SRAM), into which incoming packets are stored. DMA component 415 may be implemented by, for example, RAM 340. Host 410 may directly read packets from DMA component 415. The packets may be read from DMA component 415 in response to an interrupt sent from interrupt controller component 420 to host 410.
Interrupt controller component 420 may send interrupts to host 410 at points in time determined by interrupt controller component 420. In one implementation, and as will be described in more detail below, interrupt controller component 420 may send an interrupt to host 410 after a certain number of packets are received. The number of packets to receive before sending the interrupt may vary based on the incoming packet rate and based on parameters set by host 410 in configuration registers 430.
Interrupt controller 420 may include a packet counter 422 that counts the number of received packets. Interrupt controller 420 may issue interrupts after a certain number of packets are received. Packet counter 422 may be used to determine when an allotted number of packets have been received.
Interrupt controller 420 may calculate or keep track of a number of values used to determine when to send an interrupt to host 410. Two of the values are illustrated in FIG. 4: N(t), the number of interrupts delivered in a particular interval, called an epoch, t; and Z(t), the number of packets per interrupt for epoch t.
Configuration registers 430 may include one or more registers through which host 410 can set parameters controlling the rate at which interrupts are sent to host 410 by interrupt controller 420. Configuration registers 430 may be implemented as memory registers that are writable by host 410. In alternative implementations, host 410 may set the parameters defined by configuration registers 430 using other techniques, such as by communicating with logic in NIC 250 using a higher level communication protocol.
In one implementation, a separate set of configuration registers 430 may be maintained for every class of service supported by NIC 250. NIC 250 may support different classes of service, in which packets belonging to a higher class of service may be given higher priority by NIC 250 and/or host 410. NIC 250 may process each class of service using a separate queue to store incoming packets. When a separate set of configuration registers 430 is maintained for different classes of service, host 410 may configure configuration registers on a per-class-of-service basis. In this case, NIC 250 may deliver interrupts to host 410 on a per-class-of-service basis, in which NIC 250 may send an interrupt to host 410 whenever any of the queues corresponding to the classes is determined to meet the conditions for receiving an interrupt.
Configuration registers 430 may include a first register 432 to store a value indicating a target number of interrupts per second. Host 410 may set the target number of interrupts per second based on the capacity of host 410 to handle interrupts from NIC 250. In some situations, host 410 may adjust the target number of interrupts per second based on load at host 410 or based on other factors. Configuration registers 430 may further include a second register 434 to store a value indicating an epoch interval that is to be used by NIC 250. The epoch interval may be the interval at which NIC 250 processes incoming packets to generate interrupts before NIC 250 recalculates Z(t) (i.e., the number of packets to receive before generating an interrupt in interval t). In other words, after each epoch, NIC 250 may recalculate the number of packets to receive before generating an interrupt. Host 410 may, for example, set the epoch interval to an interval in which the standard deviation of the traffic pattern is negligible (e.g., 10 milliseconds). Configuration registers 430 may further include a third register 436 to store a damping factor. The damping factor, α, may describe how quickly NIC 250 changes the current value of Z(t) in response to a change in the incoming packet rate. The damping factor will be described in more detail below.
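To make the parameters concrete, the following is a minimal C sketch of configuration registers 430 as a structure. The field names, widths, and the fixed-point encoding of the damping factor are assumptions for illustration; the patent specifies only the values the registers hold, not a layout.

```c
#include <stdint.h>

/* Hypothetical layout of configuration registers 430. The Q16 fixed-point
 * encoding of the damping factor is an assumption; a hardware register
 * cannot hold a floating-point alpha directly. */
struct nic_config_regs {
    uint32_t target_interrupts_per_sec; /* first register 432: x */
    uint32_t epoch_interval_ms;         /* second register 434: T, e.g., 10 ms */
    uint32_t damping_factor_q16;        /* third register 436: alpha scaled by 2^16 */
};
```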
Z(t), as previously mentioned, may define the number of packets to receive before NIC 250 issues an interrupt. Interrupt controller 420 may re-calculate the value of Z(t) for each epoch t. Z(t) may generally be adjusted based on the incoming packet rate pattern. For instance, when the incoming packet rate increases during epoch t, Z(t+1) (packets per interrupt in the next epoch) may be adjusted higher. For relatively high incoming packet rates, interrupts issued by interrupt controller 420 may cause host 410 to read a number of packets from DMA component 415 at semi-periodic intervals. In this situation, host 410 may effectively operate as if it were polling NIC 250. When the incoming packet rate decreases, however, Z(t+1) may be adjusted lower. In the limiting situation, Z(t) may be set to one, which may effectively operate as a per-packet interrupt scheme. From the perspective of host 410, the interrupt generation technique of NIC 250 can allow host 410 to effectively handle increases or decreases in incoming packet rates without increasing the processing demands placed on host 410.
One possible technique for adjusting Z(t), at each epoch t, based on the incoming packet rate will now be described.
Let N(t) be the number of interrupts delivered in epoch t. Z(t), as previously mentioned, may refer to the calculated value, for epoch t, that represents the number of packets that are to be received before issuing an interrupt. Further, let x represent the value for the target number of interrupts per second (i.e., the value from first register 432) and T represent the epoch interval (i.e., the value from second register 434). The total number of interrupts that can be handled by host 410 per epoch may thus be calculated as xT (i.e., the host's interrupt bandwidth per epoch). The value for Z(t) in the next epoch, Z(t+1), may be calculated using an exponential smoothing function of the form:
Z(t+1) = ceil[α · ceil(Z(t) · N(t) / (xT)) + (1 - α) · Z(t)]   (1)
In equation (1), α is the damping factor (i.e., the value from third register 436) and ceil is the ceiling function. The damping factor, α, may be set between zero and 1.0. Higher values of α more heavily weight the packet load in the previous epoch when calculating Z(t+1) and lower values of α more heavily weight the previous output of equation (1) (i.e., Z(t)) when calculating Z(t+1).
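As a worked sketch of equation (1), the C function below recomputes the packets-per-interrupt value. It takes the epoch's packet count (on the NIC this may be the estimate Z(t) · N(t)) and the interrupt budget xT, and uses floating-point arithmetic for readability; a hardware implementation would more plausibly use fixed-point. The function and parameter names are illustrative.

```c
#include <math.h>

/* Equation (1): Z(t+1) = ceil[alpha * ceil(packets / (x*T)) + (1 - alpha) * Z(t)],
 * where packets is the packet count for epoch t (e.g., the estimate Z(t) * N(t))
 * and budget_xT is the host's interrupt bandwidth per epoch. */
static long update_packets_per_interrupt(long z_t, long packets_in_epoch,
                                         double alpha, double budget_xT)
{
    double load = ceil((double)packets_in_epoch / budget_xT);
    long z_next = (long)ceil(alpha * load + (1.0 - alpha) * (double)z_t);
    return z_next < 1 ? 1 : z_next; /* Z(t) bottoms out at one packet per interrupt */
}
```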
FIG. 5 is a flow chart illustrating an example of a process 500 for updating Z(t) at each epoch. In one implementation, process 500 may be performed by interrupt controller 420 of NIC 250.
Interrupt controller 420 may keep track of the number of packets received in the current epoch (block 510). In one implementation, the number of packets received in the current epoch may be estimated by multiplying the number of interrupts sent in the epoch by Z(t). In an alternative implementation, interrupt controller 420 may directly keep track of the total number of packets received, such as through the use of a counter to count the number of incoming packets.
Process 500 may further include determining whether the epoch has ended (block 520). Z(t) may be updated after each epoch.
When the epoch has ended (block 520—YES), Z(t) may be updated (i.e., Z(t+1) calculated) based on Z(t), the total number of packets received in the previous epoch, and based on the host's interrupt bandwidth. Z(t) may be updated using equation (1), in which Z(t)*N(t) represents the total number of packets received in the previous epoch and xT represents the host's interrupt bandwidth. The updated value for Z(t), Z(t+1), may then be used to issue interrupts in the next interval.
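Continuing the sketch above, the epoch-end step of process 500 might look as follows, using the block 510 estimate of the epoch's packet count (interrupts sent multiplied by Z(t)). The controller structure and its field names are assumptions for illustration.

```c
/* Illustrative state for interrupt controller 420. */
struct interrupt_ctrl {
    long z;          /* Z(t): packets per interrupt */
    long interrupts; /* N(t): interrupts delivered in the current epoch */
    long pkt_count;  /* packet counter 422 */
    double alpha;    /* damping factor (third register 436) */
    double budget;   /* x*T: host interrupt bandwidth per epoch */
};

/* Process 500: when block 520 determines the epoch has ended, recompute
 * Z(t+1) from the block 510 estimate of packets received in the epoch. */
static void on_epoch_end(struct interrupt_ctrl *ic)
{
    long est_packets = ic->z * ic->interrupts; /* block 510: Z(t) * N(t) */
    ic->z = update_packets_per_interrupt(ic->z, est_packets,
                                         ic->alpha, ic->budget);
    ic->interrupts = 0; /* begin counting N(t+1) in the new epoch */
}
```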
FIG. 6 is a flow chart illustrating an example of a process 600 for issuing interrupts. Process 600 may be implemented by, for example, interrupt controller 420.
Process 600 may include incrementing packet counter 422 based on the number of incoming packets (block 610). Packet counter 422 may generally keep track of the number of incoming packets. Packet counter 422 may be incremented each time a packet arrives or is stored in RAM 340. Other methods of keeping track of the incoming packet rate may alternatively be used.
Process 600 may further include determining whether the number of received packets is equal to or greater than Z(t), the number of packets per interrupt (block 620). When the number of received packets is equal to or greater than Z(t) (block 620—YES), interrupt controller 420 may transmit an interrupt to host 410 (block 630). The interrupt may cause host 410 to read the packets from DMA component 415. In one possible implementation, host 410 may first read a value from NIC 250, such as a value in a specific register or memory location of DMA component 415, which indicates the location and/or number of packets that are to be read from DMA component 415. Host 410 may then read the indicated number of packets from DMA component 415.
Process 600 may further include clearing packet counter 422 (block 640). Clearing packet counter 422 may reset the count to start the count for the next interrupt.
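The per-packet path of process 600 can be sketched the same way, continuing with the state above. Here raise_host_interrupt() is a stand-in for whatever doorbell or bus signaling host interface logic 350 actually performs; it is not an API from the patent.

```c
/* Stand-in for platform-specific interrupt delivery to host 410,
 * e.g., an MSI/MSI-X message over the PCI-E interface. */
static void raise_host_interrupt(void) { /* platform specific */ }

/* Process 600: count the arriving packet (block 610), compare the count
 * against Z(t) (block 620), interrupt the host (block 630), and clear
 * packet counter 422 (block 640). */
static void on_packet_arrival(struct interrupt_ctrl *ic)
{
    ic->pkt_count++;              /* block 610 */
    if (ic->pkt_count >= ic->z) { /* block 620 */
        raise_host_interrupt();   /* block 630 */
        ic->interrupts++;         /* counts toward N(t) */
        ic->pkt_count = 0;        /* block 640 */
    }
}
```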
An example of how Z(t) may be dynamically re-calculated over a number of epochs will now be described with reference to Table I, below. Table I lists example values for Z(t) (column two) over 8 successive epochs t (column one). The third column lists example values for the number of packets received during each epoch t. In the example shown in Table I, assume that α is 0.6 and xT is equal to 5 (i.e., the host's desired interrupt bandwidth is equal to 5 interrupts per epoch).
As shown in Table I, assume that the initial value of Z(t) is 200 packets per interrupt, which corresponds to a total estimated packet bandwidth of 1000 packets per epoch. In epoch zero, however, assume 2000 packets are actually received. In epoch one, Z(t) is updated to 320 packets per interrupt. In epoch one, 2500 packets are received, and Z(t) adjusts, in epoch two, to 428 packets per interrupt. As shown, in epochs two through six, the number of received packets decreases and holds at zero packets for a number of epochs, causing Z(t) to adjust down. If zero packets continue to be received per epoch, Z(t) would eventually reach a minimum value of one.
TABLE I
t    Z(t)    Z(t) * N(t) (packets received)
0    200     2000
1    320     2500
2    428     500
3    232     400
4    141     0
5    57      0
6    23      0
7    10
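As a check on the arithmetic, the short driver below feeds the packet counts from column three of Table I through the update_packets_per_interrupt() sketch above with α = 0.6 and xT = 5. It reproduces the Z(t) column, ending at 10 packets per interrupt in epoch seven.

```c
#include <stdio.h>

int main(void)
{
    const double alpha = 0.6, budget = 5.0; /* xT = 5 interrupts per epoch */
    const long packets[] = {2000, 2500, 500, 400, 0, 0, 0};
    long z = 200; /* initial Z(t), as in Table I */

    for (int t = 0; t < 7; t++) {
        printf("epoch %d: Z(t) = %ld, packets = %ld\n", t, z, packets[t]);
        z = update_packets_per_interrupt(z, packets[t], alpha, budget);
    }
    printf("epoch 7: Z(t) = %ld\n", z); /* prints 10, matching Table I */
    return 0;
}
```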
CONCLUSION
A self clocking technique for generating interrupts is described in which interrupts are issued to inform a host of arriving packets after a certain number of packets have arrived. The number of packets per interrupt may vary based on the incoming packet rate to thus create a self clocking mechanism for issuing the interrupts. In one implementation, the technique may be implemented in a network interface card, thus removing from the host the burden of monitoring and adjusting between polling and interrupt driven packet reception.
It will also be apparent that aspects described herein may be implemented in many different forms of software, firmware, and hardware in the implementations illustrated in the figures. The actual software code or specialized control hardware used to implement aspects described herein is not intended to limit the scope of the invention. Thus, the operation and behavior of the aspects were described without reference to the specific software code—it being understood that software and control hardware can be designed to implement the aspects based on the description herein.
While series of blocks have been described with regard to FIGS. 5 and 6, the order of the blocks may vary in other implementations. Also, non-dependent blocks may be performed in parallel. Even though particular combinations of features are recited in the claims and/or disclosed in the specification, these combinations are not intended to limit the invention. In fact, many of these features may be combined in ways not specifically recited in the claims and/or disclosed in the specification.
Further, certain aspects described herein may be implemented as “logic” or as a “component” that performs one or more functions. This logic or component may include hardware, such as an application specific integrated circuit or a field programmable gate array, or a combination of hardware and software.
No element, act, or instruction used in the description of the present application should be construed as critical or essential to the invention unless explicitly described as such. Also, as used herein, the article “a” is intended to include one or more items. Where only one item is intended, the term “one” or similar language is used. Further, the phrase “based on” is intended to mean “based, at least in part, on” unless explicitly stated otherwise. The scope of the invention is defined by the claims and their equivalents.

Claims (23)

What is claimed:
1. A device comprising:
one or more ports to connect to physical transport media for a network;
a memory to store packets received from the network at the one or more ports; and
an interrupt controller to:
store an initial number used for determining an interrupt;
determine a first quantity of packets received during a first period of time;
determine, upon an expiration of the first period of time, a number associated with a second quantity of packets to receive before generating the interrupt, the interrupt controller, when determining the number, being to:
apply a function to a value to produce a result,
the value being based on the initial number, a number of interrupts generated during another period of time, and a threshold number of interrupts,
 the other period of time occurring prior to the first period of time, and
use the produced result to determine the number;
determine a third quantity of packets received during a second period of time;
update the number to a second number based on the determined third quantity of packets;
determine a fourth quantity of packets received during a third period of time;
determine a relationship between the fourth quantity of packets and the second number;
update the second number based on the relationship; and
issue the interrupt based on the updated second number.
2. The device of claim 1, where the device is a network interface card.
3. The device of claim 1, where the memory is accessible using a direct memory access technique to read the packets from the memory.
4. The device of claim 1, where the interrupt controller is further to:
periodically update the second number; and
issue the interrupt based on the periodically updated second number.
5. The device of claim 1, further comprising:
configuration registers to store parameters that relate to updating the second number, where
the interrupt controller is further to:
update the number to the second number based on the stored parameters.
6. The device of claim 5, where the parameters include:
an epoch value that defines the first period of time.
7. The device of claim 6, where the interrupt controller updates the second number using an exponential smoothing function.
8. The device of claim 5, where the parameters include:
a third number defining a desired rate for issuing the interrupt; and
a fourth number defining a damping factor used for updating the second number.
9. The device of claim 1, where the second number is decreased when the fourth quantity of packets is less than the third quantity of packets.
10. The device of claim 1, where the second number is increased when the fourth quantity of packets is greater than the third quantity of packets.
11. A method comprising:
storing, by a device, an initial number used for determining an interrupt;
receiving, by the device, a first quantity of packets during a first period of time;
determining, by the device and upon an expiration of the first period of time, a number associated with a second quantity of packets to receive before generating the interrupt, the number being determined based on:
applying, by the device, a function to a value to produce a result,
the value being based on the initial number, a number of interrupts generated during another period of time, and a threshold number of interrupts,
the other period of time occurring prior to the first period of time, and
using the produced result to determine the number;
determining, by the device, a third quantity of packets received during a second period of time;
updating, by the device and based on the determined third quantity of packets, the number to a second number at the end of the second period of time;
determining, by the device, a fourth quantity of packets received during a third period of time;
determining, by the device, a relationship between the fourth quantity of packets and the second number;
updating, by the device, the second number based on the relationship; and
issuing, by the device, the interrupt to a host based on the updated second number.
12. The method of claim 11, where the second number is decreased when the fourth quantity of packets is less than the third quantity of packets and the second number is increased when the fourth quantity of packets is greater than the third quantity of packets.
13. The method of claim 11, where updating the second number includes:
updating the second number using an exponential smoothing function.
14. The method of claim 11, where updating the second number includes:
periodically updating the second number.
15. The method of claim 13, where the exponential smoothing function is based on a damping factor.
16. A device comprising:
a processor to:
store an initial number used for determining an interrupt;
determine a first quantity of packets received during a first period of time;
determine, upon an expiration of the first period of time, a number associated with a second quantity of packets to receive before generating the interrupt, the processor, when determining the number, being to:
apply a function to a value to produce a result,
the value being based on the initial number, a number of interrupts generated during another period of time, and a threshold number of interrupts,
 the other period of time occurring prior to the first period of time, and
use the produced result to determine the number;
determine a third quantity of packets received during a second period of time;
update the number to a second number based on the determined third quantity of packets;
determine a fourth quantity of packets received during a third period of time;
determine a relationship between the fourth quantity of packets and the second number;
update the second number based on the relationship; and
issue the interrupt based on the updated second number.
17. The device of claim 16, where the second number is decreased when the fourth quantity of packets is less than the third quantity of packets.
18. The device of claim 16, where the processor is further to:
periodically update the second number; and
issue the interrupt based on the periodically updated second number.
19. The device of claim 16, where the processor is further to:
store parameters that relate to updating the second number, and
update the number to the second number based on the stored parameters.
20. The device of claim 19, where the parameters include:
an epoch value that defines the first period of time.
21. The device of claim 19, where the stored parameters include:
a third number defining a desired rate for issuing the interrupt, and
a fourth number defining a damping factor used for updating the second number.
22. The device of claim 16, where the function is an exponential smoothing function.
23. The device of claim 16, where the second number is increased when the fourth quantity of packets is greater than the third quantity of packets.
US12/827,366 2010-06-30 2010-06-30 Self clocking interrupt generation in a network interface card Active 2031-05-19 US8510403B2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US12/827,366 US8510403B2 (en) 2010-06-30 2010-06-30 Self clocking interrupt generation in a network interface card
US13/964,355 US8732263B2 (en) 2010-06-30 2013-08-12 Self clocking interrupt generation in a network interface card

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US12/827,366 US8510403B2 (en) 2010-06-30 2010-06-30 Self clocking interrupt generation in a network interface card

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US13/964,355 Continuation US8732263B2 (en) 2010-06-30 2013-08-12 Self clocking interrupt generation in a network interface card

Publications (2)

Publication Number Publication Date
US20120005300A1 US20120005300A1 (en) 2012-01-05
US8510403B2 true US8510403B2 (en) 2013-08-13

Family

ID=45400555

Family Applications (2)

Application Number Title Priority Date Filing Date
US12/827,366 Active 2031-05-19 US8510403B2 (en) 2010-06-30 2010-06-30 Self clocking interrupt generation in a network interface card
US13/964,355 Active US8732263B2 (en) 2010-06-30 2013-08-12 Self clocking interrupt generation in a network interface card

Family Applications After (1)

Application Number Title Priority Date Filing Date
US13/964,355 Active US8732263B2 (en) 2010-06-30 2013-08-12 Self clocking interrupt generation in a network interface card

Country Status (1)

Country Link
US (2) US8510403B2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8732263B2 (en) * 2010-06-30 2014-05-20 Juniper Networks, Inc. Self clocking interrupt generation in a network interface card

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102013202887A1 (en) * 2013-02-22 2014-08-28 Zf Friedrichshafen Ag Multi-speed transmission in planetary construction
CN105991471B * 2015-02-16 2019-08-09 新华三技术有限公司 Flow control method and flow control apparatus of a network device, and network device
KR102450972B1 (en) * 2015-12-07 2022-10-05 삼성전자주식회사 Device and method for transmitting a packet to application
US10185675B1 (en) * 2016-12-19 2019-01-22 Amazon Technologies, Inc. Device with multiple interrupt reporting modes
KR102529761B1 (en) 2021-03-18 2023-05-09 에스케이하이닉스 주식회사 PCIe DEVICE AND OPERATING METHOD THEREOF
KR102496994B1 (en) 2021-03-23 2023-02-09 에스케이하이닉스 주식회사 Peripheral component interconnect express interface device and operating method thereof
KR102521902B1 (en) * 2021-03-23 2023-04-17 에스케이하이닉스 주식회사 Peripheral component interconnect express interface device and operating method thereof
CN115632948B (en) * 2022-12-19 2023-03-07 无锡沐创集成电路设计有限公司 Interrupt regulation and control method and device applied to network card, storage medium and electronic equipment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100924693B1 (en) * 2002-09-04 2009-11-03 삼성전자주식회사 Network Interface Card for reducing the number of interrupt and method thereof
US7917677B2 (en) * 2008-09-15 2011-03-29 International Business Machines Corporation Smart profiler
US8205020B2 (en) * 2009-12-23 2012-06-19 Xerox Corporation High-performance digital image memory allocation and control system
US8510403B2 (en) * 2010-06-30 2013-08-13 Juniper Networks, Inc. Self clocking interrupt generation in a network interface card

Patent Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6434651B1 (en) * 1999-03-01 2002-08-13 Sun Microsystems, Inc. Method and apparatus for suppressing interrupts in a high-speed network environment
US6453360B1 (en) * 1999-03-01 2002-09-17 Sun Microsystems, Inc. High performance network interface
US6467008B1 (en) * 1999-03-01 2002-10-15 Sun Microsystems, Inc. Method and apparatus for indicating an interrupt in a network interface
US20020087716A1 (en) * 2000-07-25 2002-07-04 Shakeel Mustafa System and method for transmitting customized multi priority services on a single or multiple links over data link layer frames
US20020188749A1 (en) * 2001-06-06 2002-12-12 Gaur Daniel R. Receive performance of a network adapter by dynamically tuning its interrupt delay
US20040221080A1 (en) * 2001-09-27 2004-11-04 Connor Patrick L. Apparatus and method for packet incress interrupt moderation
US6981084B2 (en) * 2001-09-27 2005-12-27 Intel Corporation Apparatus and method for packet ingress interrupt moderation
US6889277B2 (en) * 2002-04-18 2005-05-03 Sun Microsystems, Inc. System and method for dynamically tuning interrupt coalescing parameters
US20030200368A1 (en) * 2002-04-18 2003-10-23 Musumeci Gian-Paolo D. System and method for dynamically tuning interrupt coalescing parameters
US20030200369A1 (en) * 2002-04-18 2003-10-23 Musumeci Gian-Paolo D. System and method for dynamically tuning interrupt coalescing parameters
US6988156B2 (en) * 2002-04-18 2006-01-17 Sun Microsystems, Inc. System and method for dynamically tuning interrupt coalescing parameters
US7813352B2 (en) * 2004-05-11 2010-10-12 Packeteer, Inc. Packet load shedding
US20060282579A1 (en) * 2005-05-19 2006-12-14 Rudolf Dederer Method for interface adaptation of a hardware baseband receiver in satellite communication systems, interface adapter for hardware baseband receiver, a corresponding computer program, and a corresponding computer-readable storage medium
US7444451B2 (en) * 2005-12-16 2008-10-28 Industrial Technology Research Institute Adaptive interrupts coalescing system with recognizing minimum delay packets
US20080140468A1 (en) * 2006-12-06 2008-06-12 International Business Machines Corporation Complex exponential smoothing for identifying patterns in business data
US20090268611A1 (en) * 2008-04-28 2009-10-29 Sun Microsystems, Inc. Method and system for bandwidth control on a network interface card
US8103809B1 (en) * 2009-01-16 2012-01-24 F5 Networks, Inc. Network devices with multiple direct memory access channels and methods thereof

Also Published As

Publication number Publication date
US20130332638A1 (en) 2013-12-12
US8732263B2 (en) 2014-05-20
US20120005300A1 (en) 2012-01-05

Similar Documents

Publication Publication Date Title
US8510403B2 (en) Self clocking interrupt generation in a network interface card
US11336581B2 (en) Automatic rate limiting based on explicit network congestion notification in smart network interface card
US11876701B2 (en) System and method for facilitating operation management in a network interface controller (NIC) for accelerators
EP3707882B1 (en) Multi-path rdma transmission
Lu et al. Multi-Path transport for RDMA in datacenters
US6170022B1 (en) Method and system for monitoring and controlling data flow in a network congestion state by changing each calculated pause time by a random amount
Bai et al. Information-Agnostic flow scheduling for commodity data centers
US10346326B2 (en) Adaptive interrupt moderation
US10333848B2 (en) Technologies for adaptive routing using throughput estimation
US8238239B2 (en) Packet flow control
US10380047B2 (en) Traffic-dependent adaptive interrupt moderation
CN108989235B (en) Message forwarding control method and device
US10873882B2 (en) System and method of a pause watchdog
WO2018004977A1 (en) Technologies for adaptive routing using aggregated congestion information
CN111526095A (en) Flow control method and device
US9210095B2 (en) Arbitration of multiple-thousands of flows for convergence enhanced ethernet
US20200252337A1 (en) Data transmission method, device, and computer storage medium
US10467161B2 (en) Dynamically-tuned interrupt moderation
US20170366460A1 (en) Rdma-over-ethernet storage system with congestion avoidance without ethernet flow control
WO2021002022A1 (en) Communication device, communication method, and program
US11108697B2 (en) Technologies for controlling jitter at network packet egress
US7646724B2 (en) Dynamic blocking in a shared host-network interface
CN110968403A (en) CPU work control method, device, equipment and storage medium
US11924106B2 (en) Method and system for granular dynamic quota-based congestion management
US9325640B2 (en) Wireless network device buffers

Legal Events

Date Code Title Description
AS Assignment

Owner name: JUNIPER NETWORKS, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MUPPALLA, DHARMADEEP C.;REEL/FRAME:024618/0042

Effective date: 20100630

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCF Information on status: patent grant

Free format text: PATENTED CASE

FPAY Fee payment

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8