WO2001080504A1 - Packet switch including usage monitor and scheduler - Google Patents


Info

Publication number
WO2001080504A1
Authority
WO
WIPO (PCT)
Prior art keywords
queue
customer
allotment
customers
network
Prior art date
Application number
PCT/US2001/004180
Other languages
French (fr)
Inventor
Harsh Kapoor
Paul Gallo
Douglas Walker
Brian Myrick
Original Assignee
Appian Communications, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Appian Communications, Inc. filed Critical Appian Communications, Inc.
Priority to AU2001236810A priority Critical patent/AU2001236810A1/en
Publication of WO2001080504A1 publication Critical patent/WO2001080504A1/en


Classifications

    • H ELECTRICITY > H04 ELECTRIC COMMUNICATION TECHNIQUE > H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/627 Queue scheduling characterised by scheduling criteria for service slots or service orders; policing
    • H04L 47/11 Flow control; Congestion control; identifying congestion
    • H04L 47/29 Flow control; Congestion control, using a combination of thresholds
    • H04L 47/50 Queue scheduling
    • H04L 47/522 Queue scheduling by attributing bandwidth to queues; dynamic queue service slot or variable bandwidth allocation
    • H04L 47/621 Queue scheduling characterised by scheduling criteria; individual queue per connection or flow, e.g. per VC
    • H04L 49/103 Packet switching elements characterised by the switching fabric construction, using a shared central buffer or shared memory
    • H04L 49/205 Packet switching elements; support for services; Quality of Service based
    • H04L 49/3018 Peripheral units, e.g. input or output ports; input queuing
    • H04L 49/3027 Peripheral units, e.g. input or output ports; output queuing
    • H04L 49/351 Switches specially adapted for specific applications, for local area network [LAN], e.g. Ethernet switches

Definitions

  • Each local customer 16a-d is guaranteed an allotment of network usage time.
  • This allotment, which translates into a guaranteed bit rate, is assigned by a service provider on the basis of how much network access the customer is willing to pay for and on how much network access the service provider can guarantee.
  • The amount of network access that the service provider can guarantee depends on the difference between the maximum bit rate of the wide area network trunk 18 and the extent to which that maximum bit rate has already been committed to other customers.
  • The sum of the guaranteed bit rates for all the customers, both local customers 16a-d and remote customers 26, 28, 30, is less than or equal to the bandwidth of the trunk 18 serving the wide area network.
  • Each customer 16a-d is also provided with a supplemental allotment of network access that translates into a maximum burst rate.
  • This maximum burst rate is between the guaranteed bit rate and the bandwidth, or carrying capacity, of the wide area network trunk 18.
  • The service provider assigns a maximum burst rate on the basis of how much network access the customer is willing to buy and the available bandwidth of the wide area network trunk 18.
  • Each packet switch 10, 20, 22, 24 shown in FIG. 1 guarantees that each customer will obtain his guaranteed allotment of network usage. If a customer has already depleted his allotment, the packet switches 10, 20, 22, 24 determine if all the competing customers have had their guaranteed allotments satisfied. If this is the case, and if the network is idle, the packet switches 10, 20, 22, 24 grant each customer supplemental network access. The amount of supplemental network access granted to each customer depends on that customer's maximum burst rate, the maximum burst rates of all competing customers, and the unused capacity of the trunk 18.
  • Both the guaranteed allotment and the supplemental allotment can be changed through software, without the need to alter existing hardware.
  • This feature of the invention reduces the cost of altering service to any particular customer and also enhances the customer's flexibility. Because both allotments can easily be changed by software, a customer can experiment with different combinations in order to find a combination suitable for his needs.
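The two-tier allotment scheme described above can be sketched in a few lines. This is a hypothetical illustration rather than the patent's implementation; the function name, the fixed per-cycle bit budgets, and the single-pass residual split are all assumptions.

```python
# Sketch of the two-tier allotment scheme: guaranteed bit rate (GBR) first,
# then residual trunk capacity shared in proportion to maximum burst rate (MBR).
# All names are illustrative; the patent does not prescribe this API.

def allocate_cycle(trunk_bw, gbr, mbr, demand):
    """Return per-customer bit allocations for one cycle.

    trunk_bw -- total trunk bandwidth (bits per cycle)
    gbr      -- dict: customer -> guaranteed bits per cycle
    mbr      -- dict: customer -> maximum burst bits per cycle
    demand   -- dict: customer -> bits the customer offers this cycle
    """
    # Provisioning invariant from the text: the guarantees must fit the trunk.
    assert sum(gbr.values()) <= trunk_bw
    # Phase 1: satisfy each customer's guarantee, up to actual demand.
    alloc = {c: min(gbr[c], demand[c]) for c in gbr}
    residual = trunk_bw - sum(alloc.values())
    # Phase 2: share the residual in proportion to MBR, capped by each
    # customer's MBR and remaining demand (a single pass, for simplicity).
    total_mbr = sum(mbr.values())
    for c in gbr:
        share = residual * mbr[c] / total_mbr
        alloc[c] += min(share, mbr[c] - alloc[c], demand[c] - alloc[c])
    return alloc
```

A single pass may leave some residual unused when a customer's demand or MBR caps its share; the description's repeated polling until the cycle ends would reclaim that remainder.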
  • A typical packet switch 10, shown in greater detail in FIG. 2, includes a first packet classifier 32 in communication with the local customers 16a-d on a packet-switched network 34.
  • Each packet transmitted by a customer 16a includes a header that contains information identifying the customer 16a and information indicating the priority that the customer 16a has assigned to the packet.
  • An input queuing unit 36 maintains separate input queue-sets 38a-d for each customer 16a-d on the network 34.
  • Each input queue-set 38a-d includes four queues corresponding to four priority levels.
  • A system incorporating the invention can have any number of priority levels or only one priority level.
  • The first packet classifier 32 instructs an input DMA (direct memory access) module 40 to place the incoming packet into the input queue-set corresponding to that customer.
  • The first packet classifier 32 also instructs the input DMA module 40 to place the incoming packet in the particular queue from that customer's input queue-set that corresponds to the packet's priority.
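The per-customer, per-priority queue structure that the classifier and input DMA module maintain can be modeled as follows. The class name, the dict-shaped packet header, and the four-level constant are illustrative assumptions, not taken from the patent.

```python
from collections import deque

# Illustrative model of the input queuing unit: one queue-set per customer,
# one FIFO queue per priority level (four levels, as in the embodiment).
NUM_PRIORITIES = 4

class QueuingUnit:
    def __init__(self, customers):
        # queue_sets[customer][priority] is a FIFO of that customer's packets
        self.queue_sets = {c: [deque() for _ in range(NUM_PRIORITIES)]
                           for c in customers}

    def classify(self, packet):
        """Place a packet in the queue matching the customer and priority in
        its header, as the first packet classifier directs the DMA module."""
        customer, priority = packet["customer"], packet["priority"]
        self.queue_sets[customer][priority].append(packet)
```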
  • Each customer is allotted a number of bits guaranteed to be transmitted during each cycle. This number is communicated to each of the packet switches 10, 20, 22, 24 on the network. In each packet switch, this number is placed in a corresponding location in an allocated guaranteed bit rate (GBR) array 42 in communication with an input scheduler 44.
  • Each customer is also allotted a number of bits that may be, but need not be, transmitted during each cycle. This number is also communicated to each of the packet switches 10, 20, 22, 24 on the network. In each packet switch, this number is placed in a corresponding allocated maximum burst rate (MBR) array 46, also in communication with the input scheduler 44.
  • During each cycle, the input scheduler 44 selects from the input queue-sets 38a-d those data packets that are to be transmitted during that cycle.
  • The procedure used by the input scheduler 44 is a weighted round-robin in which each customer's data packets are selected on the basis of that customer's maximum burst rate, the maximum burst rates of all other customers, and the overall bandwidth of the trunk 18.
  • FIG. 3 shows the weighted round-robin procedure 48 followed by the input scheduler in selecting data packets for transmission.
  • The input scheduler first determines the bandwidth of the trunk serving the wide area network (step 50).
  • Next, the input scheduler looks up the guaranteed bit rate for each local customer (step 52). Since these bit rates are guaranteed, the input scheduler must accept data packets offered by all local customers to the extent that the number of such data packets does not cause that local customer to exceed his guaranteed bit rate (step 54).
  • Although the input scheduler 44 could service a particular local customer completely before moving on to the next customer, such an algorithm presents several disadvantages. It would be unfair, for example, for a customer who only needs to send one data packet during a cycle to have to wait until another customer has finished sending hundreds of data packets. As a result, in the preferred embodiment, the input scheduler 44 accepts only a limited number of data packets from each customer during each iteration of the round-robin.
  • The input scheduler then determines how much trunk bandwidth is left over (step 56). In no case is this residual trunk bandwidth less than the trunk bandwidth reduced by the sum of the guaranteed bit rates for all customers, both local and remote. In fact, because data communication tends to occur in short bursts, there may be many cycles during which only a few customers offer data packets for transmission. During such cycles, the residual trunk bandwidth can be considerably greater.
  • The next step is to equitably allocate this residual trunk bandwidth among all customers (step 58).
  • The input scheduler first looks up the maximum burst rate for each customer (step 60). Then, to the extent that there exist data packets offered for transmission, the input scheduler selects from each customer's queue-set an equitable number of data packets (step 62). In the preferred embodiment, this equitable number is proportional to the ratio of a particular customer's maximum burst rate to the sum of the maximum burst rates of all local customers, weighted by the residual trunk bandwidth.
  • When the input scheduler selects a data packet from a particular customer's queue-set, it must decide from which of the individual queues within the queue-set the data packet is to be retrieved. This decision is made whether the data packet is being selected to consume that customer's guaranteed bit rate (step 54) or to consume residual bandwidth (step 62).
  • The order in which individual queues from a queue-set are selected and the number of data packets to be selected from each queue can be adjusted by the customer.
  • The customer does so by adjusting the queue weights in a weighted round-robin implemented by the input scheduler 44.
  • The input scheduler performs this weighted round-robin procedure as part of selecting data packets to meet the customer's guaranteed bit rate (step 54) and as part of selecting data packets to consume residual trunk bandwidth (step 62).
  • The number of weights available for adjustment in the weighted round-robin is equal to the number of queues in each queue-set.
  • For example, the customer can specify that all data packets from high priority queues within the queue-set are to be sent before any data packets from lower priority queues are sent.
  • Alternatively, the customer can specify that data packets from lower priority queues can be sent once a specified number of data packets from higher priority queues have been sent. This feature is useful when network congestion results in the need to drop certain data packets.
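One plausible reading of the customer-adjustable queue weights is a weighted round-robin in which each weight sets how many packets a priority queue may contribute per round. The sketch below is an assumption about the mechanism; the patent states only that the selection order and per-queue counts are governed by adjustable weights.

```python
from collections import deque

def weighted_round_robin(queues, weights, budget):
    """Drain up to `budget` packets from one customer's queue-set.

    queues  -- list of FIFOs, index 0 = highest priority
    weights -- packets-per-round for each queue; setting a lower-priority
               weight to 0 approximates strict high-priority-first service
    budget  -- total packets to select this pass
    """
    selected = []
    while budget > 0 and any(queues):
        progress = False
        for q, w in zip(queues, weights):
            # Each queue contributes at most its weight per round.
            for _ in range(min(w, len(q), budget)):
                selected.append(q.popleft())
                budget -= 1
                progress = True
        if not progress:  # remaining packets sit only in weight-0 queues
            break
    return selected
```

With weights (2, 1) a customer sends two high-priority packets for every low-priority one, which matches the "specified number from higher priority queues first" behavior described above.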
  • The input scheduler 44 sends data packets to a second packet classifier 64.
  • This second packet classifier 64 also receives data packets transmitted by other packet switches 20, 22, 24 on the trunk 18 serving the wide area network.
  • The second packet classifier 64 determines the destination of each data packet that it receives. If the destination of that packet is one of the local customers 16a-d, the second packet classifier 64 routes that packet to a local queuing unit 66. If the destination of the packet is a remote customer, the second packet classifier 64 routes the packet to a network queuing unit 68.
  • The network queuing unit 68 maintains as many queue-sets 70 as there are customers in all local area networks serviced by all the packet switches 10, 20, 22, 24. Each such queue-set has as many queues as there are priority levels.
  • The network queuing unit 68 therefore maintains a queue structure identical to that maintained by the input queuing unit 36, with the exception that the number of queue-sets is equal to the sum of the number of local customers 16a-d and the number of remote customers 26, 28, 30.
  • A network output scheduler 72 selects packets from the queue-sets maintained by the network queuing unit 68 and transmits those packets onto the trunk 18 serving the wide area network.
  • The network output scheduler 72 is in communication with a usage monitor 74 that maintains two counter-arrays: a guaranteed bit rate (GBR) counter array 76 and a maximum burst rate (MBR) counter array 78. Like the input scheduler 44, the network output scheduler 72 selects data packets from the queue-sets 70 on the basis of the guaranteed bit rate and maximum burst rate of each customer. Each element of the GBR counter array 76 is initialized to the corresponding value in the allocated GBR array 42. Similarly, each element of the MBR counter array 78 is initialized to the corresponding value in the allocated MBR array 46.
  • The local queuing unit 66 maintains one queue-set 80a-d for each local customer, with each queue-set having one queue for each priority.
  • The queue structure maintained by the local queuing unit 66 is therefore identical to that maintained by the input queuing unit 36.
  • A local output scheduler 82 is in communication with the usage monitor 74.
  • The operation of the local output scheduler 82 and that of the network output scheduler 72 are essentially identical and best understood with reference to FIGS. 4 and 5. Because of the similarity in the operation of both output schedulers, the following discussion is written as it applies to the network output scheduler 72. It will be understood by one of ordinary skill in the art that the local output scheduler 82 operates in a like manner.
  • The network output scheduler 72 first determines whether all customers have depleted their respective allotments of guaranteed network usage (step 83). If so, the network output scheduler 72 begins the process of depleting the customers' allotments of supplemental network access (step 84), as discussed below in connection with FIG. 5. Otherwise, the network output scheduler 72 proceeds to the next customer (step 85) and polls that customer's queue-set for traffic (step 86). If the network output scheduler 72 detects an empty queue-set, it proceeds to the next customer (step 87).
  • When the network output scheduler 72 detects traffic on the input queue-set for a particular customer, it interrogates the usage monitor 74 to determine whether the corresponding element of the GBR counter array 76 indicates that the customer's allotment of guaranteed network usage has been depleted (step 88).
  • If the allotment has not been depleted, the network output scheduler 72 buffers selected packets from that customer's input queue-set for transmission on the trunk 18 (step 90). The network output scheduler 72 then updates the corresponding element of the GBR counter array 76 to reflect the customer's network usage (step 92).
  • If the allotment has been depleted, the network output scheduler 72 proceeds to the next customer and repeats the process. Any packets remaining on the bypassed customer's input queue-sets will remain there until all competing customers have passed through the loop and depleted their guaranteed allotments of network usage.
  • Once all guaranteed allotments are depleted, the network output scheduler 72 apportions the remaining time in that cycle among the customers in a manner that is proportional to their respective allotted maximum burst rates. The network output scheduler 72 does so by dividing a particular customer's maximum burst rate by the sum of all the customers' maximum burst rates, thereby generating a ratio indicative of that customer's priority relative to all other customers. The scheduler 72 then multiplies this ratio by the time remaining in the cycle. This results in the most equitable manner of sharing the remaining trunk bandwidth.
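The FIG. 4 loop (poll each queue-set, check the usage monitor's GBR counter, buffer packets for the trunk, charge the counter) might be sketched as follows. The fixed packet size, the flat per-customer FIFO, and the function name are simplifying assumptions.

```python
from collections import deque

def serve_guaranteed(queue_sets, gbr_counter, packet_bits):
    """One pass over all customers; returns packets buffered for the trunk.

    queue_sets  -- dict: customer -> FIFO of packets (priorities collapsed
                   into one queue for simplicity)
    gbr_counter -- dict: customer -> guaranteed bits remaining this cycle,
                   as tracked by the usage monitor
    packet_bits -- fixed packet size in bits (a simplification)
    """
    buffered = []
    for customer, queue in queue_sets.items():
        if not queue:                              # empty queue-set: move on
            continue
        while queue and gbr_counter[customer] >= packet_bits:
            buffered.append(queue.popleft())       # buffer for the trunk
            gbr_counter[customer] -= packet_bits   # charge the GBR counter
    return buffered
```

Packets left behind when a counter reaches zero stay queued, just as the text describes, until the supplemental (MBR) phase or the next cycle.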
  • The network output scheduler 72 then proceeds with the allocation of residual bandwidth, as shown in FIG. 5. It does so by first determining whether each customer has depleted his supplemental allotment of network access (step 93). If so, the network output scheduler 72 waits for the beginning of the next cycle (step 94). Otherwise, the network output scheduler 72 proceeds to the next customer (step 95) and again polls each queue-set for traffic (step 96). If the network output scheduler 72 detects an empty queue-set, it proceeds to the next customer (step 97).
  • When the network output scheduler 72 detects traffic on the input queue-set for a particular customer, it interrogates the usage monitor 74 to determine whether the corresponding element of the MBR counter array 78 indicates that the customer's allotment of supplemental network usage has been depleted (step 98).
  • If the allotment has not been depleted, the network output scheduler 72 buffers selected packets from that customer's queue-set for transmission on the trunk 18 (step 100). The network output scheduler 72 then updates the MBR counter array 78 to reflect the customer's network usage (step 102). This procedure is repeated until the beginning of the next interval, at which point each customer receives a new allotment of guaranteed network access and a new allotment of supplemental network access.
  • When the network output scheduler retrieves a data packet from a particular customer's queue-set, it must decide from which of the individual queues within the queue-set the data packet is to be retrieved. This decision is made whether the data packet is being selected to consume that customer's guaranteed bit rate (step 90) or to consume residual bandwidth (step 98).
  • The order in which individual queues from a queue-set are selected and the number of data packets to be selected from each queue can be adjusted by the customer.
  • The customer does so by adjusting the queue weights, and hence the queue priorities, in a weighted round-robin implemented by the network output scheduler 72.
  • The network output scheduler 72 performs this weighted round-robin procedure as part of selecting data packets to meet the customer's guaranteed bit rate (step 90) and as part of selecting data packets to consume residual trunk bandwidth (step 98).
  • The number of weights available for adjustment in the weighted round-robin is equal to the number of queues in each queue-set.
  • For example, the customer can specify that all data packets from high priority queues within the queue-set be sent before any data packets from lower priority queues are sent.
  • Alternatively, the customer can specify that data packets from lower priority queues can be sent once a specified number of data packets from higher priority queues have been sent.
  • The operation of the local output scheduler 82 is identical to that of the network output scheduler 72 as described above. Data packets selected by the local output scheduler as described in connection with FIGS. 4 and 5 proceed to an output DMA 104. If the local area network 34 is not busy, the output DMA 104 places the packet onto the local area network 34. Otherwise, the output DMA 104 passes the data packet to an output queuing unit 106 for placement in the queue-sets 108a-d pending eventual transmission onto the local area network 34.
  • A data packet sent from one remote customer to another remote customer is scheduled by the network output scheduler 72.
  • A data packet sent from a remote customer to a local customer is scheduled by the local output scheduler 82.
  • A data packet sent by a local customer to a remote customer is scheduled by the network output scheduler 72.
  • A data packet sent by a local customer to another local customer is scheduled by the local output scheduler 82.
  • The scheduling method of the invention therefore does not depend on either the source or the destination of the data packet upon which it operates.
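The four cases above reduce to a single rule: the packet's destination alone selects which output scheduler instance handles it, while the scheduling method itself is the same in both. A minimal sketch, with illustrative names:

```python
def select_scheduler(packet, local_customers):
    """Pick the output scheduler for a packet by its destination:
    local customers -> local output scheduler, everyone else -> network
    output scheduler. Names are hypothetical, not from the patent."""
    return "local" if packet["dst"] in local_customers else "network"
```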

Abstract

A packet switch for allocating access to a communication network among a plurality of customers, each having an allotment of guaranteed access to the network, includes a queuing unit for maintaining a plurality of queue-sets, each of which accepts data packets from a corresponding customer. The switch also includes a usage monitor for monitoring the extent to which each customer has depleted his allotment of guaranteed access. The usage monitor and the queuing unit both communicate with a scheduler that retrieves a data packet for transmission on the network. The queue-set is selected on the basis of the usage information stored by the usage monitor.

Description

PACKET SWITCH INCLUDING USAGE MONITOR AND SCHEDULER
This invention relates to packet switches for communication networks, and in particular to packet switches for allocating network access among customers.
BACKGROUND
A network typically includes a number of customers all of whom share a common transmission line, path, or trunk. Since only one customer can use the trunk at any instant, a procedure must exist for permitting one customer to use the trunk while excluding all other customers.
In a conventional packet-switched network, such as the ethernet, this allocation procedure is at its simplest. A customer who wants to send a packet on the trunk determines whether the trunk is in use. If the trunk is not in use, the customer places a packet on the trunk. If the trunk is already in use, the customer waits and tries again.
Because data communication between computers on a network tends to be sporadic, with long periods of silence between data bursts, the foregoing procedure is adequate for computer networks having a limited number of users. If the number of users is not too large, the average waiting time for use of the trunk remains relatively low.
A disadvantage of this conventional allocation procedure is its unpredictability. With the growth of the number of customers using a network comes increased traffic and progressively longer waits for network access. In addition, with even a small number of customers it is possible for a single user to monopolize the network for extended periods. Consequently, in the conventional allocation procedure, it is not possible to guarantee to any one customer a fixed amount of network access.
A conventional approach to guaranteeing to a customer a fixed amount of network access is to allocate specific time slots to each customer. In such a system, a customer takes a turn at using the network for a limited time. When all the other customers have had their turn, that customer takes another turn at using the network for another limited time.
Although the foregoing method, referred to as "time-division multiplexing," does succeed in guaranteeing a lower bound on a customer's access to the network, it also creates an upper bound on that access. In a system that uses time-division multiplexing, a customer is always precluded from using the network during a competing customer's time slot. It is immaterial, in such a system, whether or not a competing customer actually needs to use the network when it is his turn to do so. Because data communication occurs in bursts, with long periods of silence between bursts, there is a significant probability in such a network that time slots will remain unused, and hence wasted.
It is desirable, therefore, to provide a system and method for allocating the use of a network among a community of customers that guarantees a lower bound of network access but that nevertheless accommodates the sporadic nature of data communications.
SUMMARY
The present invention addresses the disadvantages of the art by providing a packet switch for allocating network access among a plurality of network users, each of whom has an allotment of guaranteed access to a network.
The packet switch includes a queuing unit for maintaining a plurality of queue-sets. Each queue-set corresponds to a user from the plurality of network users. The queue-set corresponding to a particular user accepts data packets from that user to the exclusion of other users.
The packet switch further includes a usage monitor that tracks the extent to which each user has depleted his allotment of guaranteed access. On the basis of this usage information, a scheduler, in communication with both the queuing unit and the usage monitor, selects a queue-set and retrieves from that queue-set a data packet for transmission on the network.
The scheduler selects first those packets from queue-sets associated with customers who have not depleted their allotment of guaranteed network access. In an optional feature of the invention, once all customers have received their guaranteed allotments, the scheduler grants network access on the basis of a supplemental allotment provided to each customer. Customers who have been allocated higher supplemental allotments receive proportionately more network access than customers who have been allocated lower supplemental allotments.
In a preferred embodiment, each queue-set includes a plurality of queues, each corresponding to a different data packet priority. A data classifier causes data packets to be placed in the correct priority queue. The scheduler selects data packets among the queues according to customer-provided queue weights for each queue. If, as a result of network congestion, a particular packet must be dropped, the scheduler preferentially drops those in a lower priority queue before those in a higher priority queue. In this way, the packet switch ensures that the most important packets are most likely to be transmitted on the network even though they may have been queued later than the less important packets.
These and other features of the invention will be apparent upon reading the following detailed description in connection with the figures in which:
BRIEF DESCRIPTION OF THE FIGURES
FIG. 1 shows a network having a packet switch incorporating the subject matter of the invention; and
FIG. 2 is a block diagram of the architecture of a selected packet switch from FIG. 1;
FIG. 3 shows the steps implemented by the input scheduler shown in FIG. 2;
FIG. 4 shows the steps implemented by the output scheduler shown in FIG. 2 in satisfying a customer's GBR; and
FIG. 5 shows the steps implemented by the output scheduler shown in FIG. 2 in satisfying a customer's MBR.
DETAILED DESCRIPTION
Referring to FIG. 1, a packet switch 10 incorporating the principles of the invention has a local area network (LAN) interface 12 and a wide area network (WAN) interface 14. The LAN interface 12 is in communication with a plurality of local customers 16a-d on a packet-switched network. The wide area network interface 14 is in communication with a trunk 18 serving a wide area network. Additional packet switches 20, 22, 24, each of which is likewise in communication with a plurality of remote customers 26, 28, 30, are also connected to the trunk 18. An example of a packet-switched network suitable for connection to the LAN interface 12 is an Ethernet. A network suitable for use as the wide area network 18 is a telecommunication network such as a SONET (Synchronous Optical Network) ring.
Each local customer 16a-d is guaranteed an allotment of network usage time. This allotment, which translates into a guaranteed bit rate, is assigned by a service provider on the basis of how much network access the customer is willing to pay for and on how much network access the service provider can guarantee. The amount of network access that the service provider can guarantee depends on the difference between the maximum bit rate of the wide area network trunk 18 and the extent to which that maximum bit rate has already been committed to other customers. Preferably, the sum of the guaranteed bit rates for all the customers, both local customers 16a-d and remote customers 26, 28, 30, is less than or equal to the bandwidth of the trunk 18 serving the wide area network.
Each customer 16a-d is also provided with a supplemental allotment of network access that translates into a maximum burst rate. This maximum burst rate lies between the guaranteed bit rate and the bandwidth, or carrying capacity, of the wide area network trunk 18. The service provider assigns a maximum burst rate on the basis of how much network access the customer is willing to buy and the available bandwidth of the wide area network trunk 18.
Each packet switch 10, 20, 22, 24 shown in FIG. 1 guarantees that each customer will obtain his guaranteed allotment of network usage. If a customer has already depleted his allotment, the packet switches 10, 20, 22, 24 determine if all the competing customers have had their guaranteed allotments satisfied. If this is the case, and if the network is idle, the packet switches 10, 20, 22, 24 grant each customer supplemental network access. The amount of supplemental network access granted to each customer depends on that customer's maximum burst rate, the maximum burst rates of all competing customers, and the unused capacity of the trunk 18.
It will be apparent from the description below that both the guaranteed allotment and the supplemental allotment can be changed through software, without the need to alter existing hardware. This feature of the invention reduces the cost of altering service to any particular customer and also enhances the customer's flexibility. Because both allotments can easily be changed by software, a customer can experiment with different combinations in order to find a combination suitable for his needs.
A typical packet switch 10, shown in greater detail in FIG. 2, includes a first packet classifier 32 in communication with the local customers 16a-d on a packet-switched network 34. Each packet transmitted by a customer 16a includes a header that contains information identifying the customer 16a and information indicating the priority that the customer 16a has assigned to the packet.
As shown in FIG. 2, an input queuing unit 36 maintains separate input queue-sets 38a-d for each customer 16a-d on the network 34. In the preferred embodiment, each input queue-set 38a-d includes four queues corresponding to four priority levels. However, a system incorporating the invention can have any number of priority levels or only one priority level.
On the basis of a customer's identity, the first packet classifier 32 instructs an input DMA (direct memory access) module 40 to place the incoming packet into the input queue-set corresponding to that customer. On the basis of the packet's priority, the first packet classifier 32 instructs the input DMA module 40 to place the incoming packet in the particular queue from that customer's input queue-set that corresponds to the packet's priority.
It is useful to consider the packet switch 10 as operating in successive arbitration cycles. At the beginning of each cycle, each customer is allotted a number of bits guaranteed to be transmitted during that cycle. This number is communicated to each of the packet switches 10, 20, 22, 24 on the network. In each packet switch, this number is placed in a corresponding location in an allocated guaranteed bit rate (GBR) array 42 in communication with an input scheduler 44. Similarly, each customer is allotted a number of bits that may be, but need not be, transmitted during that cycle. This number is also communicated to each of the packet switches 10, 20, 22, 24 on the network. In each packet switch, this number is placed in a corresponding allocated maximum burst rate (MBR) array 46, also in communication with the input scheduler 44.
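The per-cycle bookkeeping described above can be sketched as a small data model. This is an illustrative sketch only, not the patented hardware: the class names, the dictionary representation of the allocated GBR and MBR arrays, and the per-customer keys are assumptions; only the four-queue queue-set structure is taken from the preferred embodiment.

```python
from collections import deque

NUM_PRIORITIES = 4  # preferred embodiment: four priority levels per queue-set

class QueueSet:
    """One FIFO queue per priority level, owned by a single customer."""
    def __init__(self, num_priorities=NUM_PRIORITIES):
        self.queues = [deque() for _ in range(num_priorities)]

    def enqueue(self, packet, priority):
        # The packet classifier routes each packet by (customer, priority).
        self.queues[priority].append(packet)

class CycleState:
    """Per-cycle allotments communicated to every switch on the network."""
    def __init__(self, customers, gbr_bits, mbr_bits):
        self.queue_sets = {c: QueueSet() for c in customers}
        self.allocated_gbr = dict(gbr_bits)  # guaranteed bits this cycle (cf. array 42)
        self.allocated_mbr = dict(mbr_bits)  # burstable bits this cycle (cf. array 46)

state = CycleState(["A", "B"], {"A": 100, "B": 50}, {"A": 300, "B": 150})
state.queue_sets["A"].enqueue("pkt0", priority=0)
```

At the start of each arbitration cycle, a fresh `CycleState` would be built from the allotments broadcast to all switches; the counters that track depletion (discussed later) are then initialized from these arrays.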
At least once during a cycle, the input scheduler 44 selects from the input queue-sets 38a-d those data packets that are to be transmitted during that cycle. The procedure used by the input scheduler 44 is a weighted round-robin in which each customer's data packets are selected on the basis of that customer's maximum burst rate, the maximum burst rates of all other customers, and the overall bandwidth of the trunk 18.
FIG. 3 shows the weighted round-robin procedure 48 followed by the input scheduler in selecting data packets for transmission. The input scheduler first determines the bandwidth of the trunk serving the wide area network (step 50). The input scheduler then looks up the guaranteed bit rate for each local customer (step 52). Since these bit rates are guaranteed, the input scheduler must accept data packets offered by all local customers to the extent that the number of such data packets does not cause that local customer to exceed his guaranteed bit rate (step 54).
Although the input scheduler 44 could service a particular local customer completely before moving on to the next customer, such an algorithm presents several disadvantages. It would be unfair, for example, for a customer who only needs to send one data packet during a cycle to have to wait until another customer has finished sending hundreds of data packets. As a result, in the preferred embodiment, the input scheduler 44 accepts only a limited number of data packets from each customer during each iteration of the round-robin.
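The fairness point above, taking a bounded number of packets per customer per pass rather than draining one customer completely, can be illustrated with a simple bounded round-robin. The per-pass limit (`quantum`) is an assumed parameter; the patent does not fix its value.

```python
from collections import deque

def bounded_round_robin(queues, quantum):
    """Visit customers cyclically, taking at most `quantum` packets
    from each per pass, until every queue is empty."""
    order = []
    while any(queues.values()):
        for customer, q in queues.items():
            for _ in range(min(quantum, len(q))):
                order.append(q.popleft())
    return order

# A customer with one packet is not stuck behind a customer with many:
queues = {"A": deque(["A0", "A1", "A2", "A3", "A4"]), "B": deque(["B0"])}
print(bounded_round_robin(queues, quantum=2))
# ['A0', 'A1', 'B0', 'A2', 'A3', 'A4'] -- B0 waits behind only two of A's packets
```

With exhaustive service, B0 would instead wait behind all five of A's packets; the bound trades a little per-customer burst efficiency for low worst-case waiting time.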
After having met the guaranteed bit rates for each local customer, the input scheduler determines how much trunk bandwidth is left over (step 56). In no case is this residual trunk bandwidth less than the trunk bandwidth reduced by the sum of the guaranteed bit rates for all customers, both local and remote. In fact, because data communication tends to occur in short bursts, there may be many cycles during which only a few customers offer data packets for transmission. During such cycles, the residual trunk bandwidth can be considerably greater.
Having determined the residual trunk bandwidth, the next step is to equitably allocate this residual trunk bandwidth among all customers (step 58). To do so, the input scheduler first looks up the maximum burst rate for each customer (step 60). Then, to the extent that there exist data packets offered for transmission, the input scheduler selects from each customer's queue-set an equitable number of data packets (step 62). In the preferred embodiment, this equitable number is proportional to the ratio of a particular customer's maximum burst rate to the sum of the maximum burst rates of all local customers weighted by the residual trunk bandwidth.
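The equitable split of steps 58-62 reduces to a ratio computation. The sketch below assumes that the "equitable number" is expressed in bits of residual bandwidth (convertible to a packet count); the function name is illustrative.

```python
def residual_shares(residual_bw, mbr):
    """Split residual trunk bandwidth among customers in proportion
    to each customer's maximum burst rate (MBR)."""
    total_mbr = sum(mbr.values())
    if total_mbr == 0:
        return {c: 0 for c in mbr}
    return {c: residual_bw * rate / total_mbr for c, rate in mbr.items()}

# Customer A bought twice B's burst rate, so A gets twice B's share:
shares = residual_shares(residual_bw=900, mbr={"A": 200, "B": 100})
print(shares)  # {'A': 600.0, 'B': 300.0}
```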
Whenever the input scheduler selects a data packet from a particular customer's queue-set, that input scheduler must decide from which of the individual queues within the queue-set the data packet is to be retrieved. This decision is made whether the data packet is being selected to consume that customer's guaranteed bit rate (step 54) or to consume residual bandwidth (step 62).
In the preferred embodiment, the order in which individual queues from a queue-set are selected and the number of data packets to be selected from each queue can be adjusted by the customer. The customer does so by adjusting the queue weights in a weighted round-robin implemented by the input scheduler 44. The input scheduler performs this weighted round-robin procedure as part of selecting data packets to meet the customer's guaranteed bit rate (step 54) and as part of selecting data packets to consume residual trunk bandwidth (step 62).
The number of weights available for adjustment in the weighted round-robin is equal to the number of queues in each queue-set. For example, the customer can specify that all data packets from high priority queues within the queue-set are to be sent before any data packets from lower priority queues are sent. Alternatively, the customer can specify that data packets from lower priority queues can be sent once a specified number of data packets from higher priority queues have been sent. This feature is useful when network congestion results in the need to drop certain data packets. By correctly adjusting the priorities of each queue, a customer can cause the transmission of a higher priority packet before the transmission of a lower priority packet even though the higher priority packet may have entered the input queue-set after the lower priority packet.
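The customer-tunable queue weights can be modeled as a weighted round-robin across the priority queues of one queue-set. The weight values and the `budget` parameter below are illustrative assumptions; strict high-before-low priority corresponds to giving the high-priority queue an effectively unbounded weight.

```python
from collections import deque

def select_from_queue_set(queues, weights, budget):
    """Dequeue up to `budget` packets, visiting queues from highest
    to lowest priority and taking at most `weights[i]` packets from
    queue i on each pass."""
    selected = []
    while len(selected) < budget and any(queues):
        for i, q in enumerate(queues):
            take = min(weights[i], len(q), budget - len(selected))
            for _ in range(take):
                selected.append(q.popleft())
    return selected

# Weight 2 for the high-priority queue, 1 for the low-priority queue:
qs = [deque(["hi1", "hi2", "hi3"]), deque(["lo1", "lo2"])]
print(select_from_queue_set(qs, weights=[2, 1], budget=4))
# ['hi1', 'hi2', 'lo1', 'hi3'] -- low priority is served, but less often
```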
Referring back to FIG. 2, the input scheduler 44 sends data packets to a second packet classifier 64. This second packet classifier 64 also receives data packets transmitted by other packet switches 20, 22, 24 on the trunk 18 serving the wide area network.
The second packet classifier 64 determines the destination of each data packet that it receives. If the destination of that packet is one of the local customers 16a-d, the second packet classifier 64 routes that packet to a local queuing unit 66. If the destination of the packet is a remote customer, the second packet classifier 64 routes the packet to a network queuing unit 68.
The network queuing unit 68 maintains as many queue-sets 70 as there are customers in all local area networks serviced by all the packet switches 10, 20, 22, 24. Each such queue-set has as many queues as there are priority levels. The network queuing unit 68 therefore maintains a queue structure identical to that maintained by the input queuing unit 36 with the exception that the number of queue-sets is equal to the sum of the number of local customers 16a-d and the number of remote customers 26, 28, 30.
Using a deficit round-robin algorithm, a network output scheduler 72 selects packets from the queue-sets maintained by the network queuing unit 68 and transmits those packets onto the trunk 18 serving the wide area network.
The network output scheduler 72 is in communication with a usage monitor 74 that maintains two counter-arrays: a guaranteed bit rate (GBR) counter array 76 and a maximum burst rate (MBR) counter array 78. Like the input scheduler 44, the network output scheduler 72 selects data packets from the queue-sets 70 on the basis of the guaranteed bit rate and maximum burst rate of each customer. Each element of the GBR counter array 76 is initialized to the corresponding value in the allocated GBR array 42. Similarly, each element of the MBR counter array 78 is initialized to the corresponding value in the allocated MBR array 46.
Like the input queuing unit 36, the local queuing unit 66 maintains one queue-set 80a-d for each local customer, with each queue-set having one queue for each priority. The queue structure maintained by the local queuing unit 66 is therefore identical to that maintained by the input queuing unit 36.
Like the network output scheduler 72, a local output scheduler 82 is in communication with the usage monitor 74. The operation of the local output scheduler 82 and that of the network output scheduler 72 are essentially identical and best understood with reference to FIGS. 4 and 5. Because of the similarity in the operation of both output schedulers, the following discussion is written as it applies to the network output scheduler 72. It will be understood by one of ordinary skill in the art that the local output scheduler 82 operates in a like manner.
Referring first to FIG. 4, the network output scheduler 72 determines if all customers have depleted their respective allotments of guaranteed network usage (step 83). If so, the network output scheduler 72 begins the process of depleting the customers' allotments of supplemental network access (step 84) as discussed below in connection with FIG. 5. Otherwise, the network output scheduler 72 proceeds to the next customer (step 85) and polls that customer's queue-set for traffic (step 86). If the network output scheduler 72 detects an empty queue-set, it proceeds to the next customer (step 87). If the network output scheduler 72 detects traffic on the input queue-set for a particular customer, it interrogates the usage monitor 74 to determine if the corresponding element of the GBR counter array 76 indicates that the customer's allotment of guaranteed network usage has been depleted (step 88).
If the corresponding element of the GBR counter array 76 indicates that the customer's allotment of guaranteed network usage has not been depleted, the network output scheduler 72 buffers selected packets from that customer's input queue-set for transmission on the trunk 18 (step 90). The network output scheduler 72 then updates the corresponding element of the GBR counter array 76 to reflect the customer's network usage (step 92).
If the corresponding element of the GBR counter array 76 indicates that the customer has already depleted his allotment of guaranteed network usage, the network output scheduler 72 proceeds to the next customer and repeats the process. Any packets remaining on the bypassed customer's input queue-sets will remain there until all competing customers have passed through the loop and depleted their guaranteed allotments of network usage.
It is possible that at some time during the cycle, all the customers will have depleted their respective guaranteed allotments of network usage. It is also possible that at some time during the cycle, the only customers who have traffic to send are those who have already depleted their guaranteed allotment of network access. When either of these conditions occurs, the network output scheduler 72 apportions the remaining time in that cycle among the customers in a manner that is proportional to their respective allotted maximum burst rates. The network output scheduler 72 does so by dividing a particular customer's maximum burst rate by the sum of all the customers' maximum burst rates, thereby generating a ratio indicative of that customer's priority relative to all other customers. The scheduler 72 then multiplies this ratio by the time remaining in the cycle. This results in the most equitable manner of sharing the remaining trunk bandwidth.
Having attended to satisfying the guaranteed allotment of network access for each customer, as shown in FIG. 4, the network output scheduler 72 proceeds with the allocation of residual bandwidth, as shown in FIG. 5. It does so by first determining if each customer has depleted his supplemental allotment of network access (step 93). If so, the network output scheduler 72 waits for the beginning of the next cycle (step 94). Otherwise, the network output scheduler 72 proceeds to the next customer (step 95) and again polls each queue-set for traffic (step 96). If the network output scheduler 72 detects an empty queue-set, it proceeds to the next customer (step 97). If the network output scheduler 72 detects traffic on the input queue-set for a particular customer, it interrogates the usage monitor 74 to determine if the corresponding element of the MBR counter array 78 indicates that the customer's allotment of supplemental network usage has been depleted (step 98).
If the corresponding element of the MBR counter array 78 indicates that the customer's allotment of supplemental network usage has not been depleted, the network output scheduler 72 buffers selected packets from that customer's queue-set for transmission on the trunk 18 (step 100). The network output scheduler 72 then updates the corresponding element of the MBR counter array 78 to reflect the customer's network usage (step 102). This procedure is repeated until the beginning of the next interval, at which point each customer receives a new allotment of guaranteed network access and a new allotment of supplemental network access.
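The two passes of FIGS. 4 and 5 can be combined into one cycle-level sketch: a guaranteed pass that serves queue-sets while GBR counters last, followed by a supplemental pass gated on the MBR counters, with empty queue-sets and depleted customers skipped. This is a simplified model under stated assumptions: a uniform packet size, counters kept in bits, and a single queue per customer (the per-priority queue selection described in the surrounding paragraphs is omitted here); the function name is illustrative.

```python
from collections import deque

PACKET_BITS = 100  # assumed uniform packet size, in bits

def run_cycle(queue_sets, gbr, mbr):
    """One arbitration cycle: serve guaranteed allotments first,
    then supplemental allotments, skipping empty queue-sets and
    customers whose counters are depleted."""
    sent = []
    for counters in (gbr, mbr):          # FIG. 4 pass, then FIG. 5 pass
        progress = True
        while progress:
            progress = False
            for customer, q in queue_sets.items():
                if q and counters[customer] >= PACKET_BITS:
                    sent.append(q.popleft())       # buffer for the trunk
                    counters[customer] -= PACKET_BITS  # update usage counter
                    progress = True
    return sent

queue_sets = {"A": deque(["a1", "a2", "a3"]), "B": deque(["b1"])}
gbr = {"A": 100, "B": 100}   # each customer guaranteed one packet
mbr = {"A": 200, "B": 0}     # only A bought supplemental access
print(run_cycle(queue_sets, gbr, mbr))  # ['a1', 'b1', 'a2', 'a3']
```

Note that B's single packet goes out during the guaranteed pass, ahead of A's excess traffic, even though A has far more data queued: guaranteed allotments are satisfied before any supplemental access is granted.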
Whenever the network output scheduler retrieves a data packet from a particular customer's queue-set, it must decide from which of the individual queues within the queue-set the data packet is to be retrieved. This decision is made whether the data packet is being selected to consume that customer's guaranteed bit rate (step 90) or to consume residual bandwidth (step 100).
As noted above, in the preferred embodiment, the order in which individual queues from a queue-set are selected and the number of data packets to be selected from each queue can be adjusted by the customer. The customer does so by adjusting the queue weights, and hence the queue priorities, in a weighted round-robin implemented by the network output scheduler 72. The network output scheduler 72 performs this weighted round-robin procedure as part of selecting data packets to meet the customer's guaranteed bit rate (step 90) and as part of selecting data packets to consume residual trunk bandwidth (step 100).
The number of weights available for adjustment in the weighted round-robin is equal to the number of queues in each queue-set. For example, the customer can specify that all data packets from high priority queues within the queue-set be sent before any data packets from lower priority queues are sent. Alternatively, the customer can specify that data packets from lower priority queues can be sent once a specified number of data packets from higher priority queues have been sent.

The operation of the local output scheduler 82 is identical to that of the network output scheduler 72 as described above. Data packets selected by the local output scheduler as described in connection with FIGS. 4 and 5 proceed to an output DMA 104. If the local area network 34 is not busy, the output DMA 104 places the packet onto the local area network 34. Otherwise, the output DMA 104 passes the data packet to an output queuing unit 106 for placement in the queue-sets 108a-d pending eventual transmission onto the local area network 34.
It is apparent that the scheduling method of the invention is applied along four different paths within the packet switch 10. A data packet sent from one remote customer to another remote customer is scheduled by the network output scheduler 72. A data packet sent from a remote customer to a local customer is scheduled by the local output scheduler 82. A data packet sent by a local customer to a remote customer is scheduled by the network output scheduler 72. Finally, a data packet sent by a local customer to another local customer is scheduled by the local output scheduler 82. The scheduling method of the invention therefore does not depend on either the source or the destination of the data packet upon which it operates.
Having described the invention and a preferred embodiment thereof, what is claimed as new and secured by letters patent is:

Claims

1. A packet switch for allocating access to a communication network among a plurality of customers, each of said customers having an allotment of guaranteed access, said packet switch comprising:
a queuing unit for maintaining a plurality of queue-sets, each of said queue-sets accepting a data packet from a corresponding customer from said plurality of customers;
a usage monitor for storing, for each customer from said plurality of customers, usage information indicative of an extent to which said customer has depleted said allotment of guaranteed access; and
a scheduler in communication with said usage monitor and said queuing unit, said scheduler retrieving, from a selected queue-set from said plurality of queue-sets, a data packet for transmission on said communication network, said selected queue-set being selected on the basis of said usage information from said usage monitor.
2. The packet switch of claim 1 wherein said scheduler comprises:
a first polling element for determining an unused portion of a selected customer's allotment of guaranteed network access; and
a first dequeuing element for selecting, on the basis of said unused portion, a first allotment of data packets for transmission on said network.
3. The packet switch of claim 2 wherein each of said plurality of customers has a supplemental allotment of network access, and said scheduler further comprises:
a monitoring element for detecting the existence of residual bandwidth on said communication network;

a second polling element for determining an unused portion of a selected customer's supplemental allotment of network access; and
a second dequeuing element for selecting, on the basis of said unused portion of said selected customer's supplemental allotment, a second allotment of data packets for transmission on said communication network.
4. The packet switch of claim 3 further comprising:
a weighting element for evaluating a weight associated with said selected customer, said weight being proportional to a ratio of said supplemental allotment of network access of said selected customer to a total of supplemental allotments of network accesses of all customers from said plurality of customers.
5. The packet switch of claim 1 wherein said scheduler comprises an allocation element for granting network access to a customer on the basis of a supplemental allotment of network access of said customer and supplemental allotments of network access of all customers from said plurality of customers.
6. The packet switch of claim 1 wherein said queue-set includes a higher-priority queue and a lower-priority queue, and said packet switch further comprises a packet classifier for placing said data packet into a queue selected from said higher-priority queue and lower-priority queue.
7. The packet switch of claim 6 wherein said queue-set includes a plurality of queues, each of said queues having an associated priority.
8. The packet switch of claim 6 wherein said scheduler selects a data packet from said higher-priority queue in preference to a data packet from said lower-priority queue.
9. The packet switch of claim 6 wherein said scheduler discards a data packet from said lower-priority queue in preference to a data packet from said higher-priority queue.
10. The packet switch of claim 1 wherein said usage monitor comprises:
a first counter for storing information indicative of an extent to which each of said customers has depleted said allotment of guaranteed network access associated with said customer; and
a second counter indicative of an extent to which each of said customers has depleted an allotment of supplemental network access associated with said customer.
11. The packet switch of claim 1 wherein said usage monitor is in communication with other packet switches in said communication network.
12. A method for allocating access to a communication network among a plurality of customers, each of said customers having an allotment of guaranteed access to said communication network, said method comprising:
maintaining a plurality of queue-sets, each of said queue-sets accepting a data packet from a corresponding customer from said plurality of customers;
storing, for each customer from said plurality of customers, usage information indicative of an extent to which said customer has depleted said allotment of guaranteed access to said communication network; and
retrieving, from a selected queue-set from said plurality of queue-sets, a data packet for transmission on said communication network, said selected queue-set being selected on the basis of said usage information.
13. The method of claim 12 wherein said retrieving step comprises:

determining an unused portion of a selected customer's allotment of guaranteed network access; and
selecting, on the basis of said unused portion, a first allotment of data packets for transmission on said communication network.
14. The method of claim 13 wherein said retrieving step further comprises:

detecting the existence of residual bandwidth on said communication network;
determining an unused portion of a selected customer's supplemental allotment of network access; and
selecting, on the basis of said unused portion, a second allotment of data packets for transmission on said communication network.
15. The method of claim 14 wherein said step of selecting said second allotment of data packets comprises evaluating a weight associated with said selected customer, said weight being proportional to a ratio of said supplemental allotment of network access of said selected customer to a total of supplemental allotments of network accesses of all customers from said plurality of customers.
16. The method of claim 12 wherein said retrieval step comprises granting network access to a customer on the basis of a supplemental allotment of network access of said customer and supplemental allotments of network access for all customers from said plurality of customers.
17. The method of claim 12 further comprising:
providing a higher-priority queue and a lower-priority queue in said queue-set; and
placing said data packet into a selected queue from said queue-set.
18. The method of claim 17 further comprising selecting a data packet from said higher-priority queue in preference to a data packet from said lower-priority queue.
19. The method of claim 17 further comprising discarding a data packet from said lower-priority queue in preference to a data packet from said higher-priority queue.
20. The method of claim 12 further comprising providing a plurality of queues, each of said queues having an associated priority.
21. The method of claim 20 further comprising controlling said associated priorities.
22. The method of claim 12 wherein said step of storing usage information further comprises:
storing information indicative of an extent to which each of said customers has depleted said allotment of guaranteed network access; and
storing information indicative of an extent to which each of said customers has depleted a supplemental allotment of network access.
23. The method of claim 12 further comprising communicating with other packet switches in said communication network to obtain information indicative of said allotment of network access.
24. A packet switch for allocating access to a communication network among a plurality of customers, each of said customers having an allotment of guaranteed access and an allotment of supplemental network access, said packet switch comprising:
a queuing unit for maintaining a plurality of queue-sets, each of said queue-sets accepting a data packet from a corresponding customer from said plurality of customers;
a usage monitor for storing, for each customer from said plurality of customers, usage information indicative of an extent to which said customer has depleted said allotment of guaranteed access and said allotment of supplemental network access; and
a scheduler in communication with said usage monitor and said queuing unit, said scheduler retrieving, from a selected queue-set from said plurality of queue-sets, a data packet for transmission on said communication network, said selected queue-set being selected on the basis of said usage information from said usage monitor.
25. A method for allocating access to a communication network among a plurality of customers, each of said customers having an allotment of guaranteed access to said communication network and an allotment of supplemental access to said communication network, said method comprising:
maintaining a plurality of queue-sets, each of said queue-sets accepting a data packet from a corresponding customer from said plurality of customers;
storing, for each customer from said plurality of customers, usage information indicative of an extent to which said customer has depleted said allotment of guaranteed access to said communication network and an extent to which said customer has depleted said allotment of supplemental access to said communication network; and
retrieving, from a selected queue-set from said plurality of queue-sets, a data packet for transmission on said communication network, said selected queue-set being selected on the basis of said usage information.
PCT/US2001/004180 2000-04-10 2001-02-09 Packet switch including usage monitor and scheduler WO2001080504A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2001236810A AU2001236810A1 (en) 2000-04-10 2001-02-09 Packet switch including usage monitor and scheduler

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US54609000A 2000-04-10 2000-04-10
US09/546,090 2000-04-10

Publications (1)

Publication Number Publication Date
WO2001080504A1 true WO2001080504A1 (en) 2001-10-25

Family

ID=24178818

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2001/004180 WO2001080504A1 (en) 2000-04-10 2001-02-09 Packet switch including usage monitor and scheduler

Country Status (2)

Country Link
AU (1) AU2001236810A1 (en)
WO (1) WO2001080504A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5392280A (en) * 1994-04-07 1995-02-21 Mitsubishi Electric Research Laboratories, Inc. Data transmission system and scheduling protocol for connection-oriented packet or cell switching networks
EP0817436A2 (en) * 1996-06-27 1998-01-07 Xerox Corporation Packet switched communication system
EP0901301A2 (en) * 1997-09-05 1999-03-10 Nec Corporation Dynamic rate control scheduler for ATM networks

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10491531B2 (en) 2016-09-13 2019-11-26 Gogo Llc User directed bandwidth optimization
US10511680B2 (en) 2016-09-13 2019-12-17 Gogo Llc Network profile configuration assistance tool
US10523524B2 (en) 2016-09-13 2019-12-31 Gogo Llc Usage-based bandwidth optimization
US11038805B2 (en) 2016-09-13 2021-06-15 Gogo Business Aviation Llc User directed bandwidth optimization
US11296996B2 (en) 2016-09-13 2022-04-05 Gogo Business Aviation Llc User directed bandwidth optimization

Also Published As

Publication number Publication date
AU2001236810A1 (en) 2001-10-30

Similar Documents

Publication Publication Date Title
KR100212104B1 (en) Method for assigning transfer capacity to network
US5675573A (en) Delay-minimizing system with guaranteed bandwidth delivery for real-time traffic
US7123622B2 (en) Method and system for network processor scheduling based on service levels
CN101057481B (en) Method and device for scheduling packets for routing in a network with implicit determination of packets to be treated as a priority
USRE44119E1 (en) Method and apparatus for packet transmission with configurable adaptive output scheduling
US6909691B1 (en) Fairly partitioning resources while limiting the maximum fair share
US5831971A (en) Method for leaky bucket traffic shaping using fair queueing collision arbitration
CA2366269C (en) Method and apparatus for integrating guaranteed-bandwidth and best-effort traffic in a packet network
US7796610B2 (en) Pipeline scheduler with fairness and minimum bandwidth guarantee
US7159219B2 (en) Method and apparatus for providing multiple data class differentiation with priorities using a single scheduling structure
US7149227B2 (en) Round-robin arbiter with low jitter
US6646986B1 (en) Scheduling of variable sized packet data under transfer rate control
US6721796B1 (en) Hierarchical dynamic buffer management system and method
US7321554B1 (en) Method and apparatus for preventing blocking in a quality of service switch
US7764703B1 (en) Apparatus and method for dynamically limiting output queue size in a quality of service network switch
KR100463697B1 (en) Method and system for network processor scheduling outputs using disconnect/reconnect flow queues
JP2000512442A (en) Event-driven cell scheduler in communication network and method for supporting multi-service categories
US6952424B1 (en) Method and system for network processor scheduling outputs using queueing
JP4163044B2 (en) BAND CONTROL METHOD AND BAND CONTROL DEVICE THEREOF
US7894347B1 (en) Method and apparatus for packet scheduling
US11336582B1 (en) Packet scheduling
US7619971B1 (en) Methods, systems, and computer program products for allocating excess bandwidth of an output among network users
WO2001080504A1 (en) Packet switch including usage monitor and scheduler
EP2063580B1 (en) Low complexity scheduler with generalized processor sharing GPS like scheduling performance
US8467401B1 (en) Scheduling variable length packets

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AU CA CN IL JP

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP