WO2012116761A1 - Scheduling for delay sensitive packets - Google Patents

Scheduling for delay sensitive packets

Info

Publication number
WO2012116761A1
WO2012116761A1 (PCT/EP2011/056010; EP2011056010W)
Authority
WO
WIPO (PCT)
Prior art keywords
packet
packets
scheduler
queues
delay sensitive
Prior art date
Application number
PCT/EP2011/056010
Other languages
French (fr)
Inventor
Orazio Toscano
Sergio Lanzone
Original Assignee
Telefonaktiebolaget L M Ericsson (Publ)
Application filed by Telefonaktiebolaget L M Ericsson (Publ)
Publication of WO2012116761A1

Classifications

    • H — Electricity; H04 — Electric communication technique; H04L — Transmission of digital information, e.g. telegraphic communication
    • H04L 47/00 — Traffic control in data switching networks
    • H04L 47/6215 — Queue scheduling characterised by scheduling criteria; individual queue per QoS, rate or priority
    • H04L 47/32 — Flow control or congestion control by discarding or delaying data units, e.g. packets or frames
    • H04L 47/58 — Changing or combining different scheduling modes, e.g. multimode scheduling
    • H04L 47/6275 — Queue scheduling for service slots or service orders, based on priority
    • H04L 47/628 — Queue scheduling for service slots or service orders, based on packet size, e.g. shortest packet first
    • H04L 47/527 — Quantum based scheduling, e.g. credit or deficit based scheduling or token bank
    • H04L 47/623 — Weighted service order
    • H04L 49/3027 — Packet switching elements; peripheral units, e.g. input or output ports; output queuing

Definitions

  • Figure 9 shows a schematic view of an example of a scheduler and associated queues.
  • Two of the packet queues 1-M are shown in the form of FIFO 51 and FIFO 52, having packets 1 to S in queue 1 and packets 1 to P in queue M.
  • the packets have different lengths, so the length values are stored in associated length queues, two of which are shown: length queue 1 in FIFO 76 and length queue M in FIFO 78. New incoming packets and their length values are written into these FIFOs under the control of write enable signals WEQ_1 to WEQ_M.
  • the queues of packets share the same output path and so a scheduling algorithm selects which of the queues outputs at any time.
  • a processor 72 runs the scheduling algorithm and acts as a controller for approving each output so as to avoid the free intervals.
  • Checking circuitry is provided for generating the timing of the free intervals and checking there is sufficient time to complete the sending before the start of the next free interval.
  • This circuitry includes a down counter 74 which is fed by a clock and a timing signal in the form of a delay sensitive packet transmission enable signal. This causes the counter to be preset to a value representing a number of clock periods remaining. The clock period corresponds to a time needed to send each byte of a packet.
  • An output of the counter is fed to a comparator 30 which compares the number of clock periods remaining, to the byte length of the packet selected by the scheduler, output from the corresponding length queue FIFO.
  • the correct length value is chosen by means of a signal from the processor indicating which queue is selected, fed to a decoder 84.
  • This decoder has connections to the read enable inputs of the length queue FIFOs, and activates the one corresponding to the selected queue.
  • the FIFO outputs are fed to a multiplexer 82 which is controlled from the decoder to couple the output from the selected length queue FIFO to the comparator 30.
  • the output of the comparator is fed to the processor, providing a binary packet approval signal indicating whether or not the selected packet is short enough to be transmitted without interfering with the free interval.
  • the controller uses this to approve the transmission and output an approved queue output select signal to the packet queues. If the packet is too long, the controller can cause the scheduling algorithm to pause until after the free interval, or to delay the selected packet and to continue with a subsequent packet selection step to see if a shorter packet is present which can be transmitted in time before the free interval.
  • a control signal is provided to control the output of the delay sensitive packet, and may be generated by the processor or independently of it.
  • the derivative circuit (DER) applied to the transmission enable signal will set a decremental counter to the value M-1.
  • the derivative circuit is used on many control signals in figure 9 to cut the respective signal length to a single clock period [the first one]. It is used on the write enable signals, on the read enable signals of the length queue FIFOs, and on the delay sensitive packet transmission enable signal.
  • An example of such a circuit is shown in figure 10. After the scheduler has selected a candidate (e.g. from queue number M), the selection bus will be used to generate the right FIFO read enable signal and to move the FIFO output selector accordingly, as described.
  • the comparator will be able to compare the current length with the MPS value (resulting directly from the M value; the relative logic can be included in the comparator circuit) and to assert (or not) the packet approval signal (labelled PKT APPR).
  • Figure 10 shows a typical derivative circuit DER for use in figure 9.
  • it comprises a first latch 86, a second latch 88, and an output gate 92.
  • the output of the first latch is fed to the input of the second latch, clocked by the same clock signal.
  • the output of the second latch is fed to an inverting input of the output gate.
  • a second input of the output gate is fed by the input of the first latch.
  • FIG 11 shows a view of a node according to an embodiment.
  • a switch 300 has a number of input queues 310 coupled at one side. At an exit or output side, a number of queues 50 are provided, coupled to a shared output path.
  • a scheduler 70 as described above can be provided as an egress scheduler to control the outputs of the queues. The scheduler can be applied to ingress queues as desired.
  • a delay variation compensator 320 may be provided on some or all of the output paths, to compensate for delay variations through the earlier parts of the node such as the input queues and the switch. This can be implemented in various ways. One way is to measure the delay for each packet separately then add a complementary delay so that the packets for that flow have the same delay.
  • delay sensitive packets, e.g. for synchronization purposes
  • Other current solutions e.g. work "on the flow" after the schedulers
  • the embodiments discussed can provide an improvement with little modification to the most widely adopted schedulers. This modification is simple enough to be cost effective (possibly a simple FPGA implementation), does not affect the native schedulers' good properties (fairness) and can provide very good scalability (a single circuit for all ports).
  • Other variations and embodiments can be envisaged within the claims.

Abstract

Scheduling use of an output path shared by two or more queues of variable length packets at a node of a telecommunications network involves generating (100) a timing of free intervals for use by delay sensitive packets, and using (110) a scheduling algorithm to determine a next packet for output. Before outputting the next packet, a check is made whether there is enough time to complete the outputting before the start of a next free interval. If yes, the next packet is output (130), otherwise it is delayed (140) until after the free interval without distorting the scheduling algorithm. The free interval is then used (150) for outputting the delay sensitive packets. This can enable a reduction in delay variation or total delay for the delay sensitive packets with relatively little effect on the fairness of the scheduling algorithm.

Description

SCHEDULING FOR DELAY SENSITIVE PACKETS
Technical Field: This invention relates to methods of scheduling, to schedulers for scheduling use of an output path shared by two or more queues of variable length packets, at a node of a telecommunications network, to corresponding nodes and to corresponding computer programs.
Background:
There are a number of applications requiring transmission of packets through telecommunications networks with little delay, or little delay variation, for example video streams, audio streams and network timing packets, amongst others. In some networks, timing packets are sent to provide accurate frequency and/or time synchronization references in order to properly operate, for example mobile technologies such as GSM, WCDMA and in the future LTE. A number of approaches are known for handling of delay sensitive services over a shared medium in a packet switched network handling packets of variable length. In one case the delay sensitive traffic can be prioritized by a scheduler which can be used to define a predetermined scheduling interval for the delay sensitive traffic services. Prior to allocating a time slot, a central node consults the scheduler to determine if a scheduling interval has elapsed. If an interval has elapsed, a time slot is allocated to the network node carrying the corresponding delay sensitive traffic service. If no scheduling interval has elapsed for the time slot consulted, the central node allocates a time slot to a non-delay sensitive service.
Another known technique is to measure delays through a node, and compensate or account for such delays. Another technique is to buffer the packets to regroup and resynchronize them after passing through a number of nodes.
However, delay sensitive packets are still difficult for nodes to handle efficiently without disrupting other traffic.
Summary:
An object of the invention is to provide improved apparatus or methods. According to a first aspect, the invention provides a method of scheduling use of an output path shared by two or more queues of variable length packets at a node of a telecommunications network, by generating a timing of free intervals during which intervals the queued packets are not to use the shared output path, so that it is free for use by delay sensitive packets, and, before a start of a next of the free intervals, using a scheduling algorithm to determine which of the two or more queues supplies a next packet for output. The method also involves determining before outputting the next packet, whether there is enough time to complete the outputting of the next packet before the start of the next free interval, and if there is enough time, outputting the next packet on the output path and otherwise delaying the next packet to be output after the free interval without distorting the sharing of the output path between the different queues set by the scheduling algorithm. During the free interval the shared output path is used for outputting the delay sensitive packet. This can enable a reduction in delay variation or total delay for the delay sensitive packets with relatively little effect on the existing scheduling algorithms. This can enable delay sensitive packets such as sync packets or constant bit rate video or audio to be sent through more nodes for a given amount of delay variation, or can enable nodes to be allowed to become more congested for a given amount of delay variation for example.
Another aspect of the invention provides a scheduler for scheduling use of an output path shared by two or more queues of variable length packets, at a node of a telecommunications network, the scheduler having a timer for generating a timing of free intervals during which intervals the queued packets are not to use the shared output path, so that it is free for delay sensitive packets to use, and a selector arranged to use a scheduling algorithm to select, before a start of a next of the free intervals, which of the queues is to supply a next packet for output on the shared output path. A comparator is provided to determine whether there is enough time to complete the outputting of the selected packet before the start of the next free interval, and a controller can output the next packet on the shared output path if there is enough time, and otherwise delay the next packet to be output after the free interval without distorting the sharing of the output path between the different queues set by the scheduling algorithm. During the free intervals the shared output path is used for outputting the delay sensitive packets. Other aspects of the invention provide corresponding nodes having such schedulers, corresponding networks of such nodes and corresponding computer programs for controlling nodes to carry out such methods.
Any additional features can be added to these aspects, or disclaimed from them, and some are described in more detail below. Any of the additional features can be combined together and combined with any of the aspects. Other effects and consequences will be apparent to those skilled in the art, especially in comparison with other prior art. Numerous variations and modifications can be made without departing from the claims of the present invention. Therefore, it should be clearly understood that the form of the present invention is illustrative only and is not intended to limit the scope of the present invention.
Brief Description of the Drawings:
How the present invention may be put into effect will now be described by way of example with reference to the appended drawings, in which:
Fig. 1 shows a schematic view of a scheduler according to a first embodiment,
Fig. 2 shows a view of steps of methods according to an embodiment,
Figs. 3 to 5 show steps of methods according to further embodiments,
Fig. 6 shows a timing of free intervals using a counter for use in embodiments,
Fig. 7 shows an example of queues and values used by a Deficit Round Robin Algorithm for use in embodiments,
Fig. 8 shows an example of queues and length queues for use in embodiments,
Fig. 9 shows a schematic view of a scheduler according to another embodiment,
Fig. 10 shows a derivative circuit for use in the embodiment of fig 9, and
Fig. 11 shows a schematic view of a node having a scheduler at an exit side, according to another embodiment.
Detailed Description:
The present invention will be described with respect to particular embodiments and with reference to certain drawings but the invention is not limited thereto but only by the claims. The drawings described are only schematic and are non-limiting. In the drawings, the size of some of the elements may be exaggerated and not drawn to scale for illustrative purposes.
Definitions
Where the term "comprising" is used in the present description and claims, it does not exclude other elements or steps. Where an indefinite or definite article is used when referring to a singular noun e.g. "a" or "an", "the", this includes a plural of that noun unless something else is specifically stated. The term "comprising", used in the claims, should not be interpreted as being restricted to the means listed thereafter; it does not exclude other elements or steps.
Elements or parts of the described nodes or networks may comprise logic encoded in media for performing any kind of information processing. Logic may comprise software encoded in a disk or other computer-readable medium and/or instructions encoded in an application specific integrated circuit (ASIC), field programmable gate array (FPGA), or other processor or hardware.
References to nodes can encompass any kind of switching node, not limited to the types described, not limited to any level of integration, or size or bandwidth or bit rate and so on.
References to programs or software can encompass any type of programs in any language executable directly or indirectly on processing hardware.
References to hardware, processing hardware or circuitry can encompass any kind of logic or analog circuitry, integrated to any degree, and not limited to general purpose processors, digital signal processors, ASICs, FPGAs, discrete components or logic and so on.
Abbreviations
NTP Network Time Protocol
PTP Precision Time Protocol
References
1. Temporary Document 10HA-034, G.vdsl: A Method for Accurate Distribution of Time-of-Day over VDSL2 Links, Huntsville, AL, 22-26 March, 2010
2. ITU-T G.993.2, Very high speed digital subscriber line transceivers 2 (VDSL2)
3. ITU-T Recommendation G.984.3 (03/2008) "GPON: Transmission convergence layer specification" (Amendment 2)
4. IEEE1588-2008, IEEE Standard for a Precision Clock Synchronization Protocol for Networked Measurement and Control Systems
5. Temporary Document 10GS-044, Asymmetry in Propagation Times of DSL Systems, 22-26 March, 2010
6. China Mobile (WSTS 2010), Test and Analysis of Time Synchronization Using 1588v2 for Transport Network
7. IEEE JQE, vol. QE-17, no. 6, June 1981: Optical Time Domain Reflectometry in a Single Mode Fiber
Introduction
By way of introduction to the embodiments, some issues with conventional designs will be explained.
Schedulers
Scheduling of resources such as link bandwidth and available buffers is key to providing performance guarantees to applications that require QoS support from the network. The routers and switches need to distinguish between the flows requiring different QoS (and possibly sort them into separate queues) and then, based on a scheduling algorithm, send these packets to the outgoing links.
Following are some goals to be achieved by the different scheduling techniques to support QoS in packet switching networks:
• Sharing bandwidth;
• Providing fairness to competing flows;
• Meeting bandwidth guarantees (minimum and maximum);
• Meeting loss guarantees (multiple levels);
• Meeting delay guarantees (multiple levels);
• Reducing delay variations.
Best-effort traffic doesn't demand any performance guarantees from the network. However, if there are multiple competing best effort flows, the scheduler is required to perform fair allocation of resources (such as bandwidth). The list of goals suggests that scheduling in QoS networks is nontrivial. A scheduler (server) decides the order in which it serves packets. The service order has an impact on the delay suffered by packets (and ultimately by the flows or users of these flows) waiting in the queue. Packets sharing the same source and destination address, the same source and destination ports, and the same protocol identification are considered to belong to a flow. The server can allocate bandwidth to packets from a flow by servicing a certain number of packets from that flow within a time interval. If packets are arriving at the output buffer at a rate faster than the server can serve them, packets will have to wait in the queue for service. If the buffer is of limited size, packets will be dropped. Again, the service order has an impact on packet loss, and a scheduler can be used to guarantee that the loss will stay below a given level.
As discussed earlier, fairness is an important criterion for competing best effort flows. The scheduler should allocate resources in a fair manner to these flows. Fairness is not an issue for a class-based network, where traffic demands have differing QoS (possibly a user is willing to pay more).
Priority Queuing
One simple way to provide differential treatment to flows is to use multiple queues with associated priorities. Multiple queues with different priority levels, 0 to n -1, are maintained. The number of priorities to be used will depend on the number of priority levels supported by a particular protocol. For example, the IPv4 header supports a field called type of service (ToS). The source can use this field to request preferential treatment from the network. If there are packets queued in both higher and lower priority queues, the scheduler serves packets from the higher priority queue before it attends to the lower priority queue.
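By way of illustration only, the following minimal sketch shows strict priority selection over a set of queues; the function and queue names are hypothetical and not part of the patent text.

```python
from collections import deque

def select_next_priority(queues):
    """Strict priority: serve the head of the highest-priority nonempty queue.

    `queues` is ordered from highest priority (index 0) to lowest (index n-1);
    returns the dequeued packet, or None if every queue is empty.
    """
    for q in queues:
        if q:
            return q.popleft()
    return None

# Example: a higher priority packet is always served before best-effort traffic.
high, low = deque(["timing"]), deque(["bulk1", "bulk2"])
print(select_next_priority([high, low]))  # -> 'timing'
```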
Round Robin
A simple scheduler implementation is round robin (RR) scheduling. To address the fairness problem of a single FCFS queue, the round robin scheduler maintains one queue for each flow. Each incoming packet is placed in an appropriate queue. The queues are served in a round robin fashion, taking one packet from each nonempty queue in turn. Empty queues are skipped over. This scheme is fair in that each busy flow gets to send exactly one packet per cycle. Further, this provides load balancing among the various flows. Note that there is no advantage to being greedy. A greedy flow finds that its queue becomes long, increasing its delay, whereas other flows are unaffected by this behavior.
If the packet sizes are fixed, such as in ATM networks, round robin provides a fair allocation of link bandwidth. If packet sizes are variable, which is the case in the Internet, there is a fairness problem. Consider a queue with very large packets and several other queues with very small packets. With round robin, the scheduler will come back to the large-packet queue quickly and spend long times there. On average, the large-packet queue will get the lion's share of the link bandwidth. Another problem with round robin is that it tries to allocate fair bandwidth to all queues and hence differential treatment, or any specific allocation of bandwidth to specific queues, is not achieved.
Weighted Round Robin
Weighted round robin (WRR) is a simple modification to round robin. Instead of serving a single packet from a queue per turn, it serves n packets. Here n is adjusted to allocate a specific fraction of link bandwidth to that queue. Each flow is given a weight that corresponds to the fraction of link bandwidth it is going to receive. The number of packets to serve in one turn is calculated from this weight and the link capacity.
Assume three ATM sources (same cell size) with weights of 0.75, 1.0, and 1.5, respectively. If these weights are normalized to integer values, then the three sources will be served 3, 4, and 6 ATM cells in each round.
The WRR works fine with fixed size packets, such as in ATM networks. However, WRR has difficulty in maintaining bandwidth guarantees with variable size packets (the Internet). The problem with a variable size packet is that flows with large packets will receive more than the allocated weight. In order to overcome this problem, the WRR server needs to know the mean packet size of sources a priori.
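As a sketch only, the weight normalization described above can be reproduced as follows; a fixed (ATM-style) cell size is assumed, and the function name is illustrative.

```python
from fractions import Fraction
from functools import reduce
from math import gcd

def wrr_packets_per_round(weights):
    """Convert fractional WRR weights into the smallest integer number of
    packets (cells) to serve from each queue per round, assuming fixed-size packets."""
    fracs = [Fraction(str(w)) for w in weights]
    lcm_den = reduce(lambda a, b: a * b // gcd(a, b), (f.denominator for f in fracs), 1)
    counts = [int(f * lcm_den) for f in fracs]
    g = reduce(gcd, counts)
    return [c // g for c in counts]

# Reproduces the example above: weights 0.75, 1.0 and 1.5 -> 3, 4 and 6 cells per round.
print(wrr_packets_per_round([0.75, 1.0, 1.5]))  # [3, 4, 6]
```

With variable packet sizes these counts no longer translate into the intended bandwidth shares, which is the limitation addressed by deficit round robin below.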
Deficit Round Robin
Deficit round robin (DRR) improves WRR by being able to serve variable length packets without knowing the mean packet size of connections a priori. The algorithm works as follows: initially, a quantum variable is initialized to represent the number of bits to be served from each queue. The scheduler starts serving each queue that has a packet to be served. If the packet size is less than or equal to the quantum, the packet is served. However, if the packet is bigger than the quantum size, the packet has to wait for another round. In this case another counter, called a deficit counter, is initialized for this queue. If a packet can't be served in a round, its deficit counter is incremented by the size of the quantum.
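A compact, illustrative sketch of the deficit round robin mechanics just described is given below; the quantum and the byte counts in the example queues are arbitrary values, not taken from the patent.

```python
from collections import deque

def drr_round(queues, deficits, quantum):
    """One DRR round: each nonempty queue earns `quantum` bytes of credit and
    sends head-of-line packets while its deficit covers their length."""
    sent = []
    for i, q in enumerate(queues):
        if not q:
            continue
        deficits[i] += quantum               # credit earned this round
        while q and q[0] <= deficits[i]:     # serve while the head packet fits
            pkt = q.popleft()
            deficits[i] -= pkt
            sent.append((i, pkt))
        if not q:
            deficits[i] = 0                  # unused credit is dropped when the queue goes idle
    return sent

queues = [deque([1000, 200]), deque([300]), deque()]
deficits = [0, 0, 0]
for r in (1, 2):
    print("round", r, drr_round(queues, deficits, quantum=500), deficits)
# round 1 [(1, 300)] [500, 0, 0]   -- the 1000-byte packet must wait, its credit is kept
# round 2 [(0, 1000)] [0, 0, 0]    -- enough credit has accumulated; the 200-byte packet waits for round 3
```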
Weighted Fair Queuing
For variable size packets (Internet), a complex scheduler such as weighted fair queue (WFQ) can be used. Each packet is tagged on the ingress with a value identifying, theoretically, the time the last bit of the packet should be transmitted. Each time the link is available to send a packet, the packet with the lowest tag value is selected.
The following equation is used to calculate the virtual finish time F (the time at which the router would have finished sending packet m on connection c): F(c, m) = max(F(c, m-1), R(t)) + P(c, m) / w(c), where R(t) is called a round number. This is the number of rounds a bit-by-bit round robin scheduler has completed at a given time. The round number is a variable that depends on the number of active queues to be served (inversely proportional to the active queue number). The more queues to serve, the longer a round will take to complete. P(c, m) is the time required to transmit the m-th packet from connection c, and w(c) is the weight of connection c.
The rate of change of the round number with the progress of real time will keep varying depending upon the number of active queues. A different weight can be assigned to each connection, to enable them to be served in proportion to their weights. These weights can either be configured manually or by using some form of signaling protocols.
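Assuming the standard form of the finish-time equation given above, the tag calculation can be sketched as follows; R(t) is treated as a given input, its evolution between packets is ignored for brevity, and all numeric values are invented for illustration.

```python
def finish_tag(prev_finish, round_number, packet_time, weight):
    """WFQ virtual finish time: F = max(previous F of the connection, R(t)) + P / w(c)."""
    return max(prev_finish, round_number) + packet_time / weight

# Two connections with weights 2 and 1: for equal packet sizes, the heavier
# connection gets smaller finish tags and is therefore served more often.
print(finish_tag(prev_finish=0.0, round_number=0.0, packet_time=1500, weight=2))  # 750.0
print(finish_tag(prev_finish=0.0, round_number=0.0, packet_time=1500, weight=1))  # 1500.0
```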
Introduction to features of embodiments of the invention
An aim of at least some of the embodiments is to provide more intelligence to the standard scheduler algorithms in order to avoid any circumstance where they are busy serving other queues or transmitting long packets when a delay sensitive (e.g.
PTP) packet is available to be transmitted. In other words, the schedulers have to be aware of the delay sensitive packet transmission period in order for the shared output path to be kept free at every new request.
At the same time, the proposed algorithms should not change the fairness characteristics of the schedulers, should maximize the bandwidth efficiency and should be simple, in order to be easily and cost effectively implemented in a cheap device (such as an FPGA integrated circuit device).
Considering the issue of keeping the shared transmitting circuits available at every delay sensitive packet (e.g. PTP) request, to avoid any delay for those packets, and having reviewed the logic of two of the more widely used and sophisticated schedulers (deficit round robin and weighted fair queueing), the first problem is to inform the scheduler about the time at which new delay sensitive packets will need to be issued.
With this piece of information the schedulers will have to make their scheduling decisions reserving the required resources for very delay sensitive packets (e.g. PTP), but without changing their qualities (fairness and so on) and while minimizing the bandwidth waste.
Figs 1, 2, embodiments of the invention
Figure 1 shows a schematic view of a scheduler 70 according to a first embodiment. The scheduler is arranged to control a number of queues 50 which share an output path and share the path with delay sensitive packets 60, which may be stored or may be passed through without storage. The scheduler has a timer 10 provided for generating a signal indicating a timing of a next free interval for the delay sensitive packets. This indication can be in any form, including a count down, or a single value denoting a remaining time, or a start time of the interval from which the comparator 30 can derive a remaining time from an internal clock for example. This can be used by the comparator to determine whether there is enough time before the next free interval to send the next packet from the queues without interfering with the delay sensitive packet. A selector 20 is provided for selecting a next packet from one of the queues according to a scheduling algorithm. The comparator needs to be fed with an indication of the length of the selected next packet. This can be derived from the packet at the time of selecting or can be determined earlier and stored separately. The length can be in terms of bytes or other size units, or in terms of a time needed to transmit for example. A controller 40 is provided for controlling the queues to output the next packet on the shared output path if there is enough time, and otherwise delay the next packet to be output after the free interval without distorting the sharing of the output path between the different queues set by the scheduling algorithm. During the free intervals the shared output path is used for outputting the delay sensitive packets, either controlled by the controller or independently of the controller.
Fig 2 shows some of the operational steps of this or other embodiments. At step 100 the timing of the free intervals is generated. At step 110 before a start of a next free interval, the scheduling algorithm is used to select a next packet from one of the queues. At step 120, it is determined whether there is enough time to output the next packet before the start of the next free interval. If yes, at step 130 the next packet is output. If no, at step 140 the next packet is delayed until after the free interval. After either of these options, at step 150 during the free interval the shared output path is used for outputting the delay sensitive packet, either under the control of the controller or independently.
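A minimal, self-contained sketch of steps 110 to 140 follows; time is measured in byte-times for simplicity, packets are represented only by their lengths, and all names are illustrative rather than taken from the embodiments.

```python
from collections import deque

def serve_until_free_interval(pick_next, time_left_bytes, send):
    """Keep serving packets chosen by the underlying algorithm only while each
    one fits before the next free interval (time expressed in byte-times)."""
    while time_left_bytes > 0:
        candidate = pick_next(max_len=time_left_bytes)   # steps 110/120 combined
        if candidate is None:
            break                                        # nothing eligible: wait for the free interval
        send(candidate)                                  # step 130: enough time, transmit
        time_left_bytes -= candidate

# Minimal demo: one shared FIFO of packet lengths stands in for the queues.
queue = deque([400, 1200, 300])

def pick_next(max_len):
    # Figure 2 behaviour: only the head-of-line packet is considered; if it is
    # too long it is simply delayed until after the free interval.
    if queue and queue[0] <= max_len:
        return queue.popleft()
    return None

sent = []
serve_until_free_interval(pick_next, time_left_bytes=1000, send=sent.append)
print(sent, list(queue))   # [400] [1200, 300]: the 1200-byte packet is delayed
```

The figure 3 variant described below would, instead of stopping at the oversized 1200-byte packet, go on to check whether the following 300-byte packet could still be sent in time.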
Figs 3 to 5, Additional features of some embodiments
Any additional features may be added and some are discussed below. There can be steps, if the next packet is delayed, of determining which of the queues supplies a subsequent packet using the scheduling algorithm, and determining before outputting the subsequent packet, whether there is enough time to complete the outputting of the subsequent packet before the start of the free interval. This can help limit the disruption to the scheduling caused by the free intervals. An example of this is shown in figure 3 which is similar to figure 2 but shows in between steps 140 and 150 an additional step of repeating steps 110 and 120 for a subsequent packet.
Figure 4 shows an example similar to that of figure 2, in which step 120 has a number of steps. A counter provides an output representing the time remaining to the next free interval, and the step of determining whether there is enough time comprises comparing a length of the selected packet with the counter output. This is a relatively efficient way of determining if there is enough time. As shown in figure 4, at step 122 a length value of the next packet is obtained. At step 124 the counter is used to count down. The length value is compared to the counter output, to determine if there is enough time. Appropriate scaling can be applied to either input of the comparator to ensure the inputs are comparable.
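One way to realise the comparison of step 120 with the scaling made explicit is sketched below; the clock frequency and line rate figures are placeholder examples only.

```python
def enough_time(packet_len_bytes, counter_clocks, clock_mhz, line_rate_mbps):
    """Compare the selected packet length against the down-counter output.

    The counter counts clock periods remaining to the next free interval;
    the packet length is scaled into the same unit before comparing.
    """
    clocks_per_byte = 8 * clock_mhz / line_rate_mbps   # clock periods needed per byte
    return packet_len_bytes * clocks_per_byte <= counter_clocks

# e.g. a 125 MHz clock on a 1000 Mbit/s port: one byte per clock period.
print(enough_time(packet_len_bytes=1500, counter_clocks=2000,
                  clock_mhz=125, line_rate_mbps=1000))   # True
```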
Figure 4 also shows a further step before step 110, of setting the times of the free intervals (102) according to information about the frequency of the delay sensitive packets received by the node at the time of setting up a flow of the delay sensitive packets. This helps enable the timings of the free intervals to be adapted to different flows, and to be set in advance of the actual packet arrivals.
The delay sensitive packets can belong to a constant bit rate flow. Such flows are particularly valuable as video traffic grows, so it is beneficial to be able to send them efficiently, through many nodes.
The selecting of the next packet can involve a deficit round robin scheduling step. This is one useful way of providing fair scheduling, so it is beneficial to be able to combine this with the reduction in delay variation for the delay sensitive packets without losing the fairness.
The selecting of the next packet can involve a weighted fair queueing step. This is another useful way of providing fair scheduling, so it is beneficial to be able to combine this with the reduction in delay variation for the delay sensitive packets. An example is shown in figure 5 which is similar to figure 2 but step 110 is replaced by step 112 of using a WFQ algorithm to select a next packet from the queues. Step 140 is replaced by step 142 of delaying the next packet, leaving its tag unchanged so that it will be a first choice for transmission after the free interval. The queues can be queues at an exit side of a node. This can help enable any delay variation introduced by other parts of the node to be reduced.
In some embodiments the scheduler is in a node having a packet switch, and there is a packet delay variation compensator coupled after a packet switch and before the scheduler, to compensate for variations in packet delays between different ones of the delay sensitive packets. This can complement the use of the free interval, so that delays introduced by the node are compensated, and so that no delays are added by the scheduler. The compensator can be implemented in various ways, such as gathering the delay sensitive packets into a fixed time window or resynchronising them in some other way, or by measuring a delay through the node and adding a corresponding delay so that all the delay sensitive packets are delayed by the same amount for example.
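A possible sketch of the "equal total delay" variant of the compensator is shown below; the class name, the microsecond units and the use of a node ingress timestamp are assumptions made for illustration, not details from the patent.

```python
import heapq
import itertools

class DelayVariationCompensator:
    """Hold each delay sensitive packet until a fixed residence time has elapsed,
    so that every such packet sees the same total delay through the node."""

    def __init__(self, target_delay_us):
        self.target = target_delay_us
        self._seq = itertools.count()      # tie-breaker for equal release times
        self._heap = []                    # entries: (release_time, seq, packet)

    def on_arrival(self, packet, node_ingress_us):
        # The release time is fixed relative to when the packet entered the node,
        # so a packet delayed more by the switch is held for less time here.
        heapq.heappush(self._heap, (node_ingress_us + self.target, next(self._seq), packet))

    def pop_ready(self, now_us):
        """Return, in release order, the packets whose complementary delay has expired."""
        ready = []
        while self._heap and self._heap[0][0] <= now_us:
            ready.append(heapq.heappop(self._heap)[2])
        return ready

comp = DelayVariationCompensator(target_delay_us=500)
comp.on_arrival("sync-1", node_ingress_us=0)
comp.on_arrival("sync-2", node_ingress_us=120)
print(comp.pop_ready(now_us=550))   # ['sync-1']: sync-2 is still being held
```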
Figure 6 Time Counters
More details of an example of the timing generated by the timer will now be discussed. Suppose the delay sensitive packets have a transmission rate of α packets per second; this means a period between transmissions of N = (1000 / α) ms, as shown in figure 6 (e.g. 30 packets per second gives N ≈ 33 ms). If the operational clock frequency used by the scheduler is equal to β MHz, then there are M = N × β × 10³ (i.e. β × 10⁶ / α) clock edges every N ms.
A simple counter can be set to M-1 at every high priority packet issue and decremented at every clock edge so as to reach zero at the next high priority packet issue, as shown in figure 6. Now, if the port transmission speed is γ Mbit/s, then at every change of the counter state it is easy to calculate the maximum packet size that can occupy the transmission circuits (that can be scheduled) without impact on the next delay sensitive packet (e.g. PTP) transmission.
MPS (Maximum packet size) = (M × γ) / (β × 10³) bits, or (M × γ) / (8 × β × 10³) bytes
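Purely to make the arithmetic concrete, the sketch below evaluates N, M and MPS using the formulas exactly as stated above; the numeric values chosen for α, β and γ are arbitrary example assumptions, not values from the text.

```python
# Minimal sketch of the timing quantities, applying the formulas as given above;
# alpha, beta and gamma are arbitrary example assumptions.

alpha = 30      # delay sensitive packets per second
beta = 125      # scheduler clock frequency in MHz
gamma = 1000    # port transmission speed in Mbit/s

N_ms = 1000 / alpha                  # period between delay sensitive packets, ms
M = alpha * beta * 10**3             # clock edges per period, as stated in the text
mps_bits = (M * gamma) / (beta * 10**3)
mps_bytes = mps_bits / 8

print(f"N = {N_ms:.1f} ms, M = {M}, MPS = {mps_bits:.0f} bits ({mps_bytes:.0f} bytes)")
```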
A small FIFO can be associated with every queue, and in this FIFO the enqueued packet lengths can be stored. The size of this FIFO will be P bytes (the size of the associated queue in bytes) divided by L bytes (the minimum packet size). The word width used by this FIFO will be 14 bits for Ethernet applications (lengths from 64 bytes to 9600 bytes), or can easily be calculated for other protocols according to the following formula:
Word_bits = ⌈log₂(Maximum_Packet_Length)⌉
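A short sketch of this FIFO sizing follows; the queue size and packet length bounds are example assumptions (the 9600 byte maximum matches the Ethernet case mentioned above).

```python
import math

# Sketch of the length-FIFO sizing described above; the queue size and packet
# bounds are example assumptions, not values taken from the text.

queue_size_bytes = 128 * 1024   # P: size of the associated packet queue
min_packet_bytes = 64           # L: minimum packet size
max_packet_bytes = 9600         # maximum packet length (jumbo Ethernet frame)

fifo_depth = queue_size_bytes // min_packet_bytes   # worst case: all minimum-size packets
word_bits = math.ceil(math.log2(max_packet_bytes))  # width needed to store one length value

print(fifo_depth, word_bits)   # e.g. 2048 entries, 14 bits per entry
```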
Figs 7, 8 DRR example
Figure 7 shows an example of queues and values used by a deficit round robin algorithm. This can run as usual, as described above, but every time a packet becomes a candidate to be selected for transmission the scheduler has to check whether its length is less than or equal to MPS. Otherwise the packet is not chosen and the choice passes to another one according to the scheduler logic. There are four queues q1-q4 in the example illustrated. Initial deficits and end deficits for each queue are shown, together with a packet sent value. For example, if MPS is 300 and q1 has a deficit counter bigger than 500, the scheduler will use it as the next packet (candidate), but after the length check it will decide to postpone its transmission and will go to q3, since q2 is empty. If queue number 3 has a deficit counter larger than 200 then the scheduler will select it as a candidate and, after the length check, will begin its transmission.
Figure 8 shows an example of the packet lengths stored for each of the queues q1-q4, for use in the comparison of length to MPS. For q1 the length FIFO has values 500 then 700. The length FIFO for q2 is empty since q2 is empty. The length FIFO for q3 has values 200 and 500, and the length FIFO for q4 has one length value of 400.
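The following sketch reproduces one scheduling decision of this modified deficit round robin pass. The queue contents follow figures 7 and 8 and the MPS value of 300 follows the worked example, while the deficit values are merely consistent with it (q1 above 500, q3 above 200) and otherwise assumed; the full deficit bookkeeping (adding the quantum once per round, serving several packets while the deficit lasts) is deliberately left out.

```python
from collections import deque

MPS = 300  # maximum packet size (bytes) that still fits before the free interval

queues = {
    "q1": deque([500, 700]),   # packet lengths waiting in each queue (figure 8)
    "q2": deque([]),
    "q3": deque([200, 500]),
    "q4": deque([400]),
}
deficit = {"q1": 600, "q2": 0, "q3": 250, "q4": 100}  # assumed, consistent with the example

def pick_next():
    """Visit queues in round robin order; send the first head packet that both
    fits its deficit and fits before the free interval (length <= MPS)."""
    for q in ("q1", "q2", "q3", "q4"):
        if not queues[q]:
            continue                 # empty queue, nothing to send
        length = queues[q][0]
        if length > deficit[q]:
            continue                 # not eligible yet under plain DRR
        if length > MPS:
            continue                 # would overrun the free interval: postpone, deficit untouched
        deficit[q] -= length
        return q, queues[q].popleft()
    return None                      # nothing fits: wait for the free interval

print(pick_next())   # -> ('q3', 200): q1's 500-byte head is postponed, q2 is empty
```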
It can now be seen that the fairness behaviour of the scheduler is not changed (with only small changes in the packet scheduling order around the very high priority packet issue but with no change in the overall traffic scheduled per queue).
At the same time the bandwidth waste is minimized because, when a candidate is stopped and forced to wait, the scheduler serves smaller eligible packets in order to use the gap as much as possible.
Very small delays are added to the postponed next packet (and correspondingly small delays are removed for the subsequent packet, which may take advantage of the gap if it is short enough), but the overall mean delay will be negligible. On the other hand, the scheduling delay added to the very delay sensitive packets (e.g. PTP) can be reduced to zero, as they never need to wait for the transmission circuits to end the current packet transmission.
WFQ example
The same type of idea can also work with other types of schedulers. For example, in the widely used case of weighted fair queueing schedulers, the scheduler again acts as usual, choosing the minimum packet tag. After a next packet is selected as a candidate for transmission, the scheduler has to check whether its length is less than or equal to MPS. Otherwise the packet is not chosen and the choice passes to another one according to the scheduler logic. Again this operation does not change the packet tag, so it will be the first packet served after the delay sensitive packet transmission. Therefore it is again evident that the fairness behavior of the scheduler is not changed (with only small changes in the packet scheduling order around the very high priority packet issue but with no change in the overall traffic scheduled per queue).
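A corresponding sketch for the weighted fair queueing case is given below; the tags and lengths are illustrative assumptions, and the point is only that a too-long candidate is skipped while keeping its tag, so it is served first after the delay sensitive packet.

```python
# Sketch of the same length check applied to a WFQ selector: the candidate with
# the smallest finish tag is chosen as usual, but a candidate longer than MPS is
# skipped for now and keeps its tag. Tags and lengths are example assumptions.

MPS = 300
# (tag, length) of the head packet of each non-empty queue
heads = {"q1": (17.0, 500), "q3": (18.5, 200), "q4": (19.2, 400)}

def pick_next_wfq():
    for q, (tag, length) in sorted(heads.items(), key=lambda kv: kv[1][0]):
        if length <= MPS:
            return q, tag, length    # transmit now
        # else: postponed, tag left untouched so it stays first in line
    return None

print(pick_next_wfq())   # -> ('q3', 18.5, 200): q1 keeps tag 17.0 for after the free interval
```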
Figures 9, 10 Scheduler according to an embodiment
Figure 9 shows a schematic view of an example of a scheduler and associated queues. Two of the packet queues 1-M are shown, in the form of FIFO 51 and FIFO 52, having packets 1 to S in queue 1 and packets 1 to P in queue M. The packets have different lengths, so the length values are stored in associated length queues; two are shown, length queue 1 in FIFO 76 and length queue M in FIFO 78. New incoming packets and their length values are written into these FIFOs under the control of write enable signals WEQ_1-WEQ_M.
The queues of packets share the same output path, and so a scheduling algorithm selects which of the queues outputs at any time. A processor 72 runs the scheduling algorithm and a controller for approving the output so as to avoid the free intervals. Checking circuitry is provided for generating the timing of the free intervals and checking that there is sufficient time to complete the sending before the start of the next free interval. This circuitry includes a down counter 74 which is fed by a clock and a timing signal in the form of a delay sensitive packet transmission enable signal. This causes the counter to be preset to a value representing the number of clock periods remaining. The clock period corresponds to the time needed to send each byte of a packet. An output of the counter is fed to a comparator 30 which compares the number of clock periods remaining with the byte length of the packet selected by the scheduler, output from the corresponding length queue FIFO. The correct length value is chosen by means of a signal from the processor indicating which queue is selected, fed to a decoder 84. This decoder has connections to the read enable inputs of the length queue FIFOs, and activates one of them to enable the selected FIFO. The FIFO outputs are fed to a multiplexer 82 which is controlled from the decoder to couple the output from the selected length queue FIFO to the comparator 30.
The output of the comparator is fed to the processor, providing a binary packet approval signal indicating whether or not the selected packet is short enough to be transmitted without interfering with the free interval. As described above, the controller uses this to approve the transmission and output an approved queue output select signal to the packet queues. If the packet is too long, the controller can cause the scheduling algorithm to pause until after the free interval, or to delay the selected packet and to continue with a subsequent packet selection step to see if a shorter packet is present which can be transmitted in time before the free interval. Not shown is a control signal to control the output of the delay sensitive packet, which may be controlled by the processor or independently.
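The behaviour of this checking circuitry can be summarised in a short behavioural model; the function and signal names below are illustrative rather than taken from figure 9, and the counter is assumed to count one clock period per byte as described above.

```python
# Behavioural sketch of the checking circuitry of figure 9: the down counter
# holds the number of clock periods left before the next free interval (one
# period per byte) and the comparator asserts the packet approval signal when
# the selected packet's byte length fits. All names here are illustrative.

length_queues = {1: [500, 700], 2: [], 3: [200, 500], 4: [400]}  # length FIFO contents

def packet_approved(counter_value: int, selected_length_bytes: int) -> bool:
    """PKT_APPR: true if the selected packet can finish before the free interval."""
    return selected_length_bytes <= counter_value

def check_selection(queue_select: int, counter_value: int) -> bool:
    # decoder + multiplexer step: read the head of the selected length FIFO
    fifo = length_queues[queue_select]
    return bool(fifo) and packet_approved(counter_value, fifo[0])

print(check_selection(1, 300))   # False: the 500 byte head of queue 1 is too long
print(check_selection(3, 300))   # True: the 200 byte head of queue 3 fits
```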
At every delay sensitive free interval generation, the derivative circuit (DER) applied to the transmission enable signal sets the down counter to the value M−1. Note that the derivative circuit is used on many control signals in figure 9 to cut the respective signal length to a single clock period (the first one). It is used on the write enable signals, on the read enable signals of the length queue FIFOs, and on the delay sensitive packet transmission enable signal.
An example of such a circuit is shown in figure 10. After the scheduler has selected a candidate (e.g. from queue number M), the selection bus is used to generate the right FIFO read enable signal and to correctly move the FIFO output selector as described.
Therefore the comparator will be able to compare the current length with the MPS value (directly resulting from the M value; the relative logic can be included in the comparator circuit) and to assert (or not) the packet approval signal (labeled PKT APPR).
Note that the new incoming packets (typically coming from a classification stage in a previous network processor) will arrive at the scheduler enqueuing circuits with an indication of the required queue number (according to the previous flow identification resulting from the classification) and the relative packet length (to allow the right number of bytes to be enqueued). As a consequence it is very easy to use the enqueuing write enable signals (after the derivative circuit described above) to write the packet length indications into the length queue FIFOs, which effectively need only store the packet lengths. Note also that after every effective packet transmission the scheduler will have to update its queues, removing the transmitted packet payload and its length from the length queue FIFOs, but this can be implemented in various ways without difficulty for those skilled in the art.
Figure 10 shows a typical derivative circuit DER for use in figure 9. There is a first latch 86, a second latch 88, and an output gate 92. The output of the first latch is fed to the input of the second latch, clocked by the same clock signal. The output of the second latch is fed to an inverting input of the output gate. A second input of the output gate is fed by the input of the first latch.
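A behavioural model of such a pulse generator is sketched below. Note that the variant modelled here takes the output gate's non-inverting input from the first latch output rather than from the raw input, since that arrangement yields the single clock period pulse described above; figure 10 itself is not reproduced here, so this choice is an assumption.

```python
# Behavioural sketch of a rising-edge pulse generator of the kind figure 10
# describes: two clocked latches in series and an output gate whose inverting
# input comes from the second latch. The gate's other input is taken here from
# the first latch output (an assumption), giving a one-clock pulse per edge.

def der(samples):
    """Given the input signal sampled once per clock, return the pulsed output."""
    q1 = q2 = 0
    out = []
    for d in samples:
        out.append(q1 & (1 - q2))   # output gate: q1 AND NOT q2
        q1, q2 = d, q1              # shift through the two latches on the clock edge
    return out

print(der([0, 0, 1, 1, 1, 0, 0]))   # -> [0, 0, 0, 1, 0, 0, 0]: single-clock pulse
```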
Figure 11, node view with option of delay compensation
Figure 11 shows a view of a node according to an embodiment. A switch 300 has a number of input queues 310 coupled at one side. At an exit or output side, a number of queues 50 are provided, coupled to a shared output path. A scheduler 70 as described above can be provided as an egress scheduler to control the outputs of the queues. The scheduler can also be applied to ingress queues if desired. A delay variation compensator 320 may be provided on some or all of the output paths, to compensate for delay variations through the earlier parts of the node such as the input queues and the switch. This can be implemented in various ways. One way is to measure the delay for each packet separately and then add a complementary delay so that the packets of that flow all have the same delay.
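One possible form of that per-packet compensation is sketched below; the target delay and timestamps are illustrative assumptions, not values from the text.

```python
# Minimal sketch of the per-packet compensation mentioned above: the residence
# time of each delay sensitive packet through the node is measured and padded
# up to a common target, so all packets of the flow leave with the same delay.

TARGET_DELAY_US = 150.0   # assumed to exceed the worst-case delay through the node

def complementary_delay(ingress_ts_us: float, egress_ready_ts_us: float) -> float:
    """Extra hold time so that the total node delay equals TARGET_DELAY_US."""
    measured = egress_ready_ts_us - ingress_ts_us
    return max(0.0, TARGET_DELAY_US - measured)

print(complementary_delay(0.0, 90.0))    # packet crossed the node in 90 us -> hold 60 us more
print(complementary_delay(0.0, 140.0))   # slower packet -> hold only 10 us more
```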
Such harmonising of delay might be undone if the egress scheduler is unable to avoid further delay variation. Hence there can be great value in adding a more intelligent egress scheduler as described above, to avoid undoing the previous work of compensating the variable packet delay, which adds considerable complexity.
Applications and consequences
Applications for these intelligent schedulers include improving the transmission of video streams. In this case reducing the delay variation of the video packets can improve the final video quality, or allow the stream to pass through a much higher number of nodes in the network for a given quality level.
As has been discussed, there are some cases where delay sensitive packets (e.g. for synchronization purposes) should be issued without a risk of having to wait for other packets to complete their transmission (a natural consequence of the behavior of the most widely used output schedulers). Other current solutions (e.g. working "on the flow" after the schedulers) are complex, expensive and barely scalable because they work at a port level. The embodiments discussed can provide an improvement with little modification to the most widely adopted schedulers. This modification is simple enough to be cost effective (possibly a simple FPGA implementation), does not affect the native scheduler's good properties (fairness), and can provide very good scalability (a single circuit for all ports). Other variations and embodiments can be envisaged within the claims.

Claims
1. A method of scheduling use of an output path shared by two or more queues of variable length packets at a node of a telecommunications network, the method having the steps of:
generating a timing of free intervals during which intervals the queued packets are not to use the shared output path, so that it is free for use by delay sensitive packets,
before a start of a next of the free intervals, using a scheduling algorithm to determine which of the two or more queues supplies a next packet for output,
determining before outputting the next packet, whether there is enough time to complete the outputting of the next packet before the start of the next free interval,
if there is enough time, outputting the next packet on the output path and otherwise delaying the next packet to be output after the free interval without distorting the sharing of the output path between the different queues set by the scheduling algorithm, and
during the free interval using the shared output path for outputting the delay sensitive packet.
2. The method of claim 1, and having the steps, if the next packet is delayed, of determining which of the queues supplies a subsequent packet using the scheduling algorithm, and determining before outputting the subsequent packet, whether there is enough time to complete the outputting of the subsequent packet before the start of the free interval.
3. The method of claim 1 or 2, having the step of using a counter to provide an output representing the time remaining to the next free interval, and the step of determining whether there is enough time comprises comparing a length of the selected packet with the counter output.
4. The method of any preceding claim having the step of setting the times of the free intervals according to information about the frequency of the delay sensitive packets received by the node at the time of setting up a flow of the delay sensitive packets.
5. The method of any preceding claim, the delay sensitive packets belonging to a constant bit rate flow.
6. The method of any preceding claim, the selecting of the next packet involving a deficit round robin scheduling step.
7. The method of any preceding claim, the selecting of the next packet involving a weighted fair queueing step.
8. The method of any preceding claim, the queues being at an exit side of a node.
9. A computer program stored on a machine readable medium and having instructions which, when executed by a processor, cause the processor to carry out the steps of any of claims 1 to 8.
10. A scheduler for scheduling use of an output path shared by two or more queues of variable length packets, at a node of a telecommunications network, the scheduler having:
a timer for generating a timing of free intervals during which intervals the queued packets are not to use the shared output path, so that it is free for delay sensitive packets to use,
a selector arranged to use a scheduling algorithm to select, before a start of a next of the free intervals, which of the queues is to supply a next packet for output on the shared output path,
a comparator to determine whether there is enough time to complete the outputting of the selected packet before the start of the next free interval, and
a controller to output the next packet on the shared output path if there is enough time, and otherwise delay the next packet to be output after the free interval without distorting the sharing of the output path between the different queues set by the scheduling algorithm, and during the free intervals to use the shared output path for outputting the delay sensitive packets.
11. The scheduler of claim 10 in the form of one or more integrated circuits.
12. The scheduler of claim 10 or 11, having a counter to provide an output representing the time remaining to the next free interval, and the comparator being arranged to compare a length of the selected packet with the counter output to determine whether there is enough time.
13. The scheduler of claim 12, having length buffers to store a length value for each of the packets in the buffers, and to provide the length of the selected packet to the comparator.
14. The scheduler of any of claims 10 to 13 and arranged to set the times of the free intervals according to information about the frequency of the delay sensitive packets received by the node at the time of setting up a flow of the delay sensitive packets.
15. The scheduler of any of claims 10 to 14, the delay sensitive packets belonging to a constant bit rate flow.
16. The scheduler of any of claims 10 to 15, the selector being arranged to select the next packet using a deficit round robin schedule.
17. The scheduler of any of claims 10 to 15, the selector being arranged to select the next packet using a weighted fair queueing schedule.
18. A node for a telecommunications network having a packet switch and the scheduler of any of claims 10 to 17.
19. The node of claim 18, the one or more queues being at an exit side of the packet switch.
20. The node of claim 18 or 19, and having a packet delay variation compensator coupled after the packet switch and before the scheduler, to compensate for variations in packet delays between different ones of the delay sensitive packets.
PCT/EP2011/056010 2011-03-01 2011-04-15 Scheduling for delay sensitive packets WO2012116761A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP11156345 2011-03-01
EP11156345.8 2011-03-01

Publications (1)

Publication Number Publication Date
WO2012116761A1 true WO2012116761A1 (en) 2012-09-07

Family

ID=44534281

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2011/056010 WO2012116761A1 (en) 2011-03-01 2011-04-15 Scheduling for delay sensitive packets

Country Status (1)

Country Link
WO (1) WO2012116761A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6292484B1 (en) * 1997-06-11 2001-09-18 Data Race, Inc. System and method for low overhead multiplexing of real-time and non-real-time data
US6570883B1 (en) * 1999-08-28 2003-05-27 Hsiao-Tung Wong Packet scheduling using dual weight single priority queue
US6570849B1 (en) * 1999-10-15 2003-05-27 Tropic Networks Inc. TDM-quality voice over packet
EP1303083A1 (en) * 2000-06-29 2003-04-16 NEC Corporation Packet scheduling apparatus
US20040117577A1 (en) * 2002-12-13 2004-06-17 Equator Technologies, Inc. Method and apparatus for scheduling real-time and non-real-time access to a shared resource

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
"Asymmetry in Propagation Times of DSL Systems", TEMPORARY DOCUMENT 1 OGS-044, 22 March 2010 (2010-03-22)
"G.vdsl: A Method for Accurate Distribution of Time-of-Day over VDSL2 Links", TEMPORARY DOCUMENT 10HA-034, 22 March 2010 (2010-03-22)
"GPON: Transmission convergence layer specification", ITU-T RECOMMENDATION G.984.3, March 2008 (2008-03-01)
"Optical Time Domain Reflectometry in a Single Mode Fiber", IEEE JQE, vol. QE17, no. 6, June 1981 (1981-06-01)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105453480A (en) * 2013-09-30 2016-03-30 西门子公司 A merging unit
EP3017560A4 (en) * 2013-09-30 2016-11-16 Siemens Ag A merging unit
CN113660144A (en) * 2021-09-15 2021-11-16 佳缘科技股份有限公司 Network loopback time-based springboard detection method and system thereof

Similar Documents

Publication Publication Date Title
US5831971A (en) Method for leaky bucket traffic shaping using fair queueing collision arbitration
KR100431191B1 (en) An apparatus and method for scheduling packets by using a round robin based on credit
EP2302843B1 (en) Method and device for packet scheduling
US9722942B2 (en) Communication device and packet scheduling method
JPH10200549A (en) Cell scheduling device
US8681609B2 (en) Method to schedule multiple traffic flows through packet-switched routers with near-minimal queue sizes
JPH10200550A (en) Cell scheduling method and its device
US9439102B2 (en) Transmitting apparatus, transmission method, and transmission system
EP3032785B1 (en) Transport method in a communication network
CN112866134A (en) Method for sending message, first network equipment and computer readable storage medium
WO2012116761A1 (en) Scheduling for delay sensitive packets
Dwekat et al. A practical fair queuing scheduler: Simplification through quantization
KR100739493B1 (en) Packet traffic management system and method for developing the quality of service for ip network
Tong et al. Quantum varying deficit round robin scheduling over priority queues
Lenzini et al. Performance analysis of modified deficit round robin schedulers
Hong et al. Fair scheduling on parallel bonded channels with intersecting bonding groups
EP4307641A1 (en) Guaranteed-latency networking
Wu et al. High-performance packet scheduling to provide relative delay differentiation in future high-speed networks
Kos et al. Sub-Critical Deficit Round Robin
Wu Link-sharing method for ABR/UBR services in ATM networks
Wu et al. Delivering relative differentiated services in future high-speed networks using hierarchical dynamic deficit round robin
Elnaka et al. Fair and Delay Adaptive Scheduler (FDAS) preliminary modeling and optimization
Ahmadi Delay Control Using Media Sensitivity in Multimedia Environments
SWITCH A Combined Low Latency and Weighted Fair Queuing Based Scheduling of an Input-Queued Switch

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11719491

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11719491

Country of ref document: EP

Kind code of ref document: A1